Deep Instinct nabs $43M for a deep-learning cybersecurity solution that can suss an attack before it happens – TechCrunch

The worlds of artificial intelligence and cybersecurity have become deeply entwined in recent years, as organizations work to keep up with, and ideally block, increasingly sophisticated malicious hackers. Today, a startup that's built a deep learning solution that it claims can identify and stop even viruses that have yet to be identified has raised a large round of funding from some big strategic partners.

Deep Instinct, which uses deep learning both to identify and stop known viruses and other hacking techniques and to flag completely new attack methods that have never been seen before, has raised $43 million in a Series C.

The funding is being led by Millennium New Horizons, with Unbound (a London-based investment firm founded by Shravin Mittal), LG and Nvidia all participating. The investment brings the total raised by Deep Instinct to $100 million, with HP and Samsung among its previous backers. The tech companies are all strategic investors, in that (as in the case of HP) they bundle and resell Deep Instinct's solutions, or use them directly in their own services.

The Israel-based company is not disclosing its valuation, but notably, it is already profitable.

Targeting as-yet-unknown viruses is becoming a more important priority as cybercrime grows. CEO and founder Guy Caspi notes that more than 350,000 new machine-generated malware samples are created every day, with increasingly sophisticated evasion techniques such as zero-days and APTs (advanced persistent threats). "Nearly two-thirds of enterprises have been compromised in the past year by new and unknown malware attacks originating at endpoints, representing a 20% increase from the previous year," he added. And zero-day attacks are now four times more likely to compromise organizations. Most cyber solutions on the market can't protect against these new types of attacks and have therefore shifted to a detect-and-respond approach, he said, which by design means that they assume a breach will happen.

While there is already a profusion of AI-based cybersecurity tools on the market today, Caspi notes that Deep Instinct takes a critically different approach because of its use of deep neural network algorithms, which are essentially set up to mimic how a human brain thinks.

"Deep Instinct is the first and currently the only company to apply end-to-end deep learning to cybersecurity," he said in an interview. In his view, this provides a more advanced form of threat protection than the traditional machine learning solutions common in the market, which rely on feature extraction determined by humans. That means they are limited by the knowledge and experience of the security expert, and can only analyze a very small part of the available data (less than 2%, he says). As a result, traditional machine learning-based solutions and other forms of AI have low detection rates for new, unseen malware and generate high false-positive rates. There's been a growing body of research that supports this idea, although we've not seen many deep learning cybersecurity solutions emerge as a result (not yet, anyway).

He adds that deep learning is the only AI-based autonomous system that can learn from any raw data, as it's not limited by an expert's technological knowledge. In other words, it's not based just on what a human inputs into the algorithm, but on huge swathes of big data, sourced from servers, mobile devices and other endpoints, that are fed in and automatically read by the system.
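The distinction Caspi draws, hand-engineered features versus learning directly from raw inputs, can be sketched with a deliberately tiny synthetic example. This is illustrative only and is in no way Deep Instinct's actual model: the "files", the planted byte pattern, and the simple classifier are all invented for the toy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "files" as raw byte sequences. Benign files only contain low bytes;
# "malicious" ones carry a planted marker byte at a fixed offset (a toy
# stand-in for a byte-level pattern no analyst hand-coded as a feature).
def make_file(malicious):
    data = rng.integers(0, 128, size=64).astype(float)
    if malicious:
        data[0] = 255.0
    return data / 255.0  # the model sees scaled raw bytes, not features

X = np.stack([make_file(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)], dtype=float)

# Logistic regression trained end to end on the raw bytes: no hand-picked
# inputs such as file size, header flags, or API-call counts.
w, b, lr = np.zeros(64), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

preds = ((X @ w + b) > 0).astype(float)
accuracy = float(np.mean(preds == y))
```

Because the discriminative pattern sits in the raw bytes themselves, the model finds it without anyone defining a feature for it; real malware classifiers face the same idea at vastly larger scale and with deep, not linear, models.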

This also means that the system can be used, in turn, across a number of different endpoints. Many machine learning-based cybersecurity solutions, he notes, are geared at Windows environments. That is somewhat logical, given that Windows and Android account for the vast majority of attacks these days, but cross-OS attacks are now on the rise.

While Deep Instinct specializes in preventing first-seen, unknown cyberattacks like APTs and zero-day attacks, Caspi notes that in the past year there has been a rise in both the volume and the impact of cyberattacks covering other areas. In 2019, Deep Instinct saw an increase in spyware and ransomware, on top of an increase in the sophistication of the attacks being used, specifically with more fileless attacks using scripts and PowerShell, living-off-the-land attacks and the use of weaponized documents like Microsoft Office files and PDFs. These sit alongside big malware attacks like Emotet, TrickBot, New ServeHelper and Legion Loader.

Today the company sells its services both directly and via partners (like HP), and it's mainly focused on enterprise users. But since there is very little in the way of technical implementation ("Our solution is mostly autonomous and all processes are automated [and] the deep learning brain is handling most of the security," Caspi said), the longer-term plan is to build a version of the product that consumers could adopt, too.

With much antivirus software often proving futile in protecting users against attacks these days, that could come as a welcome addition to the market, however crowded it already is.

"There is no shortage of cybersecurity software providers, yet no company aside from Deep Instinct has figured out how to apply deep learning to automate malware analysis," said Ray Cheng, partner at Millennium New Horizons, in a statement. "What excites us most about Deep Instinct is its proven ability to use its proprietary neural network to effectively detect viruses and malware no other software can catch. That genuine protection in an age of escalating threats, without the need for exorbitantly expensive or complicated systems, is a paradigm change."


How Machine Learning Will Reshape The Future Of Investment Management – Forbes India

The 2020 outlook for asset management re-affirms the impact of globalization and the outperformance of private equity. While the developed world's economy has sent mixed signals, all eyes are now on Asia, and especially India, to drive the next phase of growth. The goal is to provide investment solutions for its mix of young and senior populations. Its diversity (cultural, economic, regional and regulatory) will pose the next challenge.

The application of data science and machine learning has delivered value for portfolio managers through quick and uniform decision-making. Strategic beta funds, which have consistently generated added value, rely heavily on the robustness of their portfolio-construction models, which are intensely data-driven. Deploying machine learning algorithms helps assess the creditworthiness of firms and individuals for lending and borrowing. Data science and machine learning solutions eliminate human bias and calculation errors while evaluating investments in an optimal period.

Investment management is justified as an industry only to the extent that it can demonstrate a capacity to add value through the design of dedicated investor-centric investment solutions, as opposed to one-size-fits-all manager-centric investment products. After several decades of relative inertia, the much-needed move towards investment solutions has been greatly facilitated by a true industrial revolution taking place in investment management, triggered by profound paradigm changes with the emergence of novel approaches such as factor investing, liability-driven and goal-based investing, as well as sustainable investing. Data science is expected to play an increasing role in these transformations.

This trend poses a critical challenge to global academic institutions: educating a new breed of young professionals and equipping them with the right skills to address the situation and to seize the fast-developing job opportunities in this field. Continuous education provides the opportunity to meet the new challenges of this ever-changing world, especially in the investment industry.

As recently emphasized by our colleague Vijay Vaidyanathan, CEO of Optimal Asset Management, former EDHEC Business School PhD student, and online course instructor at EDHEC Business School, our financial well-being is second only to our physical well-being, and one of the key challenges we face is to enhance financial expertise. To achieve this, we cannot limit ourselves to the relatively small subset of the population who can afford to invest the significant time and expense of attending a formal, full-time degree programme on a university campus. Therefore, we must find ways to elevate the quality of professional financial education to ensure that all asset managers and asset owners are fully equipped to make intelligent and well-informed investment decisions.

Data science applied to asset management, and education in the field, is expected to affect not only investment professionals but also individuals. On this topic, we would like to share insights from Professor John Mulvey of Princeton University, who is also one of EDHEC's online course instructors. John believes that machine learning applied to investment management is a real opportunity to assist individuals with their financial affairs in an integrated manner. Most people are faced with long-term critical decisions about saving, spending, and investing to achieve a wide variety of goals.

These decisions are often made without much professional guidance (except for wealthier clients), and without much technical training. Current personalized advisors are reasonable initial steps. Much more can be done in this area with modern data science and decision-making tools. Plus, younger people are more willing to trust fully automated computational systems. This domain is one of the most relevant and significant areas of development for future investment management.

By Nilesh Gaikwad, EDHEC Business School country manager in India, and Professor Lionel Martellini, EDHEC-Risk Institute Director.


How AI Is Tracking the Coronavirus Outbreak – WIRED

With the coronavirus growing more deadly in China, artificial intelligence researchers are applying machine-learning techniques to social media, web, and other data for subtle signs that the disease may be spreading elsewhere.

The new virus emerged in Wuhan, China, in December, triggering a global health emergency. It remains uncertain how deadly or contagious the virus is, and how widely it might have already spread. Infections and deaths continue to rise. More than 31,000 people have now contracted the disease in China, and 630 people have died, according to figures released by authorities there Friday.

John Brownstein, chief innovation officer at Harvard Medical School and an expert on mining social media information for health trends, is part of an international team using machine learning to comb through social media posts, news reports, data from official public health channels, and information supplied by doctors for warning signs the virus is taking hold in countries outside of China.

The program is looking for social media posts that mention specific symptoms, like respiratory problems and fever, from a geographic area where doctors have reported potential cases. Natural language processing is used to parse the text posted on social media, for example, to distinguish between someone discussing the news and someone complaining about how they feel. A company called BlueDot used a similar approach, minus the social media sources, to spot the coronavirus in late December, before Chinese authorities acknowledged the emergency.
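As a rough illustration of the kind of filtering described, here is a hypothetical keyword heuristic for separating first-person symptom complaints from news chatter. The word lists and scoring rule are invented for the sketch; the team's actual pipeline uses trained NLP models, not keyword matching.

```python
# Invented word lists for the toy triage rule below.
SYMPTOMS = {"fever", "cough", "coughing", "breathless", "chills"}
FIRST_PERSON = {"i", "im", "my", "me", "ive"}
NEWS_MARKERS = {"officials", "reported", "outbreak", "cases", "authorities"}

def flag_post(text):
    """Flag posts that mention a symptom and read as a personal complaint
    rather than a discussion of the news."""
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    mentions_symptom = bool(words & SYMPTOMS)
    personal = len(words & FIRST_PERSON)
    newsy = len(words & NEWS_MARKERS)
    return mentions_symptom and personal > newsy
```

Under this rule, "My fever is worse and I cannot stop coughing" is flagged, while "Officials reported 40 new fever cases in the outbreak" is not, which is exactly the news-versus-feeling distinction the article describes, albeit in a far cruder form.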

"We are moving to surveillance efforts in the US," Brownstein says. It is critical to determine where the virus may surface if the authorities are to allocate resources and block its spread effectively. "We're trying to understand what's happening in the population at large," he says.

The rate of new infections has slowed slightly in recent days, from 3,900 new cases on Wednesday to 3,700 cases on Thursday to 3,200 cases on Friday, according to the World Health Organization. Yet it isn't clear if the spread is really slowing or if new infections are simply becoming more difficult to track.

So far, other countries have reported far fewer cases of coronavirus. But there is still widespread concern about the virus spreading. The US has imposed a travel ban on China, even though experts question the effectiveness and ethics of such a move. Researchers at Johns Hopkins University have created a visualization of the virus's progress around the world based on official numbers and confirmed cases.

Health experts did not have access to such quantities of social, web, and mobile data when seeking to track previous outbreaks such as severe acute respiratory syndrome (SARS). But finding signs of the new virus in a vast soup of speculation, rumor, and posts about ordinary cold and flu symptoms is a formidable challenge. "The models have to be retrained to think about the terms people will use and the slightly different symptom set," Brownstein says.

Even so, the approach has proven capable of spotting a coronavirus needle in a haystack of big data. Brownstein says colleagues tracking Chinese social media and news sources were alerted to a cluster of reports about a flu-like outbreak on December 30. This was shared with the WHO, but it took time to confirm the seriousness of the situation.

Beyond identifying new cases, Brownstein says the technique could help experts learn how the virus behaves. It may be possible to determine the age, gender, and location of those most at risk more quickly than using official medical sources.

Alessandro Vespignani, a professor at Northeastern University who specializes in modeling contagion in large populations, says it will be particularly challenging to identify new instances of the coronavirus from social media posts, even using the most advanced AI tools, because its characteristics still aren't entirely clear. "It's something new. We don't have historical data," Vespignani says. "There are very few cases in the US, and most of the activity is driven by the media, by people's curiosity."


From models of galaxies to atoms, simple AI shortcuts speed up simulations by billions of times – Science Magazine

Emulators speed up simulations, such as this NASA aerosol model that shows soot from fires in Australia.

By Matthew Hutson, Feb. 12, 2020, 2:35 PM

Modeling immensely complex natural phenomena, such as how subatomic particles interact or how atmospheric haze affects climate, can take many hours on even the fastest supercomputers. Emulators, algorithms that quickly approximate these detailed simulations, offer a shortcut. Now, work posted online shows how artificial intelligence (AI) can easily produce accurate emulators that can accelerate simulations across all of science by billions of times.

"This is a big deal," says Donald Lucas, who runs climate simulations at Lawrence Livermore National Laboratory and was not involved in the work. He says the new system automatically creates emulators that work better and faster than those his team designs and trains, usually by hand. The new emulators could be used to improve the models they mimic and help scientists make the best of their time at experimental facilities. If the work stands up to peer review, Lucas says, "it would change things in a big way."

A typical computer simulation might calculate, at each time step, how physical forces affect atoms, clouds, galaxies, whatever is being modeled. Emulators, based on a form of AI called machine learning, skip the laborious reproduction of nature. Fed with the inputs and outputs of the full simulation, emulators look for patterns and learn to guess what the simulation would do with new inputs. But creating training data for them requires running the full simulation many times: the very thing the emulator is meant to avoid.
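The emulator idea can be sketched in a few lines: run the expensive model to produce input/output pairs, fit a cheap surrogate to those pairs, then query the surrogate instead of rerunning the simulation. The "simulation" below is a stand-in function invented for the sketch, not any of the simulations in the study, and a polynomial fit stands in for the neural networks the researchers actually use.

```python
import numpy as np

# Stand-in for an expensive simulation: pretend each call takes hours.
def simulation(x):
    return np.sin(2 * x) + 0.5 * x ** 2

rng = np.random.default_rng(1)
x_train = rng.uniform(-2, 2, 200)     # the costly full-simulation runs
y_train = simulation(x_train)

# Emulator: a cheap model fitted to the recorded input/output pairs.
emulator = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# The emulator now answers new queries without touching the simulation.
x_new = np.linspace(-1.5, 1.5, 101)
max_err = float(np.max(np.abs(emulator(x_new) - simulation(x_new))))
```

The tension the article points out shows up even here: the fit is only as good as the 200 "expensive" runs used to train it, which is exactly the data-collection cost DENSE is designed to cut.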

The new emulators are based on neural networks, machine learning systems inspired by the brain's wiring, and need far less training. Neural networks consist of simple computing elements that link into circuitry tailored to particular tasks. Normally, the connection strengths evolve through training. But with a technique called neural architecture search, the most data-efficient wiring pattern for a given task can be identified.

The technique, called Deep Emulator Network Search (DENSE), relies on a general neural architecture search co-developed by Melody Guan, a computer scientist at Stanford University. It randomly inserts layers of computation between the network's input and output, and tests and trains the resulting wiring with the limited data. If an added layer enhances performance, it's more likely to be included in future variations. Repeating the process improves the emulator. Guan says it's exciting to see her work used toward scientific discovery. Muhammad Kasim, a physicist at the University of Oxford who led the study, which was posted on the preprint server arXiv in January, says his team built on Guan's work because it balanced accuracy and efficiency.
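A stripped-down stand-in for this kind of search, randomly proposing candidate architectures, scoring each on held-out data, and keeping the best, might look like the following. The single "width" knob and the random-feature training are simplifications invented for the sketch; DENSE searches over inserted layers and trains networks with backpropagation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data standing in for scarce simulation input/output pairs.
X = rng.uniform(-1, 1, size=(150, 2))
y = np.sin(2 * X[:, 0]) * np.cos(2 * X[:, 1])
X_tr, y_tr = X[:100], y[:100]
X_val, y_val = X[100:], y[100:]

def fit_random_net(width):
    """One-hidden-layer net: random tanh features plus a least-squares
    readout. Returns validation mean squared error."""
    W = rng.normal(0, 2.0, (2, width))
    b = rng.normal(0, 1.0, width)
    H_tr = np.tanh(X_tr @ W + b)
    coef, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)
    H_val = np.tanh(X_val @ W + b)
    return float(np.mean((H_val @ coef - y_val) ** 2))

# Architecture search over one knob: propose candidate widths, score each
# on held-out data, and keep whichever generalizes best.
candidates = [2, 4, 8, 16, 32, 64]
scores = {w: fit_random_net(w) for w in candidates}
best_width = min(scores, key=scores.get)
```

Scoring candidates on held-out data rather than training loss is the crucial design choice: it rewards wiring patterns that generalize from limited data, which is the property the article credits for DENSE's data efficiency.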

The researchers used DENSE to develop emulators for 10 simulations in physics, astronomy, geology, and climate science. One simulation, for example, models the way soot and other atmospheric aerosols reflect and absorb sunlight, affecting the global climate. It can take a thousand computer-hours to run, so Duncan Watson-Parris, an atmospheric physicist at Oxford and study co-author, sometimes uses a machine learning emulator. But, he says, it's tricky to set up, and it can't produce high-resolution outputs, no matter how much data you give it.

The emulators that DENSE created, in contrast, excelled despite the lack of data. When they were turbocharged with specialized graphical processing chips, they were between about 100,000 and 2 billion times faster than their simulations. That speedup isn't unusual for an emulator, but these were highly accurate: In one comparison, an astronomy emulator's results were more than 99.9% identical to the results of the full simulation, and across the 10 simulations the neural network emulators were far better than conventional ones. Kasim says he thought DENSE would need tens of thousands of training examples per simulation to achieve these levels of accuracy. In most cases, it used a few thousand, and in the aerosol case only a few dozen.

"It's a really cool result," said Laurence Perreault-Levasseur, an astrophysicist at the University of Montreal who simulates galaxies whose light has been lensed by the gravity of other galaxies. "It's very impressive that this same methodology can be applied for these different problems, and that they can manage to train it with so few examples."

Lucas says the DENSE emulators, on top of being fast and accurate, have another powerful application. They can solve inverse problems: using the emulator to identify the model parameters that best predict given outputs. These parameters could then be used to improve full simulations.

Kasim says DENSE could even enable researchers to interpret data on the fly. His team studies the behavior of plasma pushed to extreme conditions by a giant x-ray laser at Stanford, where time is precious. Analyzing their data in real time, modeling, for instance, a plasma's temperature and density, is impossible, because the needed simulations can take days to run, longer than the time the researchers have on the laser. But a DENSE emulator could interpret the data fast enough to modify the experiment, he says. "Hopefully in the future we can do on-the-spot analysis."


Machine Learning Market 2020 Booming by Size, Revenue, Trend and Top Companies 2026 – Instant Tech News

New Jersey, United States: The report titled "Machine Learning Market Size and Forecast 2026" from Verified Market Research offers its latest analysis of the global Machine Learning market, including comprehensive coverage of subjects like competition, segmentation, regional expansion, and market dynamics. The report sheds light on future trends, key opportunities, top regions, leading segments, the competitive landscape, and several other aspects of the Machine Learning market. Market players can use the report to look into the future of the global Machine Learning market and make important changes to their operating style and marketing tactics to achieve sustained growth.

Global Machine Learning Market was valued at USD 2.03 Billion in 2018 and is projected to reach USD 37.43 Billion by 2026, growing at a CAGR of 43.9% from 2019 to 2026.
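The headline figures are mutually consistent: compounding the 2018 base over the eight years to 2026 reproduces the quoted growth rate.

```python
# CAGR sanity check: $2.03B in 2018 compounding to $37.43B in 2026
# spans eight annual periods.
start, end, years = 2.03, 37.43, 8
cagr = (end / start) ** (1 / years) - 1   # roughly 0.44, matching 43.9%
```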

Get | Download Sample Copy @ https://www.verifiedmarketresearch.com/download-sample/?rid=6487&utm_source=ITN&utm_medium=002

Top 10 Companies in the Global Machine Learning Market Research Report:

Global Machine Learning Market: Competitive Landscape

The competitive landscape of a market explains the strategies incorporated by its key players. Key developments and management changes in recent years are explained through company profiling. This helps readers understand the trends that will accelerate the growth of the market. It also includes the investment strategies, marketing strategies, and product development plans adopted by major players. The market forecast will help readers make better investment decisions.

Global Machine Learning Market: Drivers and Restraints

This section of the report discusses the various drivers and restraints that have shaped the global market. The detailed study of the market's numerous drivers enables readers to get a clear perspective on the market, including the market environment, government policies, product innovations, breakthroughs, and market risks.

The research report also points out the myriad opportunities, challenges, and market barriers present in the global Machine Learning market. The comprehensive nature of this information will help readers determine and plan strategies to benefit from. Restraints, challenges, and market barriers also help the reader understand how a company can protect itself from decline.

Global Machine Learning Market: Segment Analysis

This section of the report covers segmentation by application, product type, and end user. These segmentations help determine which parts of the market will progress more than others. The segmentation analysis provides information about the key elements that make specific segments thrive, helping readers understand which investments are sound. The global Machine Learning market is segmented on the basis of product type, applications, and end users.

Global Machine Learning Market: Regional Analysis

This part of the report includes detailed information on the market in different regions. Each region offers a different scope to the market, as each has different government policies and other factors. The regions included in the report are North America, South America, Europe, Asia Pacific, and the Middle East. Information about the different regions helps the reader understand the global market better.

Ask for Discount @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=6487&utm_source=ITN&utm_medium=002

Table of Content

1 Introduction of Machine Learning Market

1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions

2 Executive Summary

3 Research Methodology of Verified Market Research

3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources

4 Machine Learning Market Outlook

4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Forces Model
4.4 Value Chain Analysis

5 Machine Learning Market, By Deployment Model

5.1 Overview

6 Machine Learning Market, By Solution

6.1 Overview

7 Machine Learning Market, By Vertical

7.1 Overview

8 Machine Learning Market, By Geography

8.1 Overview
8.2 North America
8.2.1 U.S.
8.2.2 Canada
8.2.3 Mexico
8.3 Europe
8.3.1 Germany
8.3.2 U.K.
8.3.3 France
8.3.4 Rest of Europe
8.4 Asia Pacific
8.4.1 China
8.4.2 Japan
8.4.3 India
8.4.4 Rest of Asia Pacific
8.5 Rest of the World
8.5.1 Latin America
8.5.2 Middle East

9 Machine Learning Market Competitive Landscape

9.1 Overview
9.2 Company Market Ranking
9.3 Key Development Strategies

10 Company Profiles

10.1.1 Overview
10.1.2 Financial Performance
10.1.3 Product Outlook
10.1.4 Key Developments

11 Appendix

11.1 Related Research

Request Customization of Report | Complete Report is Available @ https://www.verifiedmarketresearch.com/product/global-machine-learning-market-size-and-forecast-to-2026/?utm_source=ITN&utm_medium=002

Highlights of Report

About Us:

Verified Market Research partners with clients to provide insight into strategic and growth analytics: data that helps achieve business goals and targets. Our core values include trust, integrity, and authenticity for our clients.

Analysts with high expertise in data gathering and governance utilize industry techniques to collate and examine data at all stages. Our analysts are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research reports.

Contact Us:

Mr. Edwyne Fernandes
Call: +1 (650) 781 4080
Email: [emailprotected]

TAGS: Machine Learning Market Size, Machine Learning Market Growth, Machine Learning Market Forecast, Machine Learning Market Analysis, Machine Learning Market Trends, Machine Learning Market


Manchester Digital unveils 72% growth for digital businesses in the region – Education Technology

Almost three-quarters of Greater Manchester's digital tech businesses experienced significant growth in the last 12 months.

New figures from Manchester Digital, the independent trade body for digital and tech businesses in Greater Manchester, have revealed that 72% of businesses in the region have experienced growth in the last year, up from 54% in 2018.

Despite such prosperous results, companies are still calling out for talent, with developer roles standing out as the most in-demand for the seventh consecutive year. The other most sought-after skills in the next three years include data science (15%), UX (15%), and AI and machine learning (11%).

In the race to acquire top talent, almost 25% of Manchester vacancies advertised in the last 12 months remained unfilled, largely due to a lack of suitable candidates and inflated salary demands.

Unveiled at Manchester Digital's annual Skills Festival last week, the Annual Skills Audit, which evaluates data from 250 digital and tech companies and employees across the region, also analysed the various professional pathways into the sector.

The majority (77%) of candidates entering the sector hold a degree of some sort; however, of the respondents who possessed a degree, almost a quarter claimed it was not relevant to tech, while a further 22% reported moving into the sector from another career.

On top of this, almost one in five respondents said they had self-taught or upskilled their way into the sector, a positive step towards boosting diversity in terms of both the people and the experience entering the sector.

"It's positive to see a higher number of businesses reporting growth this year, particularly from SMEs. While the political and economic landscape is by no means settled, it seems that businesses have strategies in place to help them navigate through this uncertainty," said Katie Gallagher, managing director of Manchester Digital.

"What's particularly interesting in this year's audit are the data sets around pathways into the tech sector," added Gallagher. "While a lot of people still do report having degrees, and we'd like to see more variation here in terms of more people taking up apprenticeships, work experience placements and so on, it's interesting to see that a fair percentage are retraining, self-training or moving to the sector with a degree that's not directly related. Only by creating a talent pool from a wide and diverse range of people and backgrounds can we ensure that the sector continues to grow and thrive sustainably."

When asked what they liked about working for their current employer, employees across the region mentioned flexible work as the number one perk they value (40%). Career progression was also a crucial factor to those aged 18-21, with these respondents also identifying brand prestige as a reason to choose a particular employer.

"For the first time this year, we've expanded the Skills Audit to include opinions from employees, as well as businesses. With the battle for talent still one of the biggest challenges employers face, we're hoping that this part of the data set provides some valuable insights into why people choose employers and what they value most, and consequently helps businesses set successful recruitment and retention strategies," Gallagher concluded.


Quantum Computing: How To Invest In It, And Which Companies Are Leading the Way? – Nasdaq

Insight must precede application. ~ Max Planck, Father of Quantum Physics

Quantum computing is no ordinary technology. It has attracted huge interest at the national level with funding from governments. Today, some of the biggest technology giants are working on the technology, investing substantial sums into research and development and collaborating with state agencies and corporates for various projects across industries.

Here's an overview of quantum computing, the players exploring this revolutionary technology, and ways to invest in it.

Understanding Quantum Computing

Let's begin by understanding quantum computing. While standard computers are built on classical bits, every quantum computer has the qubit, or quantum bit, as its building block. Unlike a classical computer, where information is stored as a binary 0 or 1 using bits, a quantum computer harnesses the unique ability of subatomic particles in the form of a qubit, which can exist in a superposition of 0 and 1 at the same time. As a result, quantum computers can achieve higher information density and handle very complex operations at speeds exponentially higher than conventional computers, while consuming much less energy.
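In code, a qubit state is just a unit-length vector of two amplitudes. A minimal sketch of superposition and the resulting measurement probabilities, using plain linear algebra rather than any quantum SDK:

```python
import numpy as np

# Basis states |0> and |1>; a qubit is any unit-norm combination of them.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)
psi = H @ ket0

# Measurement probabilities are the squared amplitudes (the Born rule):
# a 50/50 chance of reading 0 or 1.
probs = np.abs(psi) ** 2
```

Simulating n qubits this way needs a vector of 2**n amplitudes, which is precisely why classical machines struggle to keep up and why the hardware is interesting in the first place.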

It is believed that quantum computing will have a huge impact on areas such as logistics, military affairs, pharmaceuticals (drug design and discovery), aerospace (design), utilities (nuclear fusion), financial modeling, chemicals (polymer design), artificial intelligence (AI), cybersecurity, fault detection, big data, and capital goods, especially digital manufacturing. The productivity gains by end users of quantum computing, in the form of both cost savings and revenue opportunities, are expected to surpass $450 billion annually.

"It will be a slow build for the next few years: we anticipate value for end users in these sectors to reach a relatively modest $2 billion to $5 billion by 2024. But value will then increase rapidly as the technology and its commercial viability mature," reports BCG.

The market for quantum computing is projected to reach $64.98 billion by 2030, up from just $507.1 million in 2019, growing at a CAGR of 56.0% during the forecast period (2020-2030). According to a CIR estimate, revenue from quantum computing is pegged at $8 billion by 2027.

Which Nations Are Investing In Quantum Computing?

To gain the quantum advantage, China has been at the forefront of the technology. The first quantum satellite was launched by China in 2016. A paper by the Center for a New American Security (CNAS) highlights how China is positioning itself as a powerhouse in quantum science.

Understanding the strategic potential that quantum science holds, the U.S., Germany, Russia, India and the European Union have intensified efforts towards quantum computing. In the U.S., President Trump established the National Quantum Initiative Advisory Committee in 2019 in accordance with the National Quantum Initiative Act, signed into law in late 2018, which authorizes $1.2 billion to be spent on quantum science over the next five years.

The Indian government in its 2020 budget announced a National Mission on Quantum Technologies & Applications with a total budget outlay of 8,000 crore rupees ($1.12 billion) over a period of five years, while Europe has a 1 billion euro initiative providing funding for the entire quantum value chain over the next ten years. In October 2019, the first prototype of a quantum computer was launched in Russia, while in Germany the Fraunhofer-Gesellschaft, Europe's leading organization for applied research, partnered with IBM for advanced research in the field of quantum computing.

The Companies Leading the Way

IBM has been one of the pioneers in the field of quantum computing. In January 2019, IBM (IBM) unveiled the IBM Q System One, the world's first integrated universal approximate quantum computing system designed for scientific and commercial use. In September it opened the IBM quantum computation center in New York to expand its quantum computing systems for commercial and research activity. It has also recently invested in Cambridge Quantum Computing, which was one of the first startups to become part of IBM's Q Network in 2018.

In October 2019, Google (GOOG, GOOGL) made an announcement claiming the achievement of "quantum supremacy." It published the results of this quantum supremacy experiment in the Nature article "Quantum Supremacy Using a Programmable Superconducting Processor." The term "quantum supremacy" was coined in 2012 by John Preskill, who wrote that one way to achieve it "would be to run an algorithm on a quantum computer which solves a problem with a super-polynomial speedup relative to classical computers." The claim was countered by IBM.

D-Wave, headquartered in Vancouver, Canada, is the world's first commercial supplier of quantum computers, and its systems are being used by organizations such as NEC, Volkswagen, DENSO, Lockheed Martin, USRA, USC, Los Alamos National Laboratory and Oak Ridge National Laboratory. In February 2019, D-Wave announced a preview of its next-generation quantum computing platform, incorporating hardware, software and tools to accelerate and ease the delivery of quantum computing applications. In September 2019, it named its next-generation quantum system Advantage, which will be available in the Leap quantum cloud service in mid-2020. In December 2019, the company signed an agreement with NEC to accelerate commercial quantum computing.

Amazon (AMZN) introduced its service Amazon Braket in late 2019, which is designed to let users get some hands-on experience with qubits and quantum circuits. It allows users to build and test circuits in a simulated environment and then run them on an actual quantum computer.
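That build-a-circuit-then-simulate workflow can be mimicked with a tiny state-vector simulator. The following is a plain-Python sketch of the concept, not Amazon Braket's actual SDK: it constructs the classic two-qubit Bell-pair circuit (Hadamard, then CNOT) and reads off the measurement probabilities.

```python
import math

def hadamard(state, q):
    """Apply a Hadamard gate to qubit q of a 2-qubit state vector."""
    s = 1 / math.sqrt(2)
    mask = 1 << (1 - q)                      # qubit 0 is the left bit of |xy>
    out = [0j] * 4
    for i, amp in enumerate(state):
        b = (i >> (1 - q)) & 1               # current value of qubit q
        out[i & ~mask] += s * amp            # branch where qubit q becomes 0
        out[i | mask] += s * amp * (-1 if b else 1)  # branch where it becomes 1
    return out

def cnot(state, ctrl, targ):
    """Flip qubit targ wherever qubit ctrl is 1."""
    out = [0j] * 4
    for i, amp in enumerate(state):
        j = i ^ (1 << (1 - targ)) if (i >> (1 - ctrl)) & 1 else i
        out[j] += amp
    return out

state = [1 + 0j, 0j, 0j, 0j]                 # start in |00>
state = cnot(hadamard(state, 0), 0, 1)       # H then CNOT: an entangled Bell pair
probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)                                 # -> [0.5, 0.0, 0.0, 0.5]
```

Only |00> and |11> survive: the two qubits are entangled, so a measurement of one determines the other.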

Around the same time, Intel (INTC) unveiled a first-of-its-kind cryogenic control chip, code-named Horse Ridge, that will speed up the development of full-stack quantum computing systems.

In addition, companies such as Microsoft (MSFT), Alibaba (BABA), Tencent (TCEHY), Nokia (NOK), Airbus, HP (HPQ), AT&T (T), Toshiba, Mitsubishi, SK Telecom, Raytheon, Lockheed Martin, Rigetti, Biogen, Volkswagen and Amgen are researching and working on applications of quantum computing.

Final Word

Investors looking to invest in the technology can either look at individual stocks or consider the Defiance Quantum ETF (QTUM) to gain exposure to companies developing and applying quantum computing and other advanced technologies. Launched in April 2018, QTUM is a liquid, low-cost and efficient way to invest in the technology. The ETF tracks the BlueStar Quantum Computing and Machine Learning Index, which covers approximately 60 globally listed stocks across all market capitalizations.

While quantum computing is not mainstream yet, the quest to harness its potential is on, and the constant progress made is shrinking the gap between research labs and real-world applications.

Disclaimer: The author has no position in any stocks mentioned. Investors should consider the above information not as a de facto recommendation, but as an idea for further consideration. The report has been carefully prepared, and any exclusions or errors in reporting are unintentional.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.

Read the rest here:
Quantum Computing: How To Invest In It, And Which Companies Are Leading the Way? - Nasdaq

White House reportedly aims to double AI research budget to $2B – TechCrunch

The White House is pushing to dedicate an additional billion dollars to fund artificial intelligence research, effectively doubling the budget for that purpose outside of Defense Department spending, Reuters reported today, citing people briefed on the plan. Investment in quantum computing would also receive a major boost.

The 2021 budget proposal would reportedly increase AI R&D funding to nearly $2 billion, and quantum to about $860 million, over the next two years.

The U.S. is engaged in what some describe as a race with China in the field of AI, though unlike most races this one has no real finish line. Instead, any serious lead means opportunities in business and military applications that may grow to become the next globe-spanning monopoly, a la Google or Facebook, which themselves, as quasi-sovereign powers, invest heavily in the field for their own purposes.

Simply doubling the budget isn't a magic bullet to take the lead, if anyone can be said to have it, but deploying AI to new fields is not without cost, and an increase in grants and other direct funding will almost certainly enable the technology to be applied more widely. Machine learning has proven useful for a huge variety of purposes, and for many researchers and labs it is a natural next step, but expertise and processing power cost money.

It's not clear how the funds would be disbursed; it's possible existing programs like federal Small Business Innovation Research awards could be expanded with this topic in mind, or direct funding to research centers like the National Labs could be increased.

Research into quantum computing and related fields is likewise costly. Google's milestone last fall of achieving quantum supremacy, or so the claim goes, is only the beginning for the science, and neither the hardware nor the software involved have much in the way of precedents.

Furthermore, quantum computers as they exist today, and for the foreseeable future, have very few valuable applications, meaning pursuing them is an investment only in the most optimistic sense. However, government funding via SBIR and grants like those is intended to de-risk exactly this kind of research.

The proposed budget for NASA is also expected to receive a large increase in order to accelerate and reinforce various efforts within the Artemis Moon landing program. It was not immediately clear how these funds would be raised or from where they would be reallocated.


Enterprise hits and misses – quantum gets real, Koch buys Infor, and Shadow’s failed app gets lit up – Diginomica

Lead story - Quantum computing - risks, opportunities and use cases - by Chris Middleton

MyPOV: Master-of-the-edgy-think-piece Chris Middleton unfurled a meaty two-parter on the realities of quantum computing. As a quantum computing fan boy and a proud quantum-changes-everything association member curmudgeon, I was glad to see Chris take this on.

In Quantum tech - big opportunities from (very, very) little things, he reminds us that pigeonholing quantum as "computing" is a mistake:

Quantum technology embraces a host of different systems, each of which could form a fast-expanding sector of its own if investors shift their focus away from computing. These include quantum timing, metrology, and navigation, such as the development of hyper-accurate, portable atomic clocks.

Each use case carries its own risks/opportunities, and need for transparency, particularly when you combine quantum and "AI." However, based on the recent sessions he attended, Chris says we should think of quantum as enhancing our tool kit rather than replacing classical computing outright. He concludes:

In business and technology, we see a world of big objects and quantifiable opportunities, and it is far from clear how the quantum realm relates to it, though it is clear that it does. In short, investors, policymakers, and business leaders need something tangible and relatable before they reach for their credit cards.

Translation: quantum computing is so 2021 (or maybe 2025). But I find middle ground with the hypesters: we'd better start talking about the implications now. Quantum computing has a far greater inevitability than, say, enterprise blockchains.

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Bears might be hibernating, but enterprise software vendors sure aren't napping:

Koch buys Infor: When Infor's CFO Kevin Samuelson took over the CEO role from Charles Phillips, many felt that the pending Infor IPO was in play. Well, many were wrong. Derek was on the case:

Infor to be acquired by Koch Industries - what's the likely impact? and the follow-on: Infor answers questions on Koch acquisition. The big question here, to me, isn't why Koch versus IPO. It's CloudSuite SaaS adoption. And which industries can Infor address via SaaS industry ERP? Derek's pieces give us important clues - and we'll be watching.

Google breaks out cloud earnings: ordinarily, earnings reports are not watershed moments. But this was the first time "Alphabet" broke out Google Cloud (and YouTube) numbers. Google is obviously wary of the AWS and Azure comparisons. But it's not easy to break it all out anyhow (Google added G Suite revenues in also). Stuart parses it out in Google's 'challenger' cloud business hits $10 billion annual run rate as Alphabet breaks out the numbers for the first time.

SAP extends Business Suite maintenance to 2030 (with caveats): Arguably the biggest SAP story since the leadership change. Den had some questions stuck in his craw things to say, so he unfurled a two-parter:

MyPOV: a smart move - though an expected one - for the SAP new leadership team, with the user groups heavily involved in pushing the case. However, the next smart moves will be a lot tougher.

More vendor analysis:

And if that's not enough, Brian's got a Zoho review, I filed an Acumatica use case on SaaS best-of-breed, and Stuart crunched a landmark Zendesk earnings report.

Jon's grab bag - My annual productivity post is up and out; plus I took gratuitous shots at linkbaity Slack-has-ruined-work headlines (Personal productivity 2020 - Slack and Microsoft Teams didn't ruin work - but they didn't fix work either).

Neil explains the inexplicable in The problem of AI explainability - can we overcome it? Finally, I'm glad Jerry addressed the Clearview AI bottom-feeders in Clearview AI - super crime fighter or the death of privacy as we know it? There's a special place in my personal Hades for greedy entrepreneurs who steal faces, drape their motives in totally bogus 1st amendment claims, and plan to sell said data to authoritarian regimes. These bozos make robocallers look like human rights activists.

Lead story - analyzing the wreckage of the Iowa caucus tech fail

MyPOV: This could probably just be the whiffs section. The Iowa caucus app failure is very much like this: if you and I wrote down a step-by-step plan on how to screw up a mission-critical app launch, with everything from poor user engagement to technical failure to lack of contingencies to hacking vulnerabilities (which fortunately were not exploited), we'd have this mess.

Hits/misses reader Clive reckons this is the best post-mortem: Shadow Inc. CEO Iowa Interview: 'We Feel Really Terrible' . First off, don't feel terrible, just go away. Shovel snow, or get involved in a local recycling initiative. Make a pinball app. Just stay away from the future of democracy from now on. Then there's this doozy: An 'Off-the-Shelf, Skeleton Project': Experts Analyze the App That Broke Iowa. Tell me if this sounds like something that would go smoothly:

To properly log in and submit results, caucus chairs had to enter a precinct ID number, a PIN code, and a two-factor identification code, each of which was six digits long.

Then there's the IDP, which was warned not to use the app by at least one party, and went headlong into their own abyss. Fortunately, there are a few lessons we can extract. Such as this one from Greg Miller, co-founder of the Open Source Election Technology Institute, which warned the IDP not to use the app weeks ago:

Our message is that apps like this should be developed in the sunlight and part of an open bug bounty.

An ironic message for an app developer named Shadow...

Honorable mention

I got a terrifying college flashback when I saw this one: Note targeting 'selfish' bongo player at Glastonbury Tor demands he stops playing. This prankster brought us back to the future though: Berlin artist uses 99 phones to trick Google into traffic jam alert.

In my line of work, we joke about PR hacks over-achievers pogo sticks pros "circling back", as if a second blast will somehow polish the turd of a crummy pitch as it slinks by - well, this takes the noxious act of circling back to another level: Family Gets 55,000 Duplicate Letters from Loan Company. But hey, it's not all crash-and-burn here:

I can't let this slide another week:

I think we all realize by now that "free" services are all about data hucksters gorging themselves on the sweet nectar of our personal lives selling us out to the highest bidder. But when an anti-virus company gets in on the action, surely the Idiocracy has been achieved: "To make matters worse, Avast seems to maintain a lukewarm stance on the issue."

I'd like to invite the Avast team to step into my fiery cauldron. The only thing that's lukewarm is your grasping business model and your mediocre adware, err, I mean, anti-virus protection. Just one question: who protects us from you? As for Liz:

I'm with ya, Ms. Miller. Hopefully this is the next best thing....

If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed. 'myPOV' is borrowed with reluctant permission from the ubiquitous Ray Wang.


For the tech world, New Hampshire is anyone’s race – Politico

With help from John Hendel, Cristiano Lima, Leah Nylen and Katy Murphy

Editor's Note: This edition of Morning Tech is published weekdays at 10 a.m. POLITICO Pro Technology subscribers hold exclusive early access to the newsletter each morning at 6 a.m. Learn more about POLITICO Pro's comprehensive policy intelligence coverage, policy tools and services at politicopro.com.


If Sanders wins in New Hampshire: If the polls hold true, the tech world may see a ton more heat from the Vermont senator, who has long been critical of tech giants' market power and labor practices.

Trump's 2021 funding requests: President Donald Trump's 2021 budget proposal would give big funding boosts to artificial intelligence and quantum computing, as well as the Commerce Department's NTIA and the Justice Department's antitrust division, but not to the FTC or FCC.

Bipartisanship at risk?: House Judiciary's Republican leaders say recent comments from the Democratic chairman about Silicon Valley giants threaten the panel's tech antitrust probe, a rare point of bipartisanship in a hotly divided Congress.

IT'S TUESDAY, AND ALL EYES ARE ON THE FIRST PRESIDENTIAL PRIMARY OF 2020: NEW HAMPSHIRE. WELCOME TO MORNING TECH! I'm your host, Alexandra Levine.

Got a news tip? Write Alex at alevine@politico.com or @Ali_Lev. An event for our calendar? Send details to techcalendar@politicopro.com. Anything else? Full team info below. And don't forget: add @MorningTech and @PoliticoPro on Twitter.

WHAT NEW HAMPSHIRE MEANS FOR TECH: A week after winning the most votes in Iowa, Sen. Bernie Sanders (I-Vt.) is polling first in New Hampshire, with Pete Buttigieg a close second. (Further behind, and mostly neck-and-neck, are Elizabeth Warren, Joe Biden and Amy Klobuchar.) What could this mean for the tech world? Just about anything.

But if the Vermont senator prevails in tonight's Democratic presidential primary, we can expect to hear more of his usual anti-Amazon commentary (Sanders has repeatedly criticized Amazon's labor practices and complained that the online giant pays zero in taxes); more "break up big tech" talk (Sanders has said he would "absolutely" look to break up tech companies like Amazon, Google and Facebook); and more attacks on corporate power and influence (he has proposed taxing tech giants based on how big a gap exists between the salaries of their CEOs and their mid-level employees).

Several prime tech policy issues are also fair game: Sanders' criminal justice reform plan includes a ban on law enforcement's use of facial recognition technology, and he has spoken out about tech's legal liability shield, Section 230, debates over which are playing out (often with fireworks) at the federal level. (Further reading in POLITICO Magazine: Is it Bernie's Party Now?)

Plus: Could New Hampshire be the next Iowa? State and local election officials running this primary without apps (voters will cast their ballots on paper, which in some cases will be counted by hand) say no. POLITICO's Eric Geller provides the bird's-eye view.

Here's everything you need to know about the 2020 race in New Hampshire.

BUDGET DISPATCH: HUGE JUMP FOR DOJ ANTITRUST, NO BIG CHANGES FOR FCC AND FTC: The White House on Monday rolled out its fiscal year 2021 funding requests, including a proposed 71 percent bump in congressional spending on the Justice Department's antitrust division, an increase that, as Leah reports, is another indicator that the agency is serious about its pending investigations into tech giants like Google and Facebook. (It would also allow the agency to hire 87 additional staffers.)

In contrast, the FCC and FTC aren't requesting any big changes in their funding or staffing. The FCC is seeking $343 million, up 1.2 percent from its 2020 funding level, while the FTC is asking for a little over $330 million, which is about $800,000 less than its current funding. The FCC noted it's on track to move to its new Washington headquarters in June, while FTC Commissioner Rebecca Slaughter, a Democrat, objected to the request for her agency, saying in a statement that it does not accurately reflect the funding the FTC needs to protect consumers and promote competition.

Artificial intelligence and quantum computing would also receive big funding boosts under the budget proposal, Nancy reports. So would the Commerce Department's NTIA, to help prepare the agency for 5G and other technological changes, as John reported for Pros.

IS THE BIPARTISAN TECH ANTITRUST PROBE IN JEOPARDY? The House Judiciary Committee's investigation into competition in the tech sector, which garnered rare bipartisan momentum in a hotly divided Congress, could now be in trouble. On Monday night, the committee's Republican leaders criticized Democratic Chairman Jerry Nadler's recent remarks railing against the power of Silicon Valley giants, writing in a letter that Nadler's comments "have jeopardized" the panel's "ability to perform bipartisan work." Spokespeople for Nadler did not offer comment. A Cicilline spokesperson declined comment.

The dust-up marks the first major sign of fracturing between House Judiciary Republicans and Democrats over their bipartisan investigation into possible anti-competitive conduct in the tech industry, a probe widely seen as one of Silicon Valley's biggest threats on Capitol Hill, Cristiano reports in a new dispatch. The dispute could threaten the push to advance bipartisan antitrust legislation in the House, something House Judiciary antitrust Chairman David Cicilline (D-R.I.) has said the committee plans to do early this year.

T-MOBILE-SPRINT WIN: T-Mobile and Sprint can merge, a federal judge is expected to rule today, rejecting a challenge by California, New York and other state attorneys general, Leah reports. U.S. District Judge Victor Marrero is expected to release his hotly anticipated decision on the $26.2 billion telecom megadeal later this morning.

FCC'S FUTURE-OF-WORK FOCUS: Amazon, AT&T, Walmart, LinkedIn and Postmates are among the tech companies expected at a future-of-work event today that Democratic FCC Commissioner Geoffrey Starks is hosting at the agency's headquarters.

The public roundtable will address the same kinds of issues that several Democratic presidential candidates have raised, such as concerns about AI's effect on labor economies. "Issues of #5G, #InternetInequality, automation & education are colliding in ways that will impact all Americans," Starks wrote on Twitter. "Eager to host this important policy discussion!"

CCPA UPDATE: GET ME REWRITE! California Attorney General Xavier Becerra on Monday published a business-friendly tweak to his proposed Privacy Act regulations, a change that his office said had been inadvertently omitted from a revised draft unveiled on Friday.

Under the change, only businesses that collect, sell or share the information of at least 10 million Californians per year (that's about 1 in 4 residents) would have to report annual statistics about CCPA requests and how quickly they responded to privacy-minded consumers. That threshold was originally 4 million.
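The 1-in-4 figure is a quick division, assuming California's population of roughly 39.5 million (the approximate 2019 Census estimate, not stated in the item itself):

```python
threshold = 10_000_000       # Californians whose data must be collected/sold/shared
ca_population = 39_500_000   # assumed: ~2019 Census estimate for California
share = threshold / ca_population
print(f"{share:.0%}")        # about 25%, i.e. roughly 1 in 4 residents
```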

The update will come as a relief to companies that no longer need to pull back the curtain on their Privacy Act responsiveness. It's also good news for procrastinators, as the new deadline for submitting comments on the AG's rules was pushed back a day, to Feb. 25.

TECH QUOTE DU JOUR: Senate Judiciary antitrust Chairman Mike Lee (R-Utah) offered colorful praise on Monday for Sen. Josh Hawley's (R-Mo.) proposal to have the Justice Department absorb the FTC, a plan aimed in part at addressing concerns over the FTC's enforcement of antitrust standards in the technology sector.

"Having two federal agencies in charge of enforcing antitrust law makes as much sense as having two popes," Lee told MT in an emailed statement. "This is an issue we've had hearings on in the Judiciary Committee and I think Sen. Hawley has identified a productive and constitutionally sound way forward." (Hawley's proposal swiftly drew pushback from one industry group, NetChoice, which said it would make "political abuse more likely.")

The state of play: Some Republicans in the GOP-led Senate now want to reduce the number of regulators overseeing competition in the digital marketplace. A small contingent of House Democrats wants to create a new federal enforcer to police online privacy. But a vast majority of the discussions happening on Capitol Hill around those issues have so far focused on ways to empower the FTC, not downgrade it.

Mike Hopkins, chairman of Sony Pictures Television, is joining Amazon as a senior vice president overseeing Amazon's Prime Video platform and movie and television studios.

AB 5 blow: Uber and Postmates on Monday lost the first round in their challenge to California's new worker classification law, POLITICO reports.

Uber IPO fallout: As tax season begins, some of Uber's earliest employees are realizing they had little idea how their stock grants worked and are now grappling with the fallout on their tax bills after last May's disappointing IPO, Protocol reports.

JEDI latest: Amazon wants Trump and Defense Secretary Mark Esper to testify in its lawsuit against the Pentagon over the award of the multibillion-dollar JEDI cloud computing contract to Microsoft, POLITICO reports.

ICYMI: Federal prosecutors announced charges Monday against four Chinese intelligence officers for hacking the credit-reporting giant Equifax in one of the largest data breaches in history, POLITICO reports.

Facebook ad tracker: New Hampshire saw more than $1 million in Facebook spending in the month leading up to today's presidential primary, Zach Montellaro reports for Pros.

Can privacy be a piece of cake?: A privacy app called Jumbo presents a startling contrast to the maze of privacy controls presented by companies like Facebook, Twitter and Google, Protocol reports. Here's how it works, and how it plans to turn a buck.

Virus watch: Following Amazon's lead, Sony and NTT are pulling out of this month's Mobile World Congress in Barcelona as a precaution during the coronavirus outbreak, Reuters reports.

In profile: Zapata Computing, a startup that creates software for quantum computers while avoiding, as much as possible, actually using a quantum machine, Protocol reports.

Out today: Alexis Wichowski, New York City's deputy chief technology director and a professor at Columbia's School of International and Public Affairs, is out today with The Information Trade: How Big Tech Conquers Countries, Challenges Our Rights, and Transforms Our World, a book published by HarperCollins.

Tips, comments, suggestions? Send them along via email to our team: Bob King (bking@politico.com, @bkingdc), Mike Farrell (mfarrell@politico.com, @mikebfarrell), Nancy Scola (nscola@politico.com, @nancyscola), Steven Overly (soverly@politico.com, @stevenoverly), John Hendel (jhendel@politico.com, @JohnHendel), Cristiano Lima (clima@politico.com, @viaCristiano), Alexandra S. Levine (alevine@politico.com, @Ali_Lev), and Leah Nylen (lnylen@politico.com, @leah_nylen).

TTYL.


NASA Soars and Others Plummet in Trump’s Budget Proposal – Scientific American

US research on artificial intelligence (AI) and quantum computing would see dramatic boosts in funding for 2021 under a proposed budget released by the White House on 10 February. The budget request issued by President Donald Trump makes cuts across most science agencies for the 2021 fiscal year, which begins on 1 October 2020. Although Congress has repeatedly rebuffed such requests for cuts (and has, in fact, increased science spending in the enacted budgets), the 132-page document from the White House offers a view into the administration's priorities and ambitions leading up to the November election.

Among US agencies that fund and conduct research, NASA would see big gains. The National Science Foundation (NSF), National Institutes of Health (NIH) and Department of Energy (DOE), among others, are slated for budget reductions.

"Trump is being Trump," says Michael Lubell, a physicist at the City College of New York who tracks federal science-policy issues. All of Trump's budgets have sought to slash funding for the US research enterprise, but he has yet to convince lawmakers on Capitol Hill, Lubell says. "He can ask for what he wants, but it doesn't mean it's going to happen."

Under the president's request, NASA would get US$25.2 billion for fiscal year 2021, a jump of nearly 12% over funding enacted by Congress for the current year. The money is meant to jump-start the administration's plans to send astronauts to the Moon by the end of 2024. The request includes $3.4 billion to develop lunar landers that could carry humans. Last year, lawmakers granted $600 million towards developing such landers, less than half of what the White House asked for.
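The "nearly 12%" jump can be checked against the current-year appropriation; this sketch assumes NASA's enacted fiscal year 2020 budget of roughly $22.6 billion, a figure not stated in the article.

```python
request_fy21 = 25.2e9   # White House request for fiscal year 2021
enacted_fy20 = 22.6e9   # assumed: NASA's enacted FY2020 appropriation, ~$22.6B
increase = request_fy21 / enacted_fy20 - 1
print(f"{increase:.1%}")  # a little over 11%, i.e. "nearly 12%"
```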

Under the banner of a Moon-to-Mars strategy, the president's request also includes $529 million for robotic exploration of Mars. That would include bringing back a set of rock samples to be collected by a rover slated to launch in July, and developing an ice-mapping mission to gather information for future landing sites.

NASA's Science Mission Directorate, which funds external research projects and partners, would receive $6.3 billion, the same amount proposed by the White House last year but a nearly 12% decrease from what Congress allocated. As in previous years, the president's request aims to cancel NASA's next flagship space telescope, the Wide Field Infrared Survey Telescope, as well as the planned Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) and Climate Absolute Radiance and Refractivity Observatory (CLARREO) Pathfinder Earth-science missions. Also on the proposed chopping block is the Stratospheric Observatory for Infrared Astronomy (SOFIA), a telescope that flies aboard a jumbo jet. Congress has rejected those requested cuts in past years.

The president's budget proposes $38.7 billion for the NIH, about a 7% cut from the current level of $41.7 billion. The proposal is consistent with past White House budget requests; last year, the administration requested a $5-billion cut. As in the past two years, the budget proposes creating a new $335-million NIH institute, the National Institute for Research on Safety and Quality, to replace the Agency for Healthcare Research and Quality at the Department of Health and Human Services. Also, as part of the administration's broader push to use and develop AI across sectors, the White House allocates $50 million of its proposed NIH budget for the study of chronic diseases using AI.

The White House proposal seeks a total of $7.7 billion for the NSF for fiscal year 2021, a decrease of more than $500 million from the enacted 2020 budget. This includes a 6% decrease in funding for research and development.

The president's request includes reductions to six of the NSF's seven research directorates, including cuts of more than $100 million each for biological sciences and engineering. Computer and information science and engineering would be the only major research area to see an increase in its funding, consistent with the administration's plans to prioritize AI and quantum computing. These two areas would receive a combined $1 billion of the NSF budget under the president's proposal. The NSF budget also includes $50 million for workforce development, with a focus on community colleges, historically black colleges and universities (HBCUs) and other minority-serving institutions. But the budget calls for deep cuts to other diversity-focused initiatives, such as the HBCU Excellence in Research programme.

Proposed cuts of more than 10% would slash the budgets for geoscience research, the Office of International Science and Engineering and the Office of Polar Programs, which maintains the US research presence in the Arctic and Antarctic.

Tim Clancy, the president of Arch Street, a consulting company in Alexandria, Virginia, with a focus on federal science policy, says that although Congress has typically rejected Trump's proposed cuts to science funding, strict budget caps this year might mean that legislators will have to make difficult decisions about cutting programmes in order to free up money for the president's AI and quantum initiatives.

The budget would provide $5.8 billion for the DOE's Office of Science, a drop of nearly 17% from 2020 levels. The office would see sharp decreases across its portfolio, which spans biological and environmental research, fusion and high-energy physics. Only the advanced scientific computing programme, with roughly level funding of $988 million, would escape the cuts.

The White House once again proposed slashing funding for clean-energy research. The popular Advanced Research Projects Agency-Energy (ARPA-E), which received a record $425 million last year, would be eliminated, and the Office of Energy Efficiency and Renewable Energy would see its budget slashed by roughly 74%. Funding for fossil-fuel research and development would drop by less than 3%, to $731 million.

The proposal faces long odds on Capitol Hill, where lawmakers have balked at such cuts. Last year, for instance, the administration sought to cut the Office of Science's budget by nearly 16%; Congress responded by nudging the total up 6%, to a record $7 billion.

The White House is once again seeking to drastically cut funding for the Environmental Protection Agency (EPA), which would see its budget drop by roughly 26%, to $6.7 billion. The budget would provide just $478 million for science and technology, a decrease of 33%. But Congress has repeatedly rejected the administration's attempts to cut funding for the EPA, whose budget has increased since Trump entered the White House.

The National Oceanic and Atmospheric Administration (NOAA) would receive more than $4.6 billion, a drop of 14%. The core science budget in the Office of Oceanic and Atmospheric Research would fall by more than 40% to $327 million, although Congress rejected a similar cut last year. The administration has once again proposed eliminating the National Sea Grant College Program, which promotes research into the conservation and sustainable development of marine resources, and which Congress has thus far maintained. The budget would provide $188 million for sea-floor mapping and exploration efforts along the US coasts.

This article is reproduced with permission and was first published on February 10, 2020.

Read the rest here:
NASA Soars and Others Plummet in Trump's Budget Proposal - Scientific American

Rochester scientists receive NSF CAREER awards – University of Rochester

February 11, 2020

The National Science Foundation (NSF) has granted its most prestigious award in support of junior faculty, the Faculty Early Career Development (CAREER) award, to several University of Rochester researchers this year.

The NSF CAREER award is given to promising scientists early in their careers and recognizes outstanding research, excellent education, and the integration of education and research. The award also comes with a federal grant toward their research and education activities.

Pierre Gourdain, an assistant professor of physics, will study the formation and evolution of plasma jets found around black holes by conducting scaled-down experiments in the laboratory. Although scientists cannot see black holes directly, they can observe from Earth the plasma jets that black holes produce, which span thousands of light years. Better understanding the mechanisms behind jet formation and acceleration will allow scientists to use data on the jets' dynamics and chemical composition to determine a black hole's mass and the type of matter it interacts with. Gourdain's award will support his research in studying these mechanisms in the laboratory using scaled-down versions of astrophysical jets generated by the High Amperage Driver for Extreme States (HADES) at the University's Laboratory for Laser Energetics (LLE). HADES will form inch-long plasma jets traveling at 50 miles per second, and the experiments will measure plasma properties that will then be used in plasma models. This research will allow astrophysicists to more precisely determine the mass of a black hole, giving them a better grasp of the distribution of dark matter throughout the universe. Read more about Gourdain's project here.

John Nichol, an assistant professor of physics, will study non-equilibrium quantum physics. His research project will focus on phenomena in objects that do not reach thermal equilibrium with their surroundings, such as an imaginary coffee cup that stays hot forever. This research has applications in fields such as high-temperature superconductivity and quantum computing. Another component of Nichol's award is developing interactive, week-long courses in experimental physics for middle- and high-school students during the summer and workshops during the school year. These programs will include outreach efforts to involve more women and underrepresented minorities in physics. Nichol will also develop a quantum technology course for undergraduates and is mentoring undergraduate and graduate students in state-of-the-art quantum nanotechnology. Read more about Nichol's project here.

William Renninger, an assistant professor of optics, studies the interaction between photons, the elementary particles of lasers and other forms of light, and phonons, the basic units of acoustic waves caused by vibrating materials. Renninger's CAREER award will support his research in coupling light waves and acoustic waves for optomechanical applications, such as improving the performance of radio-frequency signal processors in the near term, opening up new possibilities for controlling quantum information in the future, and perhaps even enabling the detection of dark matter. One goal of his project is to explore how acoustic waves could improve the filters used for controlling radio-frequency information carried in optical fibers, increasing the resolution of the information transmitted and the speed and efficiency of doing so. The award also includes funding to create open-source access to information for designing and creating advanced laser sources generating femtosecond pulses, which are essential tools for time-resolved measurements, biomedical imaging, optogenetics, spectroscopy, distance measurements and more. Read more about Renninger's project here.

Stephen Wu, an assistant professor of electrical and computer engineering, will study two-dimensional (2D) materials, which can be as thin as a single layer of atoms. These materials can undergo remarkable transformations when they are stretched and pulled, such as switching from superconducting one moment to nonconducting the next. Wu will explore these changes when they occur in transistor-scale device platforms, in ways that could transform electronics, optics, computing, and a host of other technologies. For example, researchers are reaching the limits at which the electronic transistors used in computing can be scaled down in size to achieve ever faster, more enhanced performance. Last year, Wu's lab demonstrated that a device platform using a thin film of two-dimensional molybdenum ditelluride performed the same functions as a traditional transistor with far less power consumption and less leakage of current, while remaining easy to adapt for current electronics. One goal of Wu's project is to expand this "straintronic" concept to higher-endurance, higher-yield operations as well as adding new phases to control. Wu's award also includes reaching out to students traditionally underrepresented in STEM fields by connecting with the Eastman School of Music. Examples of activities include running summer educational courses in music and electronics where local 7th- to 12th-grade students could create unconventional instruments that could be played in live performances. Read more about Wu's project here.

NSF CAREER awards provide researchers with five years of funding to help lay the foundation for their future research. Finding innovative ways to integrate research with the education of students is also a key part of the CAREER program, which recognizes junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research within the context of the mission of their organizations.




Letter: Deputy Is Vindicated | Opinion – Southern Pines Pilot

Finally, the miscarriage of justice perpetrated by District Attorney Maureen Krueger against former deputy sheriff Tracy Carter as part of her 2018 political hit-job against Neil Godfrey has been rectified.

Carter has now been vindicated by the North Carolina Department of Justice and his law enforcement certification fully restored. Thanks to David Sinclair, who dug out the truth and reported it in the Feb. 5 edition of The Pilot, the public record has now been set straight. Thank you, David!

As for the rant against you and The Pilot by Mr. Zumwalt for publishing facts, everything he said is either wrong or just a flat lie and should be totally disregarded. A simple phone call will verify Tracy Carter is currently employed by Montgomery County as a deputy sheriff.

Richard Pitassy, Southern Pines

Publisher's Note: This is a letter to the editor, submitted by a reader, and reflects the opinion of the author. The Pilot welcomes letters from readers on its Opinion page, which serves as a public forum. The Pilot is not in the business of suppressing public opinion. We are a forum for community debate, and publish almost every letter we receive. For information on how to make a submission, visit this page: https://www.thepilot.com/site/forms/online_services/letter/


BOYS SWIMMING: Chargers take fifth at WCC Championships; look ahead to sections – Crow River Media

The growth of a team can be hard to measure throughout a season. There are many factors that come in, especially in swimming when so much depends on the times.

Last year at the Wright County Conference Championships, the Dassel-Cokato/Litchfield boys swimming team didn't hit 200 points. This time around, the Chargers hit it and then some, scoring 219 points to finish fifth.

"I thought we did amazing today, a lot of drops in time," junior Jackson Resop said. "We got a couple of upper placements... that was really good for our team morale."

There were only two top-five finishes for the Chargers. Logan Christopherson came in third in the 100 breaststroke and Russell Wesa took fifth in the 100 backstroke. Christopherson also had a 10th place finish in the 200 IM.

Resop was the other individual swimmer that had top-10 placements. He took sixth in the 50 freestyle and eighth in the 100 butterfly.

All A-team relays had top-10 finishes as well. The medley team of Resop, Christopherson, Joe Carlson, and Jacob Huhn had the best finish with sixth place.

For a young team with no seniors, there was a lot of promising teamwork going on with the Chargers. The team has come a long way and on Saturday it showed up on the sideline.

"Everybody was cheering each other on," Christopherson said. "There was a lot of team spirit. Overall (it helped) everybody dropped a bunch of time, so that's good."

A conference championship might be cool, but so is doing well in sections and having a chance of making state. That's where the Chargers find themselves. The goal of the swimming season is to have your best times of the season show up at sections, and hopefully it's enough to get you into a state event. With the time drops exhibited Saturday, the Chargers are confident in the direction results are going. But they also know that there is still work that needs to be done.

"I think just banking on seeing where we're at, and just looking at how we swam today and doing the drill work to improve," Riley Defries said on where he thinks the team needs to improve. "If we're doing bad in a certain area, work on that at practice the whole time."

But overall, with the results of sections still pending, the Chargers have a lot to be excited about heading into next season with the whole team coming back.

"It'll definitely grow and I know we're going to have to push ourselves," Resop said. "We're going to have to get ready. We're going to have to mentally prepare. But we have the effort, we've just got to put it in."

DC/L next competes in the Section 3A championship beginning with prelims at 5 p.m. Friday, Feb. 21, at Hutchinson. The finals begin at 1:30 Saturday, Feb. 22. Fellow WCC team Hutch will be in attendance at the section meet, along with Monticello, Princeton, Rocori, St. Cloud Apollo, Willmar and Montevideo.

2020 Boys Wright County Conference Championship (Feb. 8)

1. Hutchinson 563, 2. Delano-Watertown-Mayer 514, 3. Orono 371, 4. Waconia 350, 5. Dassel-Cokato/Litchfield 219, 6. Mound Westonka 157

200 medley relay (17): 1. Hutch A (Conner Hogan, Noah Tague, Tristin Nelsen, Dane Thovson) 1:43.13, 6. DCL A (Jackson Resop, Logan Christopherson, Joe Carlson, Jacob Huhn) 1:52.08, 10. DCL B (Max Haataja, Colin Tormanen, Elijah Slinden, Russell Wesa) 2:08.97, DCL C (William Carlson, Justice Borg, Joseph Kotila, Steven Mengelkoch) 2:22.84, 14. DCL D (Elliot Fluck, Nick Pofahl, Aiden Berube, Mick Gallagher) 2:32.57, 16. DCL E (Ben Johnson, Ty Movrich, Evan Johnson, Jack Unze) 2:59.60

200 freestyle (26): 1. Colby Kern (D) 1:48.50, 12. Emmanual Johnson 2:07.59, 14. Riley Defries 2:07.70, 15. Isaiah Kalis 2:09.01, 19. Anders Borg 2:23.53, Elijah Slinden 2:40.98, Zach Stockland 2:59.50

200 IM (20): 1. Josh Johnston (MW) 1:58.90, 10. Christopherson 2:19.87, 15. Joe Carlson 2:27.98, 19. Tormanen 2:52.56

50 freestyle (53): 1. David Sinclair (W) 21.94, 6. Resop 24.39, 11. Wesa 25.91, 14. Huhn 26.44, 20. Gallagher 28.39, Movrich 30.22, William Carlson 30.68, Berube 32.10, Mathias Slinden 32.71, Fluck 33.53, Unze 34.26, Evan Johnson 34.28, Ben Johnson 48.92

1 mtr diving (7): 1. Alex Oestreich 413.60

100 butterfly (15): 1. Samuel Sinclair (W) 54.49, 8. Resop 1:02.41, 12. Joe Carlson 1:05.69

100 freestyle (41): 1. David Sinclair (W) 48.18, 11. Defries 57.75, 18. Huhn 1:01.32, 19. Mengelkoch 1:01.79, 20. Anders Borg 1:03.29, Fluck 1:17.05, Unze 1:17.54, Evan Johnson 1:20.06, Mathias Slinden 1:20.26, Stockland 1:23.37

500 freestyle (21): 1. Matthew Krogman (W) 5:07.09, 16. Emmanual Johnson 5:48.69, 18. Kalis 6:01.52, 19. Haataja 6:09.02

200 freestyle relay (22): 1. Hutch A (Hogan, Oestreich, Nelsen, Matthew Olberg) 1:33.36, 9. DCL A (Resop, Joe Carlson, Huhn, Emmanual Johnson) 1:43.16, 10. DCL B (Tormanen, Wesa, Mengelkoch, Defries) 1:46.54, 17. DCL D (Elijah Slinden, Stockland, Kotila, Berube) 2:07.87, 21. DCL E (Anders Borg, Evan Johnson, Mathias Slinden, Ben Johnson) 2:35.94, DCL C (Gallagher, Justice Borg, Pofahl, Kalis) DQ

100 backstroke (22): 1. Nick Black (D) 54.23, 5. Wesa 1:06.29, 13. Haataja 1:15.38, 15. William Carlson 1:21.01, Mengelkoch 1:16.88, Movrich 1:28.42

100 breaststroke (27): 1. Johnston (MW) 59.80, 3. Christopherson 1:06.10, 13. Justice Borg 1:20.97, 14. Tormanen 1:21.64, 15. Pofahl 1:22.06, Kotila 1:33.24

400 freestyle relay (17): 1. Waconia A (Krogman, Samuel Sinclair, Nathan Sannito, David Sinclair) 3:24.08, 8. DCL A (Defries, Emmanual Johnson, Kalis, Christopherson) 3:53.86, 13. DCL B (Anders Borg, Haataja, Gallagher, Justice Borg) 4:22.94, 16. DCL C (Elijah Slinden, Berube, Pofahl, William Carlson) 4:54.77, 17. DCL D (Stockland, Unze, Movrich, Kotila) 5:06.27


Is soaking in a frozen lake the secret to good health? – The Detroit News

Richard Chin, Star Tribune (Minneapolis) Published 5:55 p.m. ET Feb. 11, 2020

Ponce de Leon's search for the fountain of youth in Florida is just a legend.

But about 1,500 miles to the north, in the icy waters of Cedar Lake in Minneapolis, dozens of people think they've found the next best thing.

On a recent Sunday around 9:30 a.m., a diverse group of about 20 people dressed in swimsuits trekked to a spot near the shore on the west side of the lake and immersed themselves in an 8-by-12-foot rectangular hole cut in the ice. Later in the day, another group of people gathered to do the same thing.

This isn't a once-a-year, get-in, get-out, New Year's Day plunge for Instagram bragging rights.

Throughout the winter, biohackers maintain a hole in the ice chopped into Cedar Lake in Minneapolis in the belief that regular cold water immersions make them healthier. (Photo: Richard Tsong-Taatarii / TNS)

This is something that happens every Sunday throughout the winter.

Some people come several times a week, and stay for a good, long soak of five, 10, 15 minutes or more. Except for the knit hats, they look like they could be relaxing in a hot tub as they stand in water that ranges from waist- to neck-deep.

Called cold therapy or cold thermogenesis, ice-water bathing is a practice that biohackers and assorted others believe makes them healthier.

The Twin Cities Cold Thermogenesis Facebook group, which was created in 2016, claims the frigid dips do everything from increasing testosterone in men to boosting brown adipose tissue. (The so-called brown fat or good fat may be helpful in combating obesity because it burns calories to create heat.)

Cold-water immersion also strengthens the immune system, according to Svetlana Vold, a part-time firefighter and ultramarathon winter bike racer from St. Louis Park, who organizes the Sunday morning cold-immersion session.

Vold and others say chilling out in the water combats inflammation, helps them sleep better and improves their focus and endurance. Some said they're inspired by Wim "The Iceman" Hof, a Dutchman famous for his breathing and cold exposure technique called the Wim Hof Method.

The Cedar Lake group would probably meet the approval of David Sinclair, a Harvard genetics professor and longevity expert who thinks that cold exposure may help slow the aging process.

Maria O'Connell, the organizer of the afternoon session, has been immersing herself in an ice-filled horse trough in her backyard since 2011. "Initially it's a little uncomfortable," she said. "You end up getting better the more you do it."

But many say the frigid dunks are a mood-altering, even pleasurable experience.

"It hurts so damn good," said Stephen McLaughlin, a 61-year-old Minneapolis resident. "You are just completely present."

"It makes me happy. I think it's adrenaline," said Allison Kuznia, 42, of Minneapolis.

"It's kind of a treat to go out and get really cold," said Nick White, 46, of Minneapolis. "It gives you a feeling of euphoria."



Kelty Hearts 4-1 Caledonian Braves: Third straight defeat for Braves – Motherwell Times

Caledonian Braves' losing streak stretched to three games as they fell to a 4-1 defeat at Lowland League leaders Kelty Hearts on Saturday, writes Roy Campbell.

Early on, Braves' Ross McNeil capitalised on a mix-up in the Kelty defence but couldn't keep his knock-on from crossing over and out of the pitch.

Nathan Austin then forced Alex Marshall into a great double save to deny the league's top goalscorer McNeil an early opener.

Marshall was again called upon as he kept out Stephen Husband's fizzed effort before making a remarkable save from a Matty Flynn volley.

The effort was only six yards from goal and hit with pace but Marshy was able to get the slightest of touches to put it onto the bar.

McLaughlin was next to try from distance but fell short before Kelty would open the scoring.

The ball was sent forward to Austin who had time to turn from the byline inside the box and unleash a fantastic strike which thundered into the back of the net.

Craig Quinn came close just before half-time, but his shot went straight at the keeper.

Into the second half and there was another Kelty goal. Austin rose highest in the box and his downward header went into the bottom corner of the net.

Following this, Kelty would stroll to the victory. Dylan Easton popped up next with the third.

A great turn on the edge of the box preceded a shot which deflected off David Sinclair and nestled into the opposite corner.

A fourth came a minute later. The Braves defence attempted to play Austin offside but the Englishman had time and space to loop the ball over the oncoming Marshall to grab his hat-trick.

Substitute Serge Makofo did however grab a consolation goal for the Braves.

Makofo showed great work-rate to win the ball from Thomas Scobbie on the byline, cut inside and slot past the Kelty keeper.

This was a tale of two halves as the Braves fell to defeat in Fife.

Braves welcome Civil Service to Alliance Park this Saturday, 3pm kick-off.


The 17 Best AI and Machine Learning TED Talks for Practitioners – Solutions Review

The editors at Solutions Review curated this list of the best AI and machine learning TED talks for practitioners in the field.

TED Talks are influential videos from expert speakers in a variety of verticals. TED began in 1984 as a conference where Technology, Entertainment and Design converged, and today covers almost all topics, from business to technology to global issues, in more than 110 languages. TED is building a clearinghouse of free knowledge from the world's top thinkers, and their library of videos is expansive and rapidly growing.

Solutions Review has curated this list of AI and machine learning TED talks to watch if you are a practitioner in the field. Talks were selected based on relevance, ability to add business value, and individual speaker expertise. We've also curated TED talk lists for topics like data visualization and big data.

Erik Brynjolfsson is the director of the MIT Center for Digital Business and a research associate at the National Bureau of Economic Research. He asks how IT affects organizations, markets and the economy. His books include Wired for Innovation and Race Against the Machine. Brynjolfsson was among the first researchers to measure the productivity contributions of information and community technology (ICT) and the complementary role of organizational capital and other intangibles.

In this talk, Brynjolfsson argues that machine learning and intelligence are not the end of growth; they are simply the growing pains of a radically reorganized economy. A riveting case for why big innovations are ahead of us if we think of computers as our teammates. Be sure to watch the opposing viewpoint from Robert Gordon.

Jeremy Howard is the CEO of Enlitic, an advanced machine learning company in San Francisco. Previously, he was the president and chief scientist at Kaggle, a community and competition platform of over 200,000 data scientists. Howard is a faculty member at Singularity University, where he teaches data science. He is also a Young Global Leader with the World Economic Forum, and spoke at the World Economic Forum Annual Meeting 2014 on Jobs for the Machines.

Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis.

Nick Bostrom is a professor at Oxford University, where he heads the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists tasked with investigating the big picture for the human condition and its future. Bostrom was honored as one of Foreign Policy's 2015 Global Thinkers. His book Superintelligence advances the ominous idea that the first ultraintelligent machine is the last invention that man need ever make.

In this talk, Nick Bostrom calls machine intelligence the last invention that humanity will ever need to make. Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values, or will they have values of their own?

Li's work with neural networks and computer vision (with Stanford's Vision Lab) marks a significant step forward for AI research, and could lead to applications ranging from more intuitive image searches to robots able to make autonomous decisions in unfamiliar situations. Fei-Fei was honored as one of Foreign Policy's 2015 Global Thinkers.

This talk digs into how computers are getting smart enough to identify simple elements. Computer vision expert Fei-Fei Li describes the state of the art, including the database of 15 million photos her team built to teach a computer to understand pictures, and the key insights yet to come.

Anthony Goldbloom is the co-founder and CEO of Kaggle. Kaggle hosts machine learning competitions, where data scientists download data and upload solutions to difficult problems. Kaggle has a community of over 600,000 data scientists. In 2011 and 2012, Forbes named Anthony one of the 30 under 30 in technology; in 2013 the MIT Tech Review named him one of the top 35 innovators under the age of 35, and the University of Melbourne awarded him an Alumni of Distinction Award.

This talk by Anthony Goldbloom describes some of the current use cases for machine learning, far beyond simple tasks like assessing credit risk and sorting mail.

Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at the University of North Carolina, Chapel Hill, and a faculty associate at Harvard's Berkman Klein Center for Internet and Society. Her book, Twitter and Tear Gas, was published in 2017 by Yale University Press.

Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns, and in ways we won't expect or be prepared for.

In his book The Business Romantic, Tim Leberecht invites us to rediscover romance, beauty and serendipity by designing products, experiences, and organizations that make us fall back in love with our work and our life. The book inspired the creation of the Business Romantic Society, a global collective of artists, developers, designers and researchers who share the mission of bringing beauty to business.

In this talk, Tim Leberecht makes the case for a new radical humanism in a time of artificial intelligence and machine learning. For the self-described business romantic, this means designing organizations and workplaces that celebrate authenticity instead of efficiency and questions instead of answers. Leberecht proposes four principles for building beautiful organizations.

Grady Booch is Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition. Having originated the term and the practice of object-oriented design, he is best known for his work in advancing the fields of software engineering and software architecture.

Grady Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we'll teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.

Tom Gruber is a product designer, entrepreneur, and AI thought leader who uses technology to augment human intelligence. He was co-founder, CTO, and head of design for the team that created the Siri virtual assistant. At Apple for over 8 years, Tom led the Advanced Development Group that designed and prototyped new capabilities for products that bring intelligence to the interface.

This talk introduces the idea of Humanistic AI. Gruber shares his vision for a future where AI helps us achieve superhuman performance in perception, creativity and cognitive function, from turbocharging our design skills to helping us remember everything we've ever read. The idea of an AI-powered personal memory also extends to relationships, with the machine helping us reflect on our interactions with people over time.

Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty.

His talk centers around the question of whether we can harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover. As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.

Dr. Pratik Shah's research creates novel intersections between engineering, medical imaging, machine learning, and medicine to improve health and diagnose and cure diseases. Research topics include: medical imaging technologies using unorthodox artificial intelligence for early disease diagnoses; novel ethical, secure and explainable artificial intelligence based digital medicines and treatments; and point-of-care medical technologies for real world data and evidence generation to improve public health.

TED Fellow Pratik Shah is working on a clever system to do just that. Using an unorthodox AI approach, Shah has developed a technology that requires as few as 50 images to develop a working algorithm, and can even use photos taken on doctors' cell phones to provide a diagnosis. Learn more about how this new way to analyze medical information could lead to earlier detection of life-threatening illnesses and bring AI-assisted diagnosis to more health care settings worldwide.

Margaret Mitchell's research involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. Her work combines computer vision, natural language processing, social media, as well as many statistical methods and insights from cognitive science. Before Google, Mitchell was a founding member of Microsoft Research's Cognition group, focused on advancing artificial intelligence, and a researcher in Microsoft Research's Natural Language Processing group.

Margaret Mitchell helps develop computers that can communicate about what they see and understand. She tells a cautionary tale about the gaps, blind spots and biases we subconsciously encode into AI and asks us to consider what the technology we create today will mean for tomorrow.

Kriti Sharma is the Founder of AI for Good, an organization focused on building scalable technology solutions for social good. Sharma was recently named in the Forbes 30 Under 30 list for advancements in AI. She was appointed a United Nations Young Leader in 2018 and is an advisor to both the United Nations Technology Innovation Labs and the UK Government's Centre for Data Ethics and Innovation.

AI algorithms make important decisions about you all the time, like how much you should pay for car insurance or whether or not you get that job interview. But what happens when these machines are built with human bias coded into their systems? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms.

Matt Beane does field research on work involving robots to help us understand the implications of intelligent machines for the broader world of work. Beane is an Assistant Professor in the Technology Management Program at the University of California, Santa Barbara, and a Research Affiliate with MIT's Institute for the Digital Economy. He received his PhD from the MIT Sloan School of Management.

The path to skill around the globe has been the same for thousands of years: train under an expert and take on small, easy tasks before progressing to riskier, harder ones. But right now, we're handling AI in a way that blocks that path and sacrifices learning in our quest for productivity, says organizational ethnographer Matt Beane. Beane shares a vision that flips the current story into one of distributed, machine-enhanced mentorship that takes full advantage of AI's amazing capabilities while enhancing our skills at the same time.

Leila Pirhaji is the founder of ReviveMed, an AI platform that can quickly and inexpensively characterize large numbers of metabolites from the blood, urine and tissues of patients. This allows for the detection of molecular mechanisms that lead to disease and the discovery of drugs that target these disease mechanisms.

Biotech entrepreneur and TED Fellow Leila Pirhaji shares her plan to build an AI-based network to characterize metabolite patterns, better understand how disease develops and discover more effective treatments.

Janelle Shane is the owner of AIweirdness.com. Her book, You Look Like a Thing and I Love You, uses cartoons and humorous pop-culture experiments to look inside the minds of the algorithms that run our world, making artificial intelligence and machine learning both accessible and entertaining.

The danger of artificial intelligence isn't that it's going to rebel against us, but that it's going to do exactly what we ask it to do, says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems, like creating new ice cream flavors or recognizing cars on the road, Shane shows why AI doesn't yet measure up to real brains.

Sylvain Duranton is the global leader of BCG GAMMA, a unit dedicated to applying data science and advanced analytics to business. He manages a team of more than 800 data scientists and has implemented more than 50 custom AI and analytics solutions for companies across the globe.

In this talk, business technologist Sylvain Duranton advocates for a Human plus AI approach, using AI systems alongside humans rather than instead of them, and shares the specific formula companies can adopt to successfully employ AI while keeping humans in the loop.

For more AI and machine learning TED talks, browse TED's complete topic collection.

Timothy is Solutions Review's Senior Editor. He is a recognized thought leader and influencer in enterprise BI and data analytics. Timothy has been named a top global business journalist by Richtopia. Scoop? First initial, last name at solutionsreview dot com.

See original here:
The 17 Best AI and Machine Learning TED Talks for Practitioners - Solutions Review

Overview of causal inference in machine learning – Ericsson

In a major operator's network control center, complaints are flooding in. The network is down across a large US city; calls are getting dropped and critical infrastructure is slow to respond. Pulling up the system's event history, the manager sees that new 5G towers were installed in the affected area today.

Did installing those towers cause the outage, or was it merely a coincidence? In circumstances such as these, being able to answer this question accurately is crucial for Ericsson.

Most machine learning-based data science focuses on predicting outcomes, not understanding causality. However, some of the biggest names in the field agree it's important to start incorporating causality into our AI and machine learning systems.

Yoshua Bengio, one of the world's most highly recognized AI experts, explained in a recent Wired interview: It's a big thing to integrate [causality] into AI. Current approaches to machine learning assume that the trained AI system will be applied on the same kind of data as the training data. In real life it is often not the case.

Yann LeCun, a recent Turing Award winner, shares the same view, tweeting: Lots of people in ML/DL [deep learning] know that causal inference is an important way to improve generalization.

Causal inference and machine learning can address one of the biggest problems facing machine learning today: that a lot of real-world data is not generated in the same way as the data we use to train AI models. This means that machine learning models often aren't robust enough to handle changes in the input data type, and can't always generalize well. By contrast, causal inference explicitly overcomes this problem by considering what might have happened when faced with a lack of information. Ultimately, this means we can use causal inference to make our ML models more robust and generalizable.

When humans rationalize the world, we often think in terms of cause and effect: if we understand why something happened, we can change our behavior to improve future outcomes. Causal inference is a statistical tool that enables our AI and machine learning algorithms to reason in similar ways.

Let's say we're looking at data from a network of servers. We're interested in understanding how changes in our network settings affect latency, so we use causal inference to proactively choose our settings based on this knowledge.

The gold standard for inferring causal effects is randomized controlled trials (RCTs), or A/B tests. In an RCT, we split a population of individuals into two groups, treatment and control, administering the treatment to one group and nothing (or a placebo) to the other, and measuring the outcome of both groups. Assuming that the treatment and control groups aren't too dissimilar, we can infer whether the treatment was effective based on the difference in outcome between the two groups.

However, we can't always run such experiments. Flooding half of our servers with lots of requests might be a great way to find out how response time is affected, but if they're mission-critical servers, we can't go around performing DDoS attacks on them. Instead, we rely on observational data, studying the differences between servers that naturally get a lot of requests and those with very few requests.

There are many ways of answering this question. One of the most popular approaches is Judea Pearl's technique for using statistics to make causal inferences. In this approach, we'd build a model, or graph, that includes the measurable variables that can affect one another.

To use this graph, we must assume the Causal Markov Condition. Formally, it says that, conditional on the set of all its direct causes, a node is independent of all the variables which are not direct causes or direct effects of that node. Simply put, it is the assumption that this graph captures all the real relationships between the variables.
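Since the original figure is not reproduced here, the causal graph for the server example can be sketched as a simple adjacency mapping (the node names are ours, chosen for illustration): memory usage influences both the request load and the response time, and the request load influences the response time.

```python
# Parent -> children edges of the assumed causal graph.
causal_graph = {
    "memory_z": ["requests_x", "response_time_y"],  # z -> x, z -> y
    "requests_x": ["response_time_y"],              # x -> y
    "response_time_y": [],
}

# Causal Markov Condition: given its parents (direct causes), each node
# is independent of every variable that is neither its cause nor effect.
print(causal_graph)
```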

Another popular method for inferring causes from observational data is Donald Rubin's potential outcomes framework. This method does not explicitly rely on a causal graph, but still assumes a lot about the data, for example, that there are no additional causes besides the ones we are considering.

For simplicity, our data contains three variables: a treatment x (the number of server requests), an outcome y (the response time) and a covariate z (the memory value). We want to know if having a high number of server requests affects the response time of a server.

In our example, the number of server requests is determined by the memory value: higher memory usage means the server is less likely to be fed requests. More precisely, the probability of having a high number of requests is equal to 1 minus the memory value (i.e., P(x=1) = 1 - z, where P(x=1) is the probability that x is equal to 1). The response time of our system is determined by the equation (or hypothetical model):

y = 1x + 5z + ε     (1)

where ε is the error, that is, the deviation of y from its expected value given x and z, which depends on other factors not included in the model. Our goal is to understand the effect of x on y via observations of the memory values, numbers of requests and response times of a number of servers, with no access to this equation.
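The data-generating process just described can be made concrete with a short simulation. This is a minimal sketch under assumed parameters; the variable names and the noise scale are ours, not Ericsson's:

```python
import random

random.seed(0)
servers = []
for _ in range(5):
    z = random.random()                       # memory value, z ~ Uniform(0, 1)
    x = 1 if random.random() < 1 - z else 0   # high request load: P(x=1) = 1 - z
    eps = random.gauss(0, 0.1)                # unmodeled factors (assumed scale)
    y = 1 * x + 5 * z + eps                   # response time, per equation (1)
    servers.append({"memory": z, "requests": x, "response_time": y})

# An analyst observes only these triples; the coefficients 1 and 5
# and the noise term are hidden and must be inferred from data.
print(servers[0])
```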

There are two possible assignments (treatment and control) and an outcome. Given a random group of subjects and a treatment, each subject i has a pair of potential outcomes: Y_i(0) and Y_i(1), the outcomes under control and treatment respectively. However, only one outcome is observed for each subject, the outcome under the actual treatment received: Y_i = xY_i(1) + (1-x)Y_i(0). The opposite potential outcome is unobserved for each subject and is therefore referred to as a counterfactual.

For each subject, the effect of treatment is defined to be Y_i(1) - Y_i(0). The average treatment effect (ATE) is defined as the average difference in outcomes between the treatment and control groups:

ATE = E[Y_i(1) - Y_i(0)]

Here, E denotes an expectation over the values of Y_i(1) - Y_i(0) for each subject i, which is the average value across all subjects. In our network example, a correct estimate of the average treatment effect would recover the coefficient in front of x in equation (1).

If we try to estimate this by directly subtracting the average response time of servers with x=0 from the average response time of our hypothetical servers with x=1, we get an ATE estimate of 0.177. This happens because our treatment and control groups are not directly comparable. In an RCT, we know that the two groups are similar because we chose them ourselves. When we have only observational data, other variables (such as the memory value in our case) may affect whether a unit is placed in the treatment or control group. We need to account for this difference in memory value between the treatment and control groups before estimating the ATE.
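This bias can be reproduced on toy data. Below is a hedged sketch under assumed simulation parameters (the article does not publish its code, and its own dataset yields 0.177): because treated servers systematically have lower memory values, the naive difference in means absorbs the 5z term and lands well away from the true effect of 1.

```python
import random

random.seed(0)
n = 50_000
data = []
for _ in range(n):
    z = random.random()                       # memory value (covariate)
    x = 1 if random.random() < 1 - z else 0   # high request load (treatment)
    y = 1 * x + 5 * z + random.gauss(0, 0.1)  # response time (outcome)
    data.append((x, y))

# Naive ATE: difference in mean outcome between treated and control groups.
treated = [y for x, y in data if x == 1]
control = [y for x, y in data if x == 0]
naive_ate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"naive ATE estimate: {naive_ate:.3f}")  # biased far from the true 1
```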

One way to correct this bias is to compare individual units in the treatment and control groups with similar covariates. In other words, we want to match subjects that are equally likely to receive treatment.

The propensity score e_i for subject i is defined as:

e_i = P(x=1 | z=z_i), z_i ∈ [0,1]

or the probability that x is equal to 1 (the unit receives treatment) given that we know its covariate is equal to the value z_i. Creating matches based on the probability that a subject will receive treatment is called propensity score matching. To find the propensity score of a subject, we need to predict how likely the subject is to receive treatment based on their covariates.

The most common way to calculate propensity scores is through logistic regression, which models the propensity as e_i = 1 / (1 + e^(-(β_0 + β_1 z_i))) and fits the coefficients β_0 and β_1 to the observed treatment assignments.

Now that we have calculated propensity scores for each subject, we can do basic matching on the propensity score and calculate the ATE exactly as before. Running propensity score matching on the example network data gives us an estimate of 1.008!
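The whole pipeline can be sketched end to end in pure Python. This is an illustrative implementation under assumed parameters (sample size, learning rate and bin count are our choices, and stratification into propensity bins stands in for one-to-one matching; this is not Ericsson's actual pipeline):

```python
import math
import random

random.seed(1)
n = 10_000
Z = [random.random() for _ in range(n)]                  # memory value
X = [1 if random.random() < 1 - z else 0 for z in Z]     # treatment
Y = [1 * x + 5 * z + random.gauss(0, 0.1) for x, z in zip(X, Z)]

# Fit logistic regression P(x=1|z) = sigmoid(b0 + b1*z) by gradient ascent.
b0 = b1 = 0.0
for _ in range(150):
    g0 = g1 = 0.0
    for x, z in zip(X, Z):
        p = 1 / (1 + math.exp(-(b0 + b1 * z)))
        g0 += x - p
        g1 += (x - p) * z
    b0 += 4.0 * g0 / n
    b1 += 4.0 * g1 / n
scores = [1 / (1 + math.exp(-(b0 + b1 * z))) for z in Z]  # propensity e_i

# Stratify subjects into 20 propensity-score bins and average the
# within-bin treated-minus-control differences, weighted by bin size.
order = sorted(range(n), key=lambda i: scores[i])
num = den = 0.0
for b in range(20):
    idx = order[b * n // 20:(b + 1) * n // 20]
    t = [Y[i] for i in idx if X[i] == 1]
    c = [Y[i] for i in idx if X[i] == 0]
    if t and c:
        num += (sum(t) / len(t) - sum(c) / len(c)) * len(idx)
        den += len(idx)
print(f"propensity-adjusted ATE estimate: {num / den:.3f}")  # near the true 1
```

Within each bin, treated and control servers have similar memory values, so the confounding 5z term mostly cancels and the estimate recovers the coefficient of x.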

We were interested in understanding the causal effect of the binary treatment variable x on the outcome y. If we find that the ATE is positive, an increase in x results in an increase in y. Similarly, a negative ATE says that an increase in x will result in a decrease in y.

This could help us understand the root cause of an issue or build more robust machine learning models. Causal inference gives us tools to understand what it means for some variables to affect others. In the future, we could use causal inference models to address a wider scope of problems both in and out of telecommunications so that our models of the world become more intelligent.

Special thanks to the other team members of GAIA working on causality analysis: Wenting Sun, Nikita Butakov, Paul Mclachlan, Fuyu Zou, Chenhua Shi, Lule Yu and Sheyda Kiani Mehr.

If youre interested in advancing this field with us, join our worldwide team of data scientists and AI specialists at GAIA.

In this Wired article, Turing Award winner Yoshua Bengio shares why deep learning must begin to understand the why before it can replicate true human intelligence.

In this technical overview of causal inference in statistics, find out what's needed to evolve AI from traditional statistical analysis to causal analysis of multivariate data.

This journal essay from 1999 offers an introduction to the Causal Markov Condition.

Originally posted here:
Overview of causal inference in machine learning - Ericsson

Machine Learning Patentability in 2019: 5 Cases Analyzed and Lessons Learned Part 1 – Lexology

Introduction

This article is the first of a five-part series dealing with what patentability of machine learning looks like in 2019. This article begins the series by describing the USPTO's 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) in the context of the U.S. patent system. Then, this article and the four that follow will each describe one of five cases in which Examiners' rejections under Section 101 were reversed by the PTAB under this new 2019 PEG. Each of the five cases deals with machine-learning patents, and may provide some insight into how the 2019 PEG affects the patentability of machine learning, as well as software more broadly.

Patent Eligibility Under the U.S. Patent System

The US patent laws are set out in Title 35 of the United States Code (35 U.S.C.). Section 101 of Title 35 focuses on several things, including whether the invention is classified as patent-eligible subject matter. As a general rule, an invention is considered to be patent-eligible subject matter if it falls within one of the four enumerated categories of patentable subject matter recited in 35 U.S.C. 101 (i.e., process, machine, manufacture, or composition of matter).[1] This, on its own, is an easy hurdle to overcome. However, there are exceptions (judicial exceptions). These include (1) laws of nature; (2) natural phenomena; and (3) abstract ideas. If the subject matter of the claimed invention fits into any of these judicial exceptions, it is not patent-eligible, and a patent cannot be obtained. The machine-learning and software aspects of a claim face 101 issues based on the abstract idea exception, and not the other two.

Section 101 is applied by Examiners at the USPTO in determining whether patents should be issued; by district courts in determining the validity of existing patents; in the Patent Trial and Appeal Board (PTAB) in appeals from Examiner rejections, in post-grant-review (PGR) proceedings, and in covered-business-method-review (CBM) proceedings; and in the Federal Circuit on appeals. The PTAB is part of the USPTO, and may hear an appeal of an Examiner's rejection of claims of a patent application when the claims have been rejected at least twice.

In determining whether a claim fits into the abstract idea category at the USPTO, the Examiners and the PTAB must apply the 2019 PEG, which is described in the following section of this paper. In determining whether a claim is patent-ineligible as an abstract idea in the district courts and the Federal Circuit, however, the courts apply the Alice/Mayo test; and not the 2019 PEG. The definition of abstract idea was formulated by the Alice and Mayo Supreme Court cases. These two cases have been interpreted by a number of Federal Circuit opinions, which has led to a complicated legal framework that the USPTO and the district courts must follow.[2]

The 2019 PEG

The USPTO, which governs the issuance of patents, decided that it needed a more practical, predictable, and consistent method for its over 8,500 patent examiners to apply when determining whether a claim is patent-ineligible as an abstract idea.[3] Previously, the USPTO synthesized and organized, for its examiners to compare to an applicant's claims, the facts and holdings of each Federal Circuit case dealing with Section 101. However, the large and still-growing number of cases, and the confusion arising from similar subject matter [being] described both as abstract and not abstract in different cases,[4] led to issues. Accordingly, the USPTO issued its 2019 Revised Patent Subject Matter Eligibility Guidance on January 7, 2019 (2019 PEG), which shifted from the case-comparison structure to a new examination structure.[5] The new examination structure, described below, is more patent-applicant friendly than the prior structure,[6] thereby having the potential to result in a higher rate of patent issuances. The 2019 PEG does not alter the federal statutory law or case law that make up the U.S. patent system.

The 2019 PEG has a structure consisting of four parts: Step 1, Step 2A Prong 1, Step 2A Prong 2, and Step 2B. Step 1 refers to the statutory categories of patent-eligible subject matter, while Step 2 refers to the judicial exceptions. In Step 1, the Examiners must determine whether the subject matter of the claim is a process, machine, manufacture, or composition of matter. If it is, the Examiner moves on to Step 2.

In Step 2A, Prong 1, the Examiners are to determine whether the claim recites a judicial exception including laws of nature, natural phenomenon, and abstract ideas. For abstract ideas, the Examiners must determine whether the claim falls into at least one of three enumerated categories: (1) mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations); (2) certain methods of organizing human activity (fundamental economic principles or practices, commercial or legal interactions, managing personal behavior or relationships or interactions between people); and (3) mental processes (concepts performed in the human mind: encompassing acts people can perform using their mind, or using pen and paper). These three enumerated categories are not mere examples, but are fully-encompassing. The Examiners are directed that [i]n the rare circumstance in which they believe[] a claim limitation that does not fall within the enumerated groupings of abstract ideas should nonetheless be treated as reciting an abstract idea, they are to follow a particular procedure involving providing justifications and getting approval from the Technology Center Director.

Next, if the claim limitation recites one of the enumerated categories of abstract ideas under Prong 1 of Step 2A, the Examiner is instructed to proceed to Prong 2 of Step 2A. In Step 2A, Prong 2, the Examiners are to determine if the claim is directed to the recited abstract idea. In this step, the claim does not fall within the exception, despite reciting the exception, if the exception is integrated into a practical application. The 2019 PEG provides a non-exhaustive list of examples for this, including, among others: (1) an improvement in the functioning of a computer; (2) a particular treatment for a disease or medical condition; and (3) an application of the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

Finally, even if the claim recites a judicial exception under Step 2A Prong 1, and the claim is directed to the judicial exception under Step 2A Prong 2, it might still be patent-eligible if it satisfies the requirement of Step 2B. In Step 2B, the Examiner must determine if there is an inventive concept: that the additional elements recited in the claims provide[] significantly more than the recited judicial exception. This step attempts to distinguish between whether the elements combined to the judicial exception (1) add[] a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field; or alternatively (2) simply append[] well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality. Furthermore, the 2019 PEG indicates that where an additional element was insignificant extra-solution activity, [the Examiner] should reevaluate that conclusion in Step 2B. If such reevaluation indicates that the element is unconventional . . . this finding may indicate that an inventive concept is present and that the claim is thus eligible.

In summary, the 2019 PEG provides an approach for the Examiners to apply, involving steps and prongs, to determine if a claim is patent-ineligible based on being an abstract idea. Conceptually, the 2019-PEG method begins with categorizing the type of claim involved (process, machine, etc.); proceeds to determining if an exception applies (e.g., abstract idea); then, if an exception applies, proceeds to determining if an exclusion applies (i.e., practical application or inventive concept). Interestingly, the PTAB not only applies the 2019 PEG in appeals from Examiner rejections, but also applies the 2019 PEG in its other Section-101 decisions, including CBM review and PGRs.[7] However, the 2019 PEG only applies to the Examiners and PTAB (the Examiners and the PTAB are both part of the USPTO), and does not apply to district courts or to the Federal Circuit.

Case 1: Appeal 2018-007443[8] (Decided October 10, 2019)

This case involves the PTAB reversing the Examiner's Section 101 rejections of claims of the 14/815,940 patent application. This patent application relates to applying AI classification technologies and combinational logic to predict whether machines need to be serviced, and whether there is likely to be equipment failure in a system. The Examiner contended that the claims fit into the judicial exception of abstract idea because monitoring the operation of machines is a fundamental economic practice. The Examiner explained that the limitations in the claims that set forth the abstract idea are: a method for reading data; assessing data; presenting data; classifying data; collecting data; and tallying data. The PTAB disagreed with the Examiner. The PTAB stated:

Specifically, we do not find monitoring the operation of machines, as recited in the instant application, is a fundamental economic principle (such as hedging, insurance, or mitigating risk). Rather, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning.

As explained in the previous section of this paper, the 2019 PEG set forth three possible categories of abstract ideas: mathematical concepts, certain methods of organizing human activity, and mental processes. Here, the PTAB addressed the second of these categories. The PTAB found that the claims do not recite a fundamental economic principle (one method of organizing human activity) because the claims recite AI components like neural networks in the context of monitoring machines. Clearly, economic principles and AI components are not always mutually exclusive concepts.[9] For example, there may be situations where these algorithms are applied directly to mitigating business risks. Accordingly, the PTAB was likely focusing on the distinction between monitoring machines and mitigating risk; and not solely on the recitation of the AI components. However, the recitation of the AI components did not seem to hurt.

Then, moving on to another category of abstract ideas, the PTAB stated:

Claims 1 and 8 as recited are not practically performed in the human mind. As discussed above, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning. . . . [Also,] claim 8 recites an output device that transforms the composite prediction output into human-readable form.

. . . .

In other words, the classifying steps of claims 1 and modules of claim 8 when read in light of the Specification, recite a method and system difficult and challenging for non-experts due to their computational complexity. As such, we find that one of ordinary skill in the art would not find it practical to perform the aforementioned classifying steps recited in claim 1 and function of the modules recited in claim 8 mentally.

In the language above, the PTAB addressed the third category of abstract ideas: mental processes. The PTAB provided that the claim does not recite a mental process because the AI algorithms, based on the context in which they are applied, are computationally complex.

The PTAB also addressed the first of the three categories of abstract ideas (mathematical concepts), and found that it does not apply because the specific mathematical algorithm or formula is not explicitly recited in the claims. Requiring that a mathematical concept be explicitly recited seems to be a narrow interpretation of the 2019 PEG. The 2019 PEG does not require that the recitation be explicit, and leaves the math category open to relationships, equations, or calculations. From this, the PTAB might have meant that the claims list a mathematical concept (the AI algorithm) by its name, as a component of the process, rather than trying to claim the steps of the algorithm itself. Clearly, the names of the algorithms are explicitly recited; the steps of the AI algorithms, however, are not recited in the claims.

Notably, reciting only the name of an algorithm, rather than reciting the steps of the algorithm, seems to indicate that the claims are not directed to the algorithms (i.e., the claims have a practical application for the algorithms). It indicates that the claims include an algorithm, but that there is more going on in the claim than just the algorithm. However, instead of determining that there is a practical application of the algorithms, or an inventive concept, the PTAB determined that the claim does not even recite the mathematical concepts.

Additionally, the PTAB found that even if the claims had been classified as reciting an abstract idea, as the Examiner had contended, the claims are not directed to that abstract idea, but are integrated into a practical application. The PTAB stated:

Appellant's claims address a problem specifically using several artificial intelligence classification technologies to monitor the operation of machines and to predict preventative maintenance needs and equipment failure.

The PTAB seems to say that because the claims solve a problem using the abstract idea, they are integrated into a practical application. The PTAB did not specify why the additional elements are sufficient to integrate the invention. The opinion actually does not even specifically mention that there are additional elements. Instead, the PTAB's conclusion might have been that, based on a totality of the circumstances, it believed that the claims are not directed to the algorithms, but actually just apply the algorithms in a meaningful way. The PTAB could have fit this reasoning into the 2019 PEG structure through one of the Step 2A, Prong 2 examples (e.g., that the claim applies additional elements in some other meaningful way), but did not expressly do so.

Conclusion

This case illustrates:

(1) the monitoring of machines was held to not be an abstract idea, in this context; (2) the recitation of AI components such as neural networks in the claims did not seem to hurt for arguing any of the three categories of abstract ideas; (3) complexity of algorithms implemented can help with the mental processes category of abstract ideas; and (4) the PTAB might not always explicitly state how the rule for practical application applies, but seems to apply it consistently with the examples from the 2019 PEG.

The next four articles will build on this background, and will provide different examples of how the PTAB approaches reversing Examiner 101-rejections of machine-learning patents under the 2019 PEG. Stay tuned for the analysis and lessons of the next case, which includes methods for overcoming rejections based on the mental processes category of abstract ideas, on an application for a probabilistic programming compiler that performs the seemingly 101-vulnerable function of generat[ing] data-parallel inference code.

Read more:
Machine Learning Patentability in 2019: 5 Cases Analyzed and Lessons Learned Part 1 - Lexology

Artnome Wants to Predict the Price of a Masterpiece. The Problem? There’s Only One. – Built In

Buying a Picasso is like buying a mansion.

There's not that many of them, so it can be hard to know what a fair price should be. In real estate, if the house last sold in 2008, right before the lending crisis devastated the real estate market, basing today's price on the last sale doesn't make sense.

Paintings are also affected by market conditions and a lack of data. Kyle Waters, a data scientist at Artnome, explained to us how his Boston-area firm is addressing this dilemma, and in doing so aims to do for the art world what Zillow did for real estate.

If only 3 percent of houses are on the market at a time, we only see the prices for those 3 percent. But what about the rest of the market? Waters said. It's similar for art too. We want to price the entire market and give transparency.

Artnome is building the world's largest database of paintings by blue-chip artists like Georgia O'Keeffe, including her super famous works, lesser-known items, privately held pieces and publicly displayed artworks. Waters is tinkering with the data to create a machine learning model that predicts how much people will pay for these works at auction. Because this model includes an artist's entire collection, and not just the works that have been publicly sold before, Artnome claims its machine learning model will be more accurate than the auction industry's previous practice of simply basing current prices on previous sales.

The company's goal is to bring transparency to the auction house industry. But Artnome's new model faces an old problem: its machine learning system performs poorly on the works that typically sell for the most, the ones people are most interested in, since it's hard to predict the price of a one-of-a-kind masterpiece.

With a limited dataset, it's just harder to generalize, Waters said.

We talked to Waters about how he compiled, cleaned and created Artnome's machine learning model for predicting auction prices, which launched in late January.

Most of the information about artists included in Artnome's model comes from the dusty basement libraries of auction houses, where they store their catalogues raisonnés, books that serve as complete records of an artist's work. Artnome is compiling and digitizing these records, representing the first time these books have been brought online, Waters said.

Artnome's model currently includes information from about 5,000 artists whose works have been sold over the last 15 years. Prices in the dataset range from $100 at the low end to Leonardo da Vinci's record-breaking Salvator Mundi, a painting that sold for $450.3 million in 2017, making it the most expensive work of art ever sold.

How hard was it to predict what da Vinci's 500-year-old Mundi would sell for? Before the sale, Christie's auction house estimated his portrait of Jesus Christ was worth around $100 million, less than a quarter of the final price.

It was unbelievable, Alex Rotter, chairman of Christie's postwar and contemporary art department, told The Art Newspaper after the sale. Rotter reported the winning phone bid.

I tried to look casual up there, but it was very nerve-wracking. All I can say is, the buyer really wanted the painting and it was very adrenaline-driven.

A piece like Salvator Mundi could come to market in 2017 and then not go up for auction again for 50 years. And because a machine learning model is only as good as the quality and quantity of the data it is trained on, market conditions and changes in availability make it hard to predict a future price for a painting.

These variables are categorized into two types of data: structured and unstructured. And cleaning all of it represents a major challenge.

Structured data includes information like which artist painted which painting, on what medium and in which year.

Waters intentionally limited the types of structured information he included in the model to keep the system from becoming too unruly to work with. But defining paintings as solely two-dimensional works on only certain mediums proved difficult, since there are so many different types of paintings (Salvador Dali famously painted on a cigar box, after all). Artnome's problem represents an issue of high cardinality, Waters said, since there are so many different categorical variables he could include in the machine learning system.

"You want the model to be narrow enough so that you can figure out the nuances between really specific mediums, but you also don't want it to be so narrow that you're going to overfit," Waters said, adding that large models also become more unruly to work with.
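One standard way to tame a high-cardinality categorical feature like painting medium is to collapse rare categories into a single catch-all bucket before encoding. This is a minimal Python sketch of that idea, not Artnome's actual preprocessing; the function name and threshold are illustrative.

```python
from collections import Counter

def collapse_rare(values, min_count=2, other="other"):
    """Group rare categories under a single catch-all label, so one-off
    mediums (the cigar boxes of the world) don't explode the feature space."""
    counts = Counter(values)
    return [v if counts[v] >= min_count else other for v in values]

mediums = ["oil on canvas", "oil on canvas", "watercolor",
           "watercolor", "oil on cigar box"]
print(collapse_rare(mediums))
# ['oil on canvas', 'oil on canvas', 'watercolor', 'watercolor', 'other']
```

The threshold trades off exactly the tension Waters describes: raise `min_count` and the model sees fewer, coarser mediums; lower it and rare categories survive but invite overfitting.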

Other structured data focuses on the artists themselves, denoting details like when the creator was born or whether they were alive at the time of auction. Waters also built a natural language processing system that analyzes the type and frequency of the words an artist used in their paintings' titles, noting trends like Georgia O'Keeffe using the word "white" in many of her famous works.
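A title-analysis step like the one Waters describes can be sketched as a simple word-frequency count across an artist's catalog. This is an illustrative stand-in, not Artnome's NLP system; the sample titles are real O'Keeffe works chosen only for demonstration.

```python
import re
from collections import Counter

def title_word_frequencies(titles):
    """Count how often each word appears across an artist's painting titles."""
    words = []
    for title in titles:
        words.extend(re.findall(r"[a-z]+", title.lower()))
    return Counter(words)

okeeffe_titles = ["Black Iris", "White Rose with Larkspur",
                  "White Flower No. 1", "Ram's Head, White Hollyhock"]
freq = title_word_frequencies(okeeffe_titles)
print(freq["white"])  # 3
```

Counts like these can then feed the model as structured features, e.g. whether a title contains one of the artist's signature words.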

Including information on market conditions, like current stock prices or real estate data, was important from a structured perspective too.

"How popular is an artist, are they exhibiting right now? How many people are interested in this artist? What's the state of the market?" Waters said. "Really getting those trends and quantifying those could be just as important as more data."

Another type of data included in the model is unstructured data which, as the name might suggest, is a little less concrete than the structured items. This type of data is mined from the actual painting, and includes information like the artwork's dominant color, the number of corner points and whether faces are pictured.

Waters used a pre-trained convolutional neural network to look for these variables, modeling the project after ResNet-50, the architecture that famously won the ImageNet Large Scale Visual Recognition Challenge in 2015, a competition built on a dataset of more than 14 million labeled images.
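As a toy illustration of mining unstructured data from pixels, the dominant color mentioned above can be estimated by quantizing RGB values and taking the most common bucket. This NumPy sketch is one plausible approach offered for illustration, not Artnome's pipeline, which relies on a pre-trained CNN.

```python
import numpy as np

def dominant_color(pixels, bins=4):
    """Estimate an image's dominant color by histogramming quantized RGB
    values. `pixels` is an (H, W, 3) uint8 array; returns an (r, g, b)
    tuple at the center of the most common color bucket."""
    step = 256 // bins
    quantized = (pixels // step).reshape(-1, 3)
    # Encode each quantized (r, g, b) triple as a single integer bucket.
    codes = (quantized[:, 0] * bins * bins
             + quantized[:, 1] * bins
             + quantized[:, 2])
    top = np.bincount(codes).argmax()
    r, g, b = top // (bins * bins), (top // bins) % bins, top % bins
    return tuple(int(v) * step + step // 2 for v in (r, g, b))

# A mostly-green 2x2 "image": three green pixels, one red.
img = np.array([[[0, 200, 0], [0, 210, 0]],
                [[0, 190, 0], [220, 0, 0]]], dtype=np.uint8)
print(dominant_color(img))  # (32, 224, 32)
```

The same histogramming idea scales to real images; a CNN-based pipeline would instead pull such cues from learned feature maps.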

Including unstructured data helps quantify the complexity of an image, Waters said, giving it what he called an edge score.

An edge score helps the machine learning system quantify the subjective points of a painting that seem intuitive to humans, Waters said. An example might be Vincent van Gogh's series of paintings of red-haired men posing in front of a blue background. When you're looking at the painting, it's not hard to see you're looking at self-portraits of van Gogh, by van Gogh.
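The article doesn't define the edge score precisely; one plausible reading is a measure of gradient activity, where busier canvases score higher than flat color fields. The metric below (mean gradient magnitude) is a speculative stand-in under that assumption, not Artnome's actual formula.

```python
import numpy as np

def edge_score(gray):
    """Rough complexity measure: mean gradient magnitude over a grayscale
    image, so busy compositions score higher than flat color fields."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.hypot(gx, gy).mean())

flat = np.full((8, 8), 128.0)                     # a flat color field
checker = np.indices((8, 8)).sum(0) % 2 * 255.0   # a busy checkerboard
print(edge_score(flat) < edge_score(checker))  # True
```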

Including unstructured data in Artnomes system helps the machine spot visual cues that suggest images are part of a series, which has an impact on their value, Waters said.

"Knowing that that's a self-portrait would be important for that artist," Waters said. "When you start interacting with different variables, then you can start getting into more granular details that, for some paintings by different artists, might be more important than others."

Artnome's convolutional neural network is good at analyzing paintings for data that tells a deeper story about the work. But sometimes, there are holes in the story being told.

In its current iteration, Artnome's model includes paintings both with and without frames; it doesn't specify which work falls into which category. Not identifying the frame could affect the dominant color the system detects, Waters said, introducing an error into its results.

"That could maybe skew your results and say, like, the dominant color was yellow when really the painting was a landscape and it was green," Waters said.

The model also lacks information on the condition of the painting which, again, could impact the artwork's price. If the model can't detect a crease in the painting, it might overestimate its value. Also missing is data on an artwork's provenance, or its ownership history. Some evidence suggests that paintings that have been displayed by prominent institutions sell for more. There's also the issue of popularity. Waters hasn't found a concrete way to tell the system that people like the work of Georgia O'Keeffe more than the paintings by artist and actor James Franco.

"I'm trying to think of a way to come up with a popularity score for these very popular artists," Waters said.

An auctioneer hits the hammer to indicate a sale has been made. But the last price the bidder shouts isn't what they actually pay.

Buyers also must pay the auction house a commission, which varies between auction houses and has changed over time. Waters has had to dig up the commission rates for these outlets over the years and add them to the listed sale prices. He's also had to convert all sale prices to dollars where they were listed in other currencies. Standardizing each sale ensures the predictions the model makes are accurate, Waters said.
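The standardization Waters describes, adding the buyer's premium and converting to dollars, is simple to express. The rates below are hypothetical, not any auction house's real fee schedule.

```python
def normalized_price_usd(hammer_price, premium_rate, fx_to_usd=1.0):
    """Standardize a sale: add the auction house's buyer's premium to the
    hammer price, then convert the total into US dollars."""
    return hammer_price * (1 + premium_rate) * fx_to_usd

# A hypothetical £1,000,000 hammer price with a 25% premium at $1.30/£.
print(normalized_price_usd(1_000_000, 0.25, 1.30))  # 1625000.0
```

Applying the same transformation to every record is what keeps commission-inclusive and commission-free listings comparable.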

"You'd introduce a lot of bias into the model if some things didn't have the commission, but some things did," Waters said. "It would be clearly wrong to start comparing the two."

Once Artnome's data has been gleaned and cleaned, it is fed into the machine learning system, which Waters structured as a random forest model, an algorithm that builds and merges multiple decision trees to arrive at an accurate prediction. Waters said using a random forest keeps the system from overfitting paintings into one category, and also offers a level of explainability through its permutation score, a metric that essentially ranks the most important aspects of a painting.
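The pipeline described here, a random forest plus permutation-based importance, can be sketched with scikit-learn on synthetic data. The features are invented stand-ins (the real model's inputs are the structured and unstructured variables above), but the mechanics are the same: fit the forest, then measure how much shuffling each feature hurts accuracy.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Toy stand-ins for Artnome-style features, e.g. popularity, year, edge score.
X = rng.normal(size=(300, 3))
# Price driven mostly by the first feature, so it should rank most important.
y = 5 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=300)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Permutation importance: shuffle one column at a time and record the
# drop in the model's score; bigger drop means a more important feature.
scores = permutation_importance(model, X, y, random_state=0).importances_mean
print(scores.argmax())  # 0: the dominant feature is ranked most important
```

This is also the "black box" explainability Waters refers to below: the ranking comes from probing the fitted model rather than from any hand-assigned weights.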

Waters doesn't weight the data he puts into the model. Instead, he lets the machine learning system tell him what's important, with the model weighting factors like today's S&P prices more heavily than the dominant color of a work.

"That's kind of one way to get the feature importance, for kind of a black box estimator," Waters said.

Although Artnome has been approached by private collectors, gallery owners and startups in the art tech world interested in its machine learning system, Waters said it's important that this dataset and model remain open to the public.

His aim is for Artnome's machine learning model to eventually function like Zillow's Zestimate, which estimates real estate prices for homes on and off the market, and act as a general starting point for those interested in finding out the price of an artwork.

"We might not catch a specific genre, or era, or point in the art history movement," Waters said. "I don't think it'll ever be perfect. But when it gets to the point where people see it as a respectable starting point, then that's when I'll be really satisfied."

See the original post here:
Artnome Wants to Predict the Price of a Masterpiece. The Problem? There's Only One. - Built In