Demystifying artificial intelligence: What business leaders …

Artificial intelligence still sounds more like science fiction than an IT investment, but it is increasingly real, and increasingly critical to the success of the Internet of Things.

In the last several years, interest in artificial intelligence (AI) has surged. Venture capital investments in companies developing and commercializing AI-related products and technology have exceeded $2 billion since 2011.1 Technology companies have invested billions more acquiring AI startups. Press coverage of the topic has been breathless, fueled by the huge investments and by pundits asserting that computers are starting to kill jobs, will soon be smarter than people, and could threaten the survival of humankind. Consider the following:

IBM has committed $1 billion to commercializing Watson, its cognitive computing platform.2

Google has made major investments in AI in recent years, including acquiring eight robotics companies and a machine-learning company.3

Facebook hired AI luminary Yann LeCun to create an AI laboratory with the goal of bringing about major advances in the field.4

Amid all the hype, there is significant commercial activity underway in the area of AI that is affecting or will likely soon affect organizations in every sector. Business leaders should understand what AI really is and where it is heading.

The first steps in demystifying AI are defining the term, outlining its history, and describing some of the core technologies underlying it.

The field of AI suffers from both too few and too many definitions. Nils Nilsson, one of the founding researchers in the field, has written that AI "may lack an agreed-upon definition . . ."11 A well-respected AI textbook, now in its third edition, offers eight definitions and declines to prefer one over the others.12 For us, a useful definition of AI is the theory and development of computer systems able to perform tasks that normally require human intelligence. Examples include tasks such as visual perception, speech recognition, decision making under uncertainty, learning, and translation between languages.13 Defining AI in terms of the tasks humans do, rather than how humans think, allows us to discuss its practical applications today, well before science arrives at a definitive understanding of the neurological mechanisms of intelligence.14 It is worth noting that the set of tasks that normally require human intelligence is subject to change as computer systems able to perform those tasks are invented and then widely diffused. Thus, the meaning of AI evolves over time, a phenomenon known as the "AI effect," concisely stated as "AI is whatever hasn't been done yet."15

AI is not a new idea. Indeed, the term itself dates from the 1950s. The history of the field is marked by "periods of hype and high expectations alternating with periods of setback and disappointment," as a recent apt summation puts it.16 After articulating the bold goal of simulating human intelligence in the 1950s, researchers developed a range of demonstration programs through the 1960s and into the '70s that showed computers able to accomplish a number of tasks once thought to be solely the domain of human endeavor, such as proving theorems, solving calculus problems, responding to commands by planning and performing physical actions, and even impersonating a psychotherapist and composing music. But simplistic algorithms, poor methods for handling uncertainty (a surprisingly ubiquitous fact of life), and limitations on computing power stymied attempts to tackle harder or more diverse problems. Amid disappointment with a lack of continued progress, AI fell out of fashion by the mid-1970s.

In the early 1980s, Japan launched a program to develop an advanced computer architecture that could advance the field of AI. Western anxiety about losing ground to Japan contributed to decisions to invest anew in AI. The 1980s saw the launch of commercial vendors of AI technology products, some of which had initial public offerings, such as Intellicorp, Symbolics,17 and Teknowledge.18 By the end of the 1980s, perhaps half of the Fortune 500 were developing or maintaining "expert systems," an AI technology that models human expertise with a knowledge base of facts and rules.19 High hopes for the potential of expert systems were eventually tempered as their limitations, including a glaring lack of common sense, the difficulty of capturing experts' tacit knowledge, and the cost and complexity of building and maintaining large systems, became widely recognized. AI ran out of steam again.

In the 1990s, technical work on AI continued with a lower profile. Techniques such as neural networks and genetic algorithms received fresh attention, partly because they avoided some of the limitations of expert systems and partly because new algorithms made them more effective. The design of neural networks is inspired by the structure of the brain. Genetic algorithms aim to "evolve" solutions to problems by iteratively generating candidate solutions, culling the weakest, and introducing new solution variants through random mutation.
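To make that generate-cull-mutate loop concrete, here is a minimal Python sketch. The target bit string, population size, and mutation rate are illustrative choices, not drawn from any particular system described in this article.

```python
import random

# A minimal genetic algorithm sketch: evolve a bit string toward a target.
TARGET = [1] * 20

def fitness(candidate):
    """Score a candidate by how many bits match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Flip each bit with a small probability (random mutation)."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]

# Start with a population of random candidate solutions.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(100):
    # Cull the weakest: keep only the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]
    # Introduce new variants by mutating copies of the survivors.
    population = survivors + [mutate(s) for s in survivors]
    if fitness(population[0]) == len(TARGET):
        break

print(f"Best solution after {generation + 1} generations:", population[0])
```

Real genetic algorithms typically add crossover between candidate solutions and more sophisticated selection schemes, but the loop above captures the essential idea of generating, culling, and mutating candidates.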

By the late 2000s, a number of factors helped renew progress in AI, particularly in a few key technologies. We explain the factors most responsible for the recent progress below and then describe those technologies in more detail.

Moore's Law. The relentless increase in computing power available at a given price and size, sometimes known as Moore's Law after Intel cofounder Gordon Moore, has benefited all forms of computing, including the types AI researchers use. Advanced system designs that might have worked in principle were in practice off limits just a few years ago because they required computer power that was cost-prohibitive or just didn't exist. Today, the power necessary to implement these designs is readily available. A dramatic illustration: The current generation of microprocessors delivers 4 million times the performance of the first single-chip microprocessor introduced in 1971.20

Big data. Thanks in part to the Internet, social media, mobile devices, and low-cost sensors, the volume of data in the world is increasing rapidly.21 Growing understanding of the potential value of this data22 has led to the development of new techniques for managing and analyzing very large data sets.23 Big data has been a boon to the development of AI. The reason is that some AI techniques use statistical models for reasoning probabilistically about data such as images, text, or speech. These models can be improved, or trained, by exposing them to large sets of data, which are now more readily available than ever.24

The Internet and the cloud. Closely related to the big data phenomenon, the Internet and cloud computing can be credited with advances in AI for two reasons. First, they make available vast amounts of data and information to any Internet-connected computing device. This has helped propel work on AI approaches that require large data sets.25 Second, they have provided a way for humans to collaborate, sometimes explicitly and at other times implicitly, in helping to train AI systems. For example, some researchers have used cloud-based crowdsourcing services like Mechanical Turk to enlist thousands of humans to describe digital images, enabling image classification algorithms to learn from these descriptions.26 Google's language translation project analyzes feedback and freely offered contributions from its users to improve the quality of automated translation.27

New algorithms. An algorithm is a routine process for solving a problem or performing a task. In recent years, new algorithms have been developed that dramatically improve the performance of machine learning, an important technology in its own right and an enabler of other technologies such as computer vision.28 (These technologies are described below.) The fact that machine learning algorithms are now available on an open-source basis is likely to foster further improvements as developers contribute enhancements to each other's work.29

We distinguish between the field of AI and the technologies that emanate from the field. The popular press portrays AI as the advent of computers as smart as, or smarter than, humans. The individual technologies, by contrast, are getting better at performing specific tasks that only humans used to be able to do. We call these cognitive technologies (figure 1), and it is these that business and public sector leaders should focus their attention on. Below we describe some of the most important cognitive technologies: those that are seeing wide adoption, making rapid progress, or receiving significant investment.

Computer vision refers to the ability of computers to identify objects, scenes, and activities in images. Computer vision technology uses sequences of image-processing operations and other techniques to decompose the task of analyzing images into manageable pieces. There are techniques for detecting the edges and textures of objects in an image, for instance. Classification techniques may be used to determine if the features identified in an image are likely to represent a kind of object already known to the system.30
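As an illustration of one such image-processing step, the short Python sketch below detects edges in a tiny synthetic image by convolving it with a gradient kernel. The image, kernel, and helper function are purely illustrative and are not taken from any particular vision library.

```python
import numpy as np

# A synthetic 5x5 "image": dark pixels on the left, bright pixels on the right.
image = np.array([
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
], dtype=float)

# A simple horizontal-gradient (Sobel-style) kernel: large responses mark
# places where intensity jumps from dark to bright, i.e., vertical edges.
kernel = np.array([
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
], dtype=float)

def convolve2d(img, k):
    """Slide the kernel over the image and sum the element-wise products."""
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * k)
    return out

edges = convolve2d(image, kernel)
print(edges)  # Strong responses appear along the dark-to-bright boundary.
```

Operations like this break an image into low-level features (edges, textures) that later classification stages can reason about.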

Computer vision has diverse applications, including analyzing medical imaging to improve prediction, diagnosis, and treatment of diseases;31 face recognition, used by Facebook to automatically identify people in photographs32 and in security and surveillance to spot suspects;33 and in shopping: consumers can now use smartphones to photograph products and be presented with options for purchasing them.34

Machine vision, a related discipline, generally refers to vision applications in industrial automation, where computers recognize objects such as manufactured parts in a highly constrained factory environment, a rather simpler task than the goals of computer vision, which seeks to operate in unconstrained environments. While computer vision is an area of ongoing computer science research, machine vision is a "solved problem," the subject not of research but of systems engineering.35 Because the range of applications for computer vision is expanding, startup companies working in this area have attracted hundreds of millions of dollars in venture capital investment since 2011.36

Machine learning refers to the ability of computer systems to improve their performance by exposure to data without the need to follow explicitly programmed instructions. At its core, machine learning is the process of automatically discovering patterns in data. Once discovered, the pattern can be used to make predictions. For instance, presented with a database of information about credit card transactions, such as date, time, merchant, merchant location, price, and whether the transaction was legitimate or fraudulent, a machine learning system learns patterns that are predictive of fraud. The more transaction data it processes, the better its predictions are expected to become.
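The sketch below illustrates that idea using scikit-learn, a widely used open-source machine learning library. The transaction features, labels, and choice of model are all synthetic and illustrative; a real fraud-screening system would use far richer data and far more careful evaluation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row is a synthetic transaction: [amount, hour of day, distance from
# home in km]; the label is 1 for fraudulent, 0 for legitimate.
rng = np.random.default_rng(0)
legit = np.column_stack([rng.normal(40, 15, 500),    # modest amounts
                         rng.normal(14, 4, 500),     # daytime hours
                         rng.normal(5, 3, 500)])     # close to home
fraud = np.column_stack([rng.normal(300, 80, 50),    # large amounts
                         rng.normal(3, 2, 50),       # late at night
                         rng.normal(400, 100, 50)])  # far from home
X = np.vstack([legit, fraud])
y = np.array([0] * 500 + [1] * 50)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# The model discovers patterns that distinguish the two classes from data,
# rather than following explicitly programmed rules.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# Score a new, suspicious-looking transaction.
print("Fraud probability:", model.predict_proba([[250, 2, 500]])[0][1])
```

The key point is that nothing in the code spells out what fraud looks like; the patterns are learned from the labeled examples, and more data generally yields better predictions.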

Applications of machine learning are very broad, with the potential to improve performance in nearly any activity that generates large amounts of data. Besides fraud screening, these include sales forecasting, inventory management, oil and gas exploration, and public health. Machine learning techniques often play a role in other cognitive technologies such as computer vision, which can train vision models on a large database of images to improve their ability to recognize classes of objects.37 Machine learning is one of the hottest areas in cognitive technologies today, having attracted around a billion dollars in venture capital investment between 2011 and mid-2014.38 Google is said to have invested some $400 million to acquire DeepMind, a machine learning company, in 2014.39

Natural language processing refers to the ability of computers to work with text the way humans do: for instance, extracting meaning from text or even generating text that is readable, stylistically natural, and grammatically correct. A natural language processing system doesn't understand text the way humans do, but it can manipulate text in sophisticated ways, such as automatically identifying all of the people and places mentioned in a document; identifying the main topic of a document; or extracting and tabulating the terms and conditions in a stack of human-readable contracts. None of these tasks is possible with traditional text processing software that operates on simple text matches and patterns. Consider a single hackneyed example that illustrates one of the challenges of natural language processing. The meaning of each word in the sentence "Time flies like an arrow" seems clear, until you encounter the sentence "Fruit flies like a banana." Substituting "fruit" for "time" and "banana" for "arrow" changes the meaning of the words "flies" and "like."40

Natural language processing, like computer vision, comprises multiple techniques that may be used together to achieve its goals. Language models are used to predict the probability distribution of language expressions: the likelihood that a given string of characters or words is a valid part of a language, for instance. Feature selection may be used to identify the elements of a piece of text that may distinguish one kind of text from another, say a spam email versus a legitimate one. Classification, powered by machine learning, would then operate on the extracted features to classify a message as spam or not.41
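Here is a minimal Python sketch of that feature-extraction-plus-classification pipeline, again using scikit-learn. The training messages and labels are invented for illustration; a production spam filter would learn from millions of messages and many more features.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A handful of made-up training messages and their labels.
messages = [
    "Win a free prize now, click here",
    "Limited offer, claim your free reward today",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the quarterly report before Friday?",
]
labels = ["spam", "spam", "legitimate", "legitimate"]

# CountVectorizer turns each message into word-count features; the Naive
# Bayes classifier then learns which words are more probable in spam.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free prize today"]))   # likely "spam"
print(model.predict(["Agenda for Friday's meeting"]))   # likely "legitimate"
```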

Because context is so important for understanding why "time flies" and "fruit flies" are so different, practical applications of natural language processing often address relatively narrow domains such as analyzing customer feedback about a particular product or service,42 automating discovery in civil litigation or government investigations (e-discovery),43 and automating writing of formulaic stories on topics such as corporate earnings or sports.44

Robotics, by integrating cognitive technologies such as computer vision and automated planning with tiny, high-performance sensors, actuators, and cleverly designed hardware, has given rise to a new generation of robots that can work alongside people and flexibly perform many different tasks in unpredictable environments.45 Examples include unmanned aerial vehicles,46 "cobots" that share jobs with humans on the factory floor,47 robotic vacuum cleaners,48 and a slew of consumer products, from toys to home helpers.49

Speech recognition focuses on automatically and accurately transcribing human speech. The technology has to contend with some of the same challenges as natural language processing, in addition to the difficulties of coping with diverse accents and background noise, distinguishing between homophones ("buy" and "by" sound the same), and the need to work at the speed of natural speech. Speech recognition systems use some of the same techniques as natural language processing systems, plus others such as acoustic models that describe sounds and their probability of occurring in a given sequence in a given language.50 Applications include medical dictation, hands-free writing, voice control of computer systems, and telephone customer service applications. Domino's Pizza recently introduced a mobile app that allows customers to use natural speech to order, for instance.51
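To show how a language model helps resolve homophones that sound identical, here is a toy Python sketch. The bigram counts are invented for illustration; real recognizers estimate such statistics from very large text corpora and combine them with acoustic model scores.

```python
# Word-pair ("bigram") counts, invented here for illustration only.
bigram_counts = {
    ("stand", "by"): 40, ("stand", "buy"): 1,
    ("to", "buy"): 55,   ("to", "by"): 5,
}

# Spellings the acoustic model considers equally plausible for the same sound.
homophones = {"by": ["by", "buy"], "buy": ["by", "buy"]}

def choose_word(previous_word, heard_word):
    """Pick the candidate spelling most probable after previous_word."""
    candidates = homophones.get(heard_word, [heard_word])
    return max(candidates,
               key=lambda w: bigram_counts.get((previous_word, w), 0))

print(choose_word("stand", "buy"))  # -> "by"  ("stand by" is more common)
print(choose_word("to", "by"))      # -> "buy" ("to buy" is more common)
```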

As noted, the cognitive technologies above are making rapid progress and attracting significant investment. Other cognitive technologies are relatively mature and can still be important components of enterprise software systems. These more mature cognitive technologies include optimization, which automates complex decisions and trade-offs about limited resources;52 planning and scheduling, which entails devising a sequence of actions to meet goals and observe constraints;53 and rules-based systems, the technology underlying expert systems, which use databases of knowledge and rules to automate the process of making inferences about information.54
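As a concrete illustration of that last category, the Python sketch below applies a tiny set of if-then rules to a knowledge base of facts using forward chaining. The loan-screening facts and rules are purely illustrative, not drawn from any specific expert system.

```python
# Known facts about a (hypothetical) loan applicant.
facts = {"income_verified", "credit_score_high"}

# If-then rules: when all conditions hold, the conclusion can be inferred.
rules = [
    ({"income_verified", "credit_score_high"}, "low_risk"),
    ({"low_risk"}, "approve_loan"),
]

# Forward chaining: repeatedly apply any rule whose conditions are satisfied,
# adding its conclusion to the known facts, until nothing new can be inferred.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("approve_loan" in facts)  # True: the conclusion follows from the facts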

Organizations in every sector of the economy are already using cognitive technologies in diverse business functions.

In banking, automated fraud detection systems use machine learning to identify behavior patterns that could indicate fraudulent payment activity; speech recognition technology automates customer service telephone interactions; and voice recognition technology verifies the identity of callers.55

In health care, automatic speech recognition for transcribing notes dictated by physicians is used in around half of US hospitals, and its use is growing rapidly.56 Computer vision systems automate the analysis of mammograms and other medical images.57 IBM's Watson uses natural language processing to read and understand a vast medical literature, hypothesis generation techniques to automate diagnosis, and machine learning to improve its accuracy.58

In life sciences, machine learning systems are being used to predict cause-and-effect relationships from biological data59 and the activities of compounds,60 helping pharmaceutical companies identify promising drugs.61

In media and entertainment, a number of companies are using data analytics and natural language generation technology to automatically draft articles and other narrative material about data-focused topics such as corporate earnings or sports game summaries.62

Oil and gas producers use machine learning in a wide range of applications, from locating mineral deposits63 to diagnosing mechanical problems with drilling equipment.64

The public sector is adopting cognitive technologies for a variety of purposes including surveillance, compliance and fraud detection, and automation. The state of Georgia, for instance, employs a system combining automated handwriting recognition with crowdsourced human assistance to digitize financial disclosure and campaign contribution forms.65

Retailers use machine learning to automatically discover attractive cross-sell offers and effective promotions.66

Technology companies are using cognitive technologies such as computer vision and machine learning to enhance products or create entirely new product categories, such as the Roomba robotic vacuum cleaner67 or the Nest intelligent thermostat.68

As the examples above show, the potential business benefits of cognitive technologies are much broader than the cost savings that may be implied by the term "automation."

The impact of cognitive technologies on business should grow significantly over the next five years. This is due to two factors. First, the performance of these technologies has improved substantially in recent years, and we can expect continuing R&D efforts to extend this progress. Second, billions of dollars have been invested to commercialize these technologies. Many companies are working to tailor and package cognitive technologies for a range of sectors and business functions, making them easier to buy and easier to deploy. While not all of these vendors will thrive, their activities should collectively drive the market forward. Together, improvements in performance and commercialization are expanding the range of applications for cognitive technologies and will likely continue to do so over the next several years (figure 2).

Examples of the strides made by cognitive technologies are easy to find. The accuracy of Google's voice recognition technology, for instance, improved from 84 percent in 2012 to 98 percent less than two years later, according to one assessment.69 Computer vision has progressed rapidly as well. A standard benchmark used by computer vision researchers has shown a fourfold improvement in image classification accuracy from 2010 to 2014.70 Facebook reported in a peer-reviewed paper that its DeepFace technology can now recognize faces with 97 percent accuracy.71 IBM was able to double the precision of Watson's answers in the few years leading up to its famous Jeopardy! victory in 2011.72 The company now reports its technology is 2,400 percent "smarter" today than on the day of that triumph.73

As performance improves, the applicability of a technology broadens. For instance, when voice recognition systems required painstaking training and could only work well with controlled vocabularies, they found application in specialized areas such as medical dictation but did not gain wide adoption. Today, tens of millions of Web searches are performed by voice every month.74 Computer vision systems used to be confined to industrial automation applications but now, as weve seen, are used in surveillance, security, and numerous consumer applications. IBM is now seeking to apply Watson to a broad range of domains outside of game-playing, from medical diagnostics to research to financial advice to call center automation.75

Not all cognitive technologies are seeing such rapid improvement. Machine translation has progressed, but at a slower pace. One benchmark found a 13 percent improvement in the accuracy of Arabic to English translations between 2009 and 2012, for instance.76 Even if these technologies are imperfect, they can be good enough to have a big impact on the work organizations do. Professional translators regularly rely on machine translation, for instance, to improve their efficiency, automating routine translation tasks so they can focus on the challenging ones.77

From 2011 through May 2014, over $2 billion in venture capital funding flowed to companies building products and services based on cognitive technologies.78 During this same period, over 100 companies merged or were acquired, some by technology giants such as Amazon, Apple, IBM, Facebook, and Google.79 All of this investment has nurtured a diverse landscape of companies that are commercializing cognitive technologies.

This is not the place for a detailed analysis of the vendor landscape. Rather, we want to illustrate the diversity of offerings, since this diversity is an indicator of the dynamism that may help propel and develop the market. The following list of cognitive technology vendor categories, while neither exhaustive nor mutually exclusive, gives a sense of this.

Data management and analytical tools that employ cognitive technologies such as natural language processing and machine learning. These tools use natural language processing technology to help extract insights from unstructured text or machine learning to help analysts uncover insights from large datasets. Examples in this category include Context Relevant, Palantir Technologies, and Skytree.

Cognitive technology components that can be embedded into applications or business processes to add features or improve effectiveness. Wise.io, for instance, offers a set of modules that aim to improve processes such as customer support, marketing, and sales with machine-learning models that predict which customers are most likely to churn or which sales leads are most likely to convert to customers.80 Nuance provides speech recognition technology that developers can use to speech-enable mobile applications.81

Point solutions. A sign of the maturation of some cognitive technologies is that they are increasingly embedded in solutions to specific business problems. These solutions are designed to outperform existing offerings in their categories while requiring little expertise in cognitive technologies. Popular application areas include advertising,82 marketing and sales automation,83 and forecasting and planning.84

Platforms. Platforms are intended to provide a foundation for building highly customized business solutions. They may offer a suite of capabilities including data management, tools for machine learning, natural language processing, knowledge representation and reasoning, and a framework for integrating these pieces with custom software. Some of the vendors mentioned above can serve as platforms of sorts. IBM is offering Watson as a cloud-based platform.85

If current trends in performance and commercialization continue, we can expect the applications of cognitive technologies to broaden and adoption to grow. The billions of investment dollars that have flowed to hundreds of companies building products based on machine learning, natural language processing, computer vision, or robotics suggest that many new applications are on their way to market. We also see ample opportunity for organizations to take advantage of cognitive technologies to automate business processes and enhance their products and services.86

Cognitive technologies will likely become pervasive in the years ahead. Technological progress and commercialization should expand the impact of cognitive technologies on organizations over the next three to five years and beyond. A growing number of organizations will likely find compelling uses for these technologies; leading organizations may find innovative applications that dramatically improve their performance or create new capabilities, enhancing their competitive position. IT organizations can start today, developing awareness of these technologies, evaluating opportunities to pilot them, and presenting leaders in their organizations with options for creating value with them. Senior business and public sector leaders should reflect on how cognitive technologies will affect their sector and their own organization and how these technologies can foster innovation and improve operating performance.

Read more on cognitive technologies in "Cognitive technologies: The real opportunities for business."

Deloitte Consulting LLP's Enterprise Science offering employs data science, cognitive technologies such as machine learning, and advanced algorithms to create high-value solutions for clients. Services include cognitive automation, which uses cognitive technologies such as natural language processing to automate knowledge-intensive processes; cognitive engagement, which applies machine learning and advanced analytics to make customer interactions dramatically more personalized, relevant, and profitable; and cognitive insight, which employs data science and machine learning to detect critical patterns, make high-quality predictions, and support business performance. For more information about the Enterprise Science offering, contact Plamen Petrov (ppetrov@deloitte.com) or Rajeev Ronanki (rronanki@deloitte.com).

The authors would like to acknowledge the contributions of Mark Cotteleer of Deloitte Services LP; Plamen Petrov, Rajeev Ronanki, and David Steier of Deloitte Consulting LLP; and Shankar Lakshman, Laveen Jethani, and Divya Ravichandran of Deloitte Support Services India Pvt Ltd.
