

Artificial intelligence – Wikipedia

Artificial intelligence (AI, also machine intelligence, MI) is intelligence exhibited by machines, rather than humans or other animals (natural intelligence, NI). In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The scope of AI is disputed: as machines become increasingly capable, tasks considered to require “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[4] Capabilities generally classified as AI, as of 2017[update], include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

AI research is divided into subfields[7] that focus on specific problems, approaches, the use of a particular tool, or towards satisfying particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.[8] General intelligence is among the field’s long-term goals.[9] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, logic, and methods based on probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, artificial psychology and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[10] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[11] Some people also consider AI a danger to humanity if it progresses unabated.[12] Attempts to create artificial intelligence have experienced many setbacks, including the ALPAC report of 1966, the abandonment of perceptrons in 1970, the Lighthill Report of 1973, the second AI winter of 1987–1993 and the collapse of the Lisp machine market in 1987.

In the twenty-first century, AI techniques, both hard (using a symbolic approach) and soft (sub-symbolic), have experienced a resurgence following concurrent advances in computer power, sizes of training sets, and theoretical understanding, and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[13] Recent advancements in AI, and specifically in machine learning, have contributed to the growth of Autonomous Things such as drones and self-driving cars, becoming the main driver of innovation in the automotive industry.

While thought-capable artificial beings appeared as storytelling devices in antiquity,[14] the idea of actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull (c. 1300 CE). With his Calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine (Wilhelm Schickard engineered the first one around 1623), intending to perform operations on concepts rather than numbers. Since the 19th century, artificial beings have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[16]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[17][page needed] Along with concurrent discoveries in neurology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain.[18] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[19] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[20] They and their students produced programs that the press described as “astonishing”: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English.[22] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[23] and laboratories had been established around the world.[24] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[25]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[27] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[28] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[29] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[30]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[13] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields and a commitment by researchers to mathematical methods and scientific standards.[31] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

Advanced statistical techniques (loosely known as deep learning), access to large amounts of data and faster computers enabled advances in machine learning and perception.[33] By the mid 2010s, machine learning applications were used throughout the world.[34] In a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[36] as do intelligent personal assistants in smartphones.[37] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][38] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[39] who at the time had continuously held the world No. 1 ranking for two years.[40][41]

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[42] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[42]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[8]

Erik Sandwell emphasizes planning and learning that is relevant and applicable to the given situation.[43]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[44] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[45]

For difficult problems, algorithms can require enormous computational resources; most experience a “combinatorial explosion”: the amount of memory or computer time required becomes astronomical for problems of a certain size. The search for more efficient problem-solving algorithms is a high priority.[46]

Human beings ordinarily use fast, intuitive judgments rather than the step-by-step deduction that early AI research was able to model.[47] AI has progressed using “sub-symbolic” problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the human ability to guess.

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[48] The most general ontologies are called upper ontologies, which act as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations are suitable for content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery via automated reasoning (inferring new statements based on explicitly stated knowledge), etc. Video events are often represented as SWRL rules, which can be used, among other things, to automatically generate subtitles for constrained videos.[49]
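To make the idea of representing objects, relations and inferred knowledge concrete, here is a minimal Python sketch. It is not OWL, description logic or SWRL, just subject-predicate-object triples with one hand-written inference rule; all facts and names are invented for illustration.

```python
# A minimal sketch of knowledge representation as subject-predicate-object
# triples, with one hand-written inference rule. Illustrative only; not OWL,
# description logic, or SWRL.

facts = {
    ("Tweety", "is_a", "canary"),
    ("canary", "subclass_of", "bird"),
    ("bird", "has_property", "can_fly"),
}

def infer_types(kb):
    """Propagate is_a membership up the subclass_of hierarchy."""
    inferred = set(kb)
    changed = True
    while changed:
        changed = False
        for (x, r1, c) in list(inferred):
            for (c2, r2, parent) in list(inferred):
                if r1 == "is_a" and r2 == "subclass_of" and c == c2:
                    new_fact = (x, "is_a", parent)
                    if new_fact not in inferred:
                        inferred.add(new_fact)
                        changed = True
    return inferred

kb = infer_types(facts)
print(("Tweety", "is_a", "bird") in kb)  # True: inferred, not stated explicitly
```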

Among the most difficult problems in knowledge representation are:

Intelligent agents must be able to set goals and achieve them.[56] They need a way to visualize the future (a representation of the state of the world, with the ability to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.[57]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[58] However, if the agent is not the only actor, then it must be able to reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[59]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[60]

Machine learning, a fundamental concept of AI research since the field’s inception,[61] is the study of computer algorithms that improve automatically through experience.[62][63]

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[64] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[65]
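As a concrete illustration of the supervised settings described above, here is a small Python/NumPy sketch: classification assigns a category to a new observation, and regression fits a function from inputs to outputs. The data and the specific methods (nearest centroid, least squares) are illustrative choices, not a description of any particular system.

```python
# A small sketch of supervised learning with made-up data: classification
# (assign a category) and regression (fit a function from inputs to outputs).
import numpy as np

# --- Classification: nearest-centroid on two labelled clusters ---
class_a = np.array([[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]])   # examples of class "a"
class_b = np.array([[3.0, 3.1], [2.9, 3.3], [3.2, 2.8]])   # examples of class "b"
centroids = {"a": class_a.mean(axis=0), "b": class_b.mean(axis=0)}

def classify(point):
    # Pick the class whose centroid is closest to the new observation.
    return min(centroids, key=lambda c: np.linalg.norm(point - centroids[c]))

print(classify(np.array([2.9, 3.0])))            # -> "b"

# --- Regression: least-squares fit of y = w*x + b ---
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])          # roughly y = 2x + 1 with noise
A = np.vstack([x, np.ones_like(x)]).T
w, b = np.linalg.lstsq(A, y, rcond=None)[0]      # closed-form least squares
print(round(w, 2), round(b, 2))                  # slope and intercept, near 2 and 1
```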

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[66][67]

Natural language processing[70] gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[71] and machine translation.[72]

A common method of processing and extracting meaning from natural language is through semantic indexing. Although these indexes require a large volume of user input, it is expected that increases in processor speeds and decreases in data storage costs will result in greater efficiency.
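One way to sketch semantic indexing is latent semantic analysis: documents become TF-IDF vectors and are projected into a low-dimensional “concept” space, where documents about the same topic tend to sit close together. The snippet below assumes scikit-learn is installed and uses invented toy documents.

```python
# A rough sketch of one form of semantic indexing (latent semantic analysis).
# Assumes scikit-learn is available; the toy documents are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the robot used sensors to navigate the room",
    "a mobile robot builds a map of its environment",
    "the stock market fell sharply after the report",
    "investors sold shares as the market dropped",
]

tfidf = TfidfVectorizer().fit_transform(docs)                  # term-document matrix
concepts = TruncatedSVD(n_components=2).fit_transform(tfidf)   # low-rank "semantic" space

# Documents 0 and 1 (robots) should be more similar to each other than to 2 and 3.
print(cosine_similarity(concepts[0:1], concepts[1:2]))
print(cosine_similarity(concepts[0:1], concepts[2:3]))
```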

Machine perception[73] is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others) to deduce aspects of the world. Computer vision[74] is the ability to analyze visual input. A few selected subproblems are speech recognition,[75]facial recognition and object recognition.[76]

The field of robotics[77] is closely related to AI. Intelligence is required for robots to handle tasks such as object manipulation[78] and navigation, with sub-problems such as localization, mapping, and motion planning. These systems require that an agent be spatially cognizant of its surroundings, learn from and build a map of its environment, figure out how to get from one point in space to another, and execute that movement (which often involves compliant motion, a process where movement requires maintaining physical contact with an object).[80]

Affective computing is the study and development of systems that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as the early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard’s 1995 paper on “affective computing”.[87][88] A motivation for the research is the ability to simulate empathy, where the machine would be able to interpret human emotions and adapts its behavior to give an appropriate response to those emotions.

Emotion and social skills[89] are important to an intelligent agent for two reasons. First, being able to predict the actions of others by understanding their motives and emotional states allows an agent to make better decisions. Concepts such as game theory and decision theory necessitate that an agent be able to detect and model human emotions. Second, in an effort to facilitate human-computer interaction, an intelligent machine may want to display emotions (even if it does not experience those emotions itself) to appear more sensitive to the emotional dynamics of human interaction.

A sub-field of AI addresses creativity both theoretically (from philosophical and psychological perspectives) and practically (through the specific implementation of systems that generate novel and useful outputs).

Many researchers think that their work will eventually be incorporated into a machine with artificial general intelligence, combining all the skills mentioned above and even exceeding human ability in most or all these areas.[9][90] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[91][92]

Many of the problems above also require that general intelligence be solved. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete” because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[93] A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[94] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[95] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require “sub-symbolic” processing?[96] John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence,[97] a term which has since been adopted by some non-GOFAI researchers.

Stuart Shapiro divides AI research into three approaches, which he calls computational psychology, computational philosophy, and computer science. Computational psychology is used to make computer programs that mimic human behavior.[100] Computational philosophy is used to develop an adaptive, free-flowing computer mind.[100] Implementing computer science serves the goal of creating computers that can perform tasks that only people could previously accomplish.[100] Together, the humanesque behavior, mind, and actions make up artificial intelligence.

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[18] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI “good old fashioned AI” or “GOFAI”.[101] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[102] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[103][104]

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[94] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[105] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[106]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[107] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[95] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[108]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[109] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[28] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[96] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[110] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[111] Neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[112]

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI’s recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a “revolution” and “the victory of the neats”.[31] Critics argue that these techniques (with few exceptions) are too focused on particular problems and have failed to address the long-term goal of general intelligence. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.

In the course of 60+ years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[120] reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[121] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[122] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[78] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[123] are rarely sufficient for most real world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that eliminate choices that are unlikely to lead to the goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[124] Heuristics limit the search for solutions to a smaller portion of the search space.
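A compact illustration of heuristic search is A* on a small grid, where the Manhattan distance to the goal is the “best guess” that steers the search away from unpromising paths. The grid, coordinates and step costs below are invented for illustration.

```python
# A* search on a toy grid: g is the cost so far, h is the Manhattan-distance
# heuristic, and the priority queue always expands the node with lowest g + h.
import heapq

GRID = [  # 0 = free cell, 1 = obstacle
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def a_star(start, goal):
    def h(cell):  # heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, cell, path)
    seen = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None

print(a_star((0, 0), (3, 3)))  # one shortest obstacle-avoiding path
```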

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[125]
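The hill-climbing idea can be sketched in a few lines: start from a random guess, keep any small step that improves the objective, and restart a few times to reduce the chance of stopping on a minor local peak. The objective function and parameters below are arbitrary choices for illustration.

```python
# Toy hill climbing with random restarts on an invented bumpy objective.
import math
import random

def objective(x):
    # An arbitrary function with a global peak near x = 3 and smaller local peaks.
    return -(x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)

def hill_climb(steps=1000, step_size=0.05):
    x = random.uniform(-10.0, 10.0)               # start at a random point on the "landscape"
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):   # only move uphill
            x = candidate
    return x

# Random restarts: keep the best of several independent climbs.
best = max((hill_climb() for _ in range(10)), key=objective)
print(round(best, 2), round(objective(best), 2))
```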

Evolutionary computation uses a form of optimization search. For example, an evolutionary algorithm may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[126] and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).[127]
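A toy genetic algorithm makes the mutate, recombine and select loop concrete. The bit-string encoding, fitness function and parameters below are invented for illustration only.

```python
# Minimal genetic algorithm: evolve a bit string toward a hidden target pattern.
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]          # hidden pattern the GA must discover

def fitness(individual):
    return sum(1 for a, b in zip(individual, TARGET) if a == b)

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)           # single-point recombination
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in ind]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)    # rank by fitness
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                   # selection: keep the fittest third
    population = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(20)
    ]

print(generation, population[0], fitness(population[0]))
```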

Logic[128] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[129] and inductive logic programming is a method for learning.[130]

Several different forms of logic are used in AI research. Propositional or sentential logic[131] is the logic of statements which can be true or false. First-order logic[132] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[133] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[134] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
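The fuzzy-logic idea of graded truth can be sketched with truth values between 0 and 1 and the commonly used min/max/complement connectives. The “warm” and “humid” membership functions below are invented for illustration.

```python
# Fuzzy truth values: degrees of truth in [0, 1] combined with min, max, complement.
def warm(temp_c):
    # Degree to which a temperature counts as "warm" (piecewise linear ramp).
    return max(0.0, min(1.0, (temp_c - 15.0) / 10.0))

def humid(rel_humidity):
    return max(0.0, min(1.0, (rel_humidity - 40.0) / 40.0))

def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

t, h = warm(22.0), humid(65.0)          # 0.7 warm, 0.625 humid
print(fuzzy_and(t, h))                  # "warm AND humid" -> 0.625
print(fuzzy_or(t, fuzzy_not(h)))        # "warm OR not humid" -> 0.7
```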

Default logics, non-monotonic logics and circumscription[51] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[135]situation calculus, event calculus and fluent calculus (for representing events and time);[136]causal calculus;[137] belief calculus;[138] and modal logics.[139]

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[140]

Bayesian networks[141] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[142]learning (using the expectation-maximization algorithm),[143]planning (using decision networks)[144] and perception (using dynamic Bayesian networks).[145] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[145]
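A tiny example of Bayesian reasoning is exact inference by enumeration in a three-variable network, here with Rain and Sprinkler both influencing WetGrass. The probability tables are invented; the query asks how likely rain is given that the grass is observed to be wet.

```python
# Exact inference by enumerating the joint distribution of a tiny Bayesian network.
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.3, False: 0.7}
P_wet = {  # P(WetGrass = True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1.0 - p_w)

# P(Rain = True | WetGrass = True): sum out the hidden Sprinkler variable, then normalize.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(round(num / den, 3))
```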

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[146] and information value theory.[57] These tools include models such as Markov decision processes,[147] dynamic decision networks,[145]game theory and mechanism design.[148]
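Markov decision processes can be made concrete with value iteration: expected utilities are repeatedly backed up until they converge, and the resulting values determine a policy. The states, actions, transition probabilities and rewards below are all invented.

```python
# Value iteration for a toy Markov decision process.
STATES = ["low", "high"]
ACTIONS = ["wait", "work"]
GAMMA = 0.9

# P[(state, action)] -> list of (probability, next_state, reward)
P = {
    ("low", "wait"):  [(1.0, "low", 0.0)],
    ("low", "work"):  [(0.6, "high", 5.0), (0.4, "low", -1.0)],
    ("high", "wait"): [(1.0, "high", 1.0)],
    ("high", "work"): [(0.8, "high", 4.0), (0.2, "low", -2.0)],
}

V = {s: 0.0 for s in STATES}
for _ in range(100):                       # enough sweeps to converge on this toy problem
    V = {
        s: max(
            sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[(s, a)])
            for a in ACTIONS
        )
        for s in STATES
    }

# Extract the greedy policy implied by the converged values.
policy = {
    s: max(ACTIONS, key=lambda a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[(s, a)]))
    for s in STATES
}
print(V, policy)
```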

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[149]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network,[150] kernel methods such as the support vector machine,[151] k-nearest neighbor algorithm,[152] Gaussian mixture model,[153] naive Bayes classifier,[154] and decision tree.[155] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the “no free lunch” theorem. Determining a suitable classifier for a given problem is still more an art than a science.[156]
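The point that no single classifier dominates can be illustrated by fitting a few of the classifiers named above on a small benchmark. The sketch below assumes scikit-learn is available and uses its bundled Iris data; the accuracies will vary with the train/test split.

```python
# Comparing several standard classifiers on a small benchmark dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "k-nearest neighbours": KNeighborsClassifier(n_neighbors=5),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "support vector machine": SVC(kernel="rbf"),
}

for name, model in models.items():
    model.fit(X_train, y_train)                      # tune the classifier on labelled examples
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.2f}")
```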

The study of non-learning artificial neural networks[150] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[157] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning, GMDH or competitive learning.[158]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[159][160] and was introduced to neural networks by Paul Werbos.[161][162][163]
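A bare-bones backpropagation example is a two-layer network trained on XOR with plain gradient descent. The layer sizes, learning rate and iteration count below are arbitrary choices for illustration, not a description of any system mentioned in this article.

```python
# Two-layer sigmoid network trained on XOR by backpropagating the error signal.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)           # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)            # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)            # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: error signals for the output and hidden layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update (learning rate 0.5)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # typically approaches [0, 1, 1, 0]
```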

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[164]

Deep learning in artificial neural networks with many layers has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[165][166][167]

According to a survey,[168] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[169] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[170] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[171][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[172] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[174]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[175] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[176] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[167]

Deep feedforward neural networks were used in conjunction with reinforcement learning by AlphaGo, Google Deepmind’s program that was the first to beat a professional human Go player.[177]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[178] which are general computers and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence.[167] RNNs can be trained by gradient descent[179][180][181] but suffer from the vanishing gradient problem.[165][182] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[183]

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[184] LSTM is often trained by Connectionist Temporal Classification (CTC).[185] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[186][187][188] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[189] Google also used LSTM to improve machine translation,[190] Language Modeling[191] and Multilingual Language Processing.[192] LSTM combined with CNNs also improved automatic image captioning[193] and a plethora of other applications.

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.[194]

AI researchers have developed several specialized languages for AI research, including Lisp[195] and Prolog.[196]

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[197]

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[198]

For example, performance at draughts (i.e. checkers) is optimal,[199] performance at chess is high-human and nearing super-human (see computer chess: computers versus human) and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.

A quite different approach measures machine intelligence through tests which are developed from mathematical definitions of intelligence. Examples of these kinds of tests began in the late 1990s, devising intelligence tests using notions from Kolmogorov complexity and data compression.[200] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.

A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, a CAPTCHA is administered by a machine and targeted at a human, as opposed to being administered by a human and targeted at a machine. A computer asks a user to complete a simple test and then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[204] and targeting online advertisements.[205][206]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[207] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[208]

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[209] A great amount of research and drug development relates to cancer: there are more than 800 medicines and vaccines to treat it. This burdens doctors, because there are too many options to choose from, making it harder to select the right drugs for a patient. Microsoft is working on a project to develop a machine called “Hanover”. Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently underway targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reported that artificial intelligence was as good as trained doctors in identifying skin cancers.[210] A further study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[211]

According to CNN, there was a recent study by surgeons at the Children’s National Medical Center in Washington which successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[212]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, over 30 companies were using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[213]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high-performance computers, are integrated into one complex vehicle.[214]

One main factor that influences the ability of a driverless car to function is mapping. In general, the vehicle would be pre-programmed with a map of the area being driven. This map would include data on the approximations of street light and curb heights in order for the vehicle to be aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead creating a device that would be able to adjust to a variety of new surroundings.[215] Some self-driving cars are not equipped with steering wheels or brakes, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[216]

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.

Use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the USA set up a Fraud Prevention Task Force to counter the unauthorised use of debit cards. Apps like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[217] In August 2001, robots beat humans in a simulated financial trading competition.[218]

AI has also reduced fraud and crime by monitoring behavioral patterns of users for any changes or anomalies.[219]

Artificial intelligence is used to generate intelligent behaviors primarily in non-player characters (NPCs), often simulating human-like intelligence.[220]

A platform (or “computing platform”) is defined as “some sort of hardware architecture or software framework (including application frameworks), that allows software to run”. As Rodney Brooks pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results, i.e., there needs to be work in AI problems on real-world platforms rather than in isolation.

A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems such as Cyc to deep-learning frameworks to robot platforms such as the Roomba with open interface.[222] Recent advances in deep artificial neural networks and distributed computing have led to a proliferation of software libraries, including Deeplearning4j, TensorFlow, Theano and Torch.

Collective AI is a platform architecture that combines individual AI into a collective entity, in order to achieve global results from individual behaviors.[223][224] With its collective structure, developers can crowdsource information and extend the functionality of existing AI domains on the platform for their own use, as well as continue to create and share new domains and capabilities for the wider community and greater good.[225] As developers continue to contribute, the overall platform grows more intelligent and is able to perform more requests, providing a scalable model for greater communal benefit.[224] Organizations like SoundHound Inc. and the Harvard John A. Paulson School of Engineering and Applied Sciences have used this collaborative AI model.[226][224]

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public’s understanding, and to serve as a platform about artificial intelligence.[227] They stated: “This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.”[227] Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[228][224]

There are three philosophical questions related to AI:

Can a machine be intelligent? Can it “think”?

Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, have described short-term research goals such as understanding how AI influences the economy, the laws and ethics involved with AI, and how to minimize AI security risks. In the long term, the scientists have proposed to continue optimizing function while minimizing possible security risks that come along with new technologies.[238]

Machines with intelligence have the potential to use their intelligence to make ethical decisions. Research in this area includes “machine ethics”, “artificial moral agents”, and the study of “malevolent vs. friendly AI”.


WHAT IS ARTIFICIAL INTELLIGENCE?

Extinguished philosophies lie about the cradle of every science as the strangled snakes beside that of Hercules. – adapted from T. H. Huxley

John McCarthy

Computer Science Department


Stanford University

Revised November 12, 2007

This article for the layman answers basic questions about artificial intelligence. The opinions expressed here are not all consensus opinion among researchers in AI.


A.I. Artificial Intelligence (2001) – IMDb


In the not-so-far future the polar ice caps have melted and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid which is the first to have real feelings, especially a never-ending love for his “mother”, Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David is living happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos


Artificial Intelligence Might Overtake Medical and Finance Industries – HuffPost

For the last half-decade, the most exciting, contentious, and downright awe-inspiring topic in technology has been artificial intelligence. Titans and geniuses have lauded AI’s potential for change, glorifying its application in nearly every industry imaginable. Such praise, however, is also met with tantamount disapproval from similar influencers and self-made billionaires, not to mention a good part of Hollywood’s recent sci-fi flicks. AI is a phenomenon that will never go down easy: intelligence and consciousness are prerogatives of the living, and the inevitability of their existence in machines is hard to fathom, even with all those doomsday-scenario movies and books.

On that note, however, it is nonetheless a certainty we must come to accept, and most importantly, understand. I’m here to discuss the implications of AI in two major areas: medicine and finance. Often regarded as the two pillars of any nation’s stable infrastructure, the industries are indispensable. The people that work in them, however, are far from irreplaceable, and it’s only a matter of time before automation makes its presence known.

Let’s begin with perhaps the most revolutionary change: the automated diagnosis and treatment of illnesses. A doctor is one of humanity’s greatest professions. You heal others and are well compensated for your work. That being said, modern medicine and the healthcare infrastructure within which it lies have much room for improvement. IBM’s artificial intelligence machine, Watson, is now as good as a professional radiologist when it comes to diagnosis, and it’s also been compiling billions of medical images (30 billion to be exact) to aid in specialized treatment for image-heavy fields like pathology and dermatology.

Fields like cardiology are also being overhauled with the advent of artificial intelligence. It used to take doctors nearly an hour to quantify the amount of blood transported with each heart contraction, and it now takes only 15 seconds using the tools we’ve discussed. With these computers in major hospitals and clinics, doctors can process almost 260 million images a day in their respective fields; this means finding skin cancers, blood clots, and infections all with unprecedented speed and accuracy, not to mention billions of dollars saved in research and maintenance.

Next up, the hustling and overtly traditional offices of Wall Street (until now). If you don’t listen to me, at least recognize that almost 15,000 startups already exist that are working to actively disrupt finance. They are creating computer-generated trading and investment models that blow those crafted by the error-prone hubris of their human counterparts out of the water. Bridgewater Associates, one of the world’s largest hedge funds, is already cutting some of its staff in favor of AI-driven models, and enterprises like Sentient, Wealthfront, Two Sigma, and so many more have already made this transition. They shed the silk suits and comb-overs for scrappy engineers and piles of graphics cards and server racks. The result? Billions of dollars made with fewer people, greater certainty, and much more comfortable work attire.

So the real question to ask is: where do we go from here? Stopping the development of these machines is pointless. They will come to exist, and they will undoubtedly do many of our jobs better than we can; the solution, however, is regulation and a hard-nosed dose of checks and balances. 40% of U.S. jobs could be swallowed by artificial intelligence machines by the early 2030s, and if we aren’t careful about how we assign such professions, and the degree to which we automate them, we are looking at an incredibly serious domestic threat. Get very excited about what AI can do for us, and start thinking very deeply about how it can integrate with humans, lest utter anarchy ensue.


Artificial intelligence now composing and producing pop music: WATCH – DJ Mag

Artificial intelligence (AI) has been used for years to correct and assist music creation; the time has now arrived that AI is composing and producing pop music nearly independently.

The single “Break Free” is the brainchild of YouTube personality, singer and “neuroscience junkie” Taryn Southern and startup Amper Music. In addition, Southern has enlisted the support of other AI services to complete her forthcoming full-length release, I Am AI.

Only capable of basic piano skills, Southern entrusted the Amper technology to develop harmonies, chords, and sequences. After she gave the program some guidelines, such as tempo, key signature and preferred musicians, it produced a track for her to consider.

“In a funny way, I have a new song-writing partner who doesn’t get tired and has this endless knowledge of music making,” Southern told CNN Tech.

Southern did bring in the support of human producers when her vocals needed fine-tuning, supporting Amper CEO Drew Silverstein’s promise that human creators won’t be going away any time soon.

“Human creators and human musicians are not going away,” reinforced Silverstein. “We’re making it so that you don’t have to spend 10,000 hours and thousands of dollars buying equipment to share and express your ideas.”

Watch the video for “Break Free” below. I Am AI is expected out later this year.


Versive raises $12.7M to solve security problems using artificial intelligence – GeekWire

Versive thinks its AI platform can help solve security problems. (Versive Photo)

If you’re working on a security startup in 2017, you’re more than likely applying artificial intelligence or machine learning techniques to automate threat detection and other time-consuming security tasks. After a few years as a financial services company, five-year-old Versive has joined that parade, and has raised $12.7 million in new funding to tackle corporate security.

Seattle-based Versive started life as Context Relevant, and has now raised $54.7 million in total funding, which is a lot for a company reorganizing itself around a new mission. Versive adopted its new name and new identity as a security-focused company in May, and its existing investors are giving it some more runway to make its AI-driven security approach work at scale.

The company enlisted legendary white-hat hacker and security expert Mudge Zatko, who is currently working for Stripe, to help it architect its approach to using AI to solve security problems, said Justin Baker, senior director of marketing for Versive, which is based in downtown Seattle. “What we’re looking for are patterns of malicious behavior that can be used to help security professionals understand the true nature of threats on their networks,” he said.

Chief information security officers (CISOs) are drowning in security alerts, and a lot of those alerts are bogus yet still take time to evaluate and dismiss, Baker said. Versive’s technology learns how potential customers are handling current and future threats and helps them figure out which alerts are worthy of a response, which, when it works correctly, saves time, money, and aggravation.

The internet might be a dangerous neighborhood, but those CISOs are having trouble putting more cops on the beat: there is a staggering number of unfilled security jobs because companies are finding it very hard to recruit properly trained talent and retain stars once they figure it all out. Security technologies that make it easier to do the job with fewer people are extremely hot right now, and dozens of startups are working on products and services for this market.

Versive has around 60 employees at the moment, and plans to expand sales and marketing as it ramps up product development, Baker said. Investors include Goldman Sachs, Madrona Venture Group, Formation 8, Vulcan Capital, and Mark Leslie.

Read more:

Versive raises $12.7M to solve security problems using artificial intelligence – GeekWire

Artificial intelligence – Wikipedia

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church-Turing thesis.[17][page needed] Along with concurrent discoveries in neurology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain.[18] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[19] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[20] They and their students produced programs that the press described as “astonishing”: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English.[22] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[23] and laboratories had been established around the world.[24] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[25]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[27] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[28] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[29] However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[30]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[13] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields and a commitment by researchers to mathematical methods and scientific standards.[31] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

Advanced statistical techniques (loosely known as deep learning), access to large amounts of data and faster computers enabled advances in machine learning and perception.[33] By the mid-2010s, machine learning applications were used throughout the world.[34] In a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[36] as do intelligent personal assistants in smartphones.[37] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][38] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[39] who at the time had continuously held the world No. 1 ranking for two years.[40][41]

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents data indicating that error rates in image processing tasks have fallen significantly since 2011.[42] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[42]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[8]

Erik Sandewall emphasizes planning and learning that is relevant and applicable to the given situation.[43]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[44] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[45]

For difficult problems, algorithms can require enormous computational resources; most experience a “combinatorial explosion”: the amount of memory or computer time required becomes astronomical for problems of a certain size. The search for more efficient problem-solving algorithms is a high priority.[46]

Human beings ordinarily use fast, intuitive judgments rather than the step-by-step deduction that early AI research was able to model.[47] AI has progressed using “sub-symbolic” problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the human ability to guess.

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[48] The most general ontologies are called upper ontologies, which act as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations are suitable for content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery via automated reasoning (inferring new statements based on explicitly stated knowledge), etc. Video events are often represented as SWRL rules, which can be used, among other things, to automatically generate subtitles for constrained videos.[49]

Among the most difficult problems in knowledge representation are default reasoning and the qualification problem, the sheer breadth of commonsense knowledge, and the sub-symbolic form of much commonsense knowledge.

Intelligent agents must be able to set goals and achieve them.[56] They need a way to visualize the future (a representation of the state of the world, with the ability to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.[57]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[58] However, if the agent is not the only actor, then it must be able to reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[59]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[60]

Machine learning, a fundamental concept of AI research since the field’s inception,[61] is the study of computer algorithms that improve automatically through experience.[62][63]

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[64] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[65]
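
As a minimal illustration of the reinforcement-learning setting described above, the sketch below (hypothetical Python with made-up reward probabilities, not taken from any particular system) shows an agent forming a strategy purely from the rewards and punishments it receives for each action:

    import random

    # Hypothetical reward probabilities for three possible actions;
    # the agent does not know these and must learn them from feedback.
    TRUE_REWARD_PROB = [0.2, 0.5, 0.8]

    estimates = [0.0, 0.0, 0.0]   # the agent's current value estimate per action
    counts = [0, 0, 0]            # how often each action has been tried
    EPSILON = 0.1                 # fraction of the time the agent explores at random

    def pull(action):
        """Environment: reward of 1 with the action's (hidden) probability, else 0."""
        return 1.0 if random.random() < TRUE_REWARD_PROB[action] else 0.0

    for step in range(10000):
        if random.random() < EPSILON:
            action = random.randrange(3)                        # explore
        else:
            action = max(range(3), key=lambda a: estimates[a])  # exploit best guess
        reward = pull(action)
        counts[action] += 1
        # Incrementally update the running average reward for the chosen action.
        estimates[action] += (reward - estimates[action]) / counts[action]

    print("learned value estimates:", [round(e, 2) for e in estimates])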

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[66][67]

Natural language processing[70] gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[71] and machine translation.[72]

A common method of processing and extracting meaning from natural language is through semantic indexing. Although these indexes require a large volume of user input, it is expected that increases in processor speeds and decreases in data storage costs will result in greater efficiency.
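
A heavily simplified sketch of such indexing (hypothetical documents and a plain bag-of-words index in Python, rather than a full semantic-indexing system) might rank documents against a query like this:

    import math
    from collections import Counter

    documents = {
        "doc1": "machine learning finds patterns in data",
        "doc2": "natural language processing reads human language",
        "doc3": "search algorithms explore possible solutions",
    }

    # Build a simple bag-of-words vector (term -> count) for each document.
    index = {name: Counter(text.split()) for name, text in documents.items()}

    def cosine(a, b):
        """Cosine similarity between two sparse term-count vectors."""
        dot = sum(a[t] * b[t] for t in a if t in b)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    query = Counter("understanding human language".split())
    ranked = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    print("best match:", ranked[0][0])   # doc2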

Machine perception[73] is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others) to deduce aspects of the world. Computer vision[74] is the ability to analyze visual input. A few selected subproblems are speech recognition,[75]facial recognition and object recognition.[76]

The field of robotics[77] is closely related to AI. Intelligence is required for robots to handle tasks such as object manipulation[78] and navigation, with sub-problems such as localization, mapping, and motion planning. These systems require that an agent be able to be spatially aware of its surroundings, learn from and build a map of its environment, figure out how to get from one point in space to another, and execute that movement (which often involves compliant motion, a process where movement requires maintaining physical contact with an object).[80]

Affective computing is the study and development of systems that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. While the origins of the field may be traced as far back as early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard’s 1995 paper on “affective computing”.[87][88] A motivation for the research is the ability to simulate empathy: the machine would be able to interpret human emotions and adapt its behavior to give an appropriate response to those emotions.

Emotion and social skills[89] are important to an intelligent agent for two reasons. First, being able to predict the actions of others by understanding their motives and emotional states allows an agent to make better decisions. Concepts such as game theory and decision theory require that an agent be able to detect and model human emotions. Second, in an effort to facilitate human-computer interaction, an intelligent machine may want to display emotions (even if it does not experience those emotions itself) to appear more sensitive to the emotional dynamics of human interaction.

A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (through the specific implementation of systems that generate novel and useful outputs).

Many researchers think that their work will eventually be incorporated into a machine with artificial general intelligence, combining all the skills mentioned above and even exceeding human ability in most or all these areas.[9][90] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[91][92]

Many of the problems above also require that general intelligence be solved. For example, even a specific straightforward task like machine translation requires that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete” because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[93] A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[94] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[95] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require “sub-symbolic” processing?[96] John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence,[97] a term which has since been adopted by some non-GOFAI researchers.

Stuart Shapiro divides AI research into three approaches, which he calls computational psychology, computational philosophy, and computer science. Computational psychology is used to make computer programs that mimic human behavior.[100] Computational philosophy is used to develop an adaptive, free-flowing computer mind.[100] Implementing computer science serves the goal of creating computers that can perform tasks that only people could previously accomplish.[100] Together, the humanesque behavior, mind, and actions make up artificial intelligence.

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[18] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI “good old fashioned AI” or “GOFAI”.[101] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[102] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University would eventually culminate in the development of the Soar architecture in the middle 1980s.[103][104]

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[94] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[105] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[106]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[107] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[95] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[108]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[109] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[28] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[96] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[110] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of 1980s.[111] Neural networks are an example of soft computing — they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[112]

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI’s recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a “revolution” and “the victory of the neats”.[31] Critics argue that these techniques (with few exceptions) are too focused on particular problems and have failed to address the long-term goal of general intelligence. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.

In the course of 60+ years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[120] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[121] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[122] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[78] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[123] are rarely sufficient for most real world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that eliminate choices that are unlikely to lead to the goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[124] Heuristics limit the search for solutions to a much smaller portion of the space.
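
The following minimal Python sketch (hypothetical graph and hand-set heuristic values) illustrates the “best guess” idea: the frontier is kept in a priority queue ordered by the heuristic, so the search expands whatever looks closest to the goal instead of exploring everything:

    import heapq

    # Hypothetical graph: node -> list of neighboring nodes.
    graph = {
        "A": ["B", "C"],
        "B": ["D"],
        "C": ["D", "E"],
        "D": ["goal"],
        "E": ["goal"],
        "goal": [],
    }

    # Heuristic "best guess" of remaining distance to the goal (made-up numbers).
    h = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 1, "goal": 0}

    def greedy_best_first(start, goal):
        frontier = [(h[start], start, [start])]   # priority queue ordered by heuristic
        visited = set()
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt in graph[node]:
                heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
        return None

    print(greedy_best_first("A", "goal"))   # ['A', 'C', 'D', 'goal']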

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[125]
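
A minimal hill-climbing sketch (hypothetical one-dimensional objective, written in Python): start from a random guess and keep taking whichever small step improves the objective, stopping when no neighboring step is better:

    import random

    def objective(x):
        # Hypothetical landscape to climb: a single peak at x = 3.
        return -(x - 3.0) ** 2

    x = random.uniform(-10, 10)   # start at a random point on the landscape
    step = 0.1

    while True:
        neighbors = [x + step, x - step]
        best = max(neighbors, key=objective)
        if objective(best) <= objective(x):
            break                 # no uphill move left: we have reached a (local) top
        x = best

    print("climbed to x =", round(x, 2))   # close to 3.0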

Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[126] and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).[127]
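
A toy genetic algorithm showing the mutate/recombine/select loop (hypothetical fitness function: maximize the number of 1-bits in a bit string; all parameters are arbitrary illustrative choices):

    import random

    TARGET_LEN = 20

    def fitness(bits):
        return sum(bits)                       # fitter = more 1-bits

    def mutate(bits, rate=0.05):
        return [b ^ 1 if random.random() < rate else b for b in bits]

    def crossover(a, b):
        cut = random.randrange(1, TARGET_LEN)  # single-point recombination
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(30)]

    for generation in range(100):
        # Select the fittest half of the population to survive and reproduce.
        population.sort(key=fitness, reverse=True)
        survivors = population[:15]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(15)]
        population = survivors + children

    print("best individual fitness:", fitness(max(population, key=fitness)))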

Logic[128] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[129] and inductive logic programming is a method for learning.[130]

Several different forms of logic are used in AI research. Propositional or sentential logic[131] is the logic of statements which can be true or false. First-order logic[132] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[133] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[134] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
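
The small Python sketch below (hypothetical truth values and opinion) contrasts the two: a fuzzy truth value is a single degree in [0, 1], while a subjective-logic binomial opinion keeps belief, disbelief and uncertainty separate and requires them to sum to 1:

    # Fuzzy logic: truth is a single degree in [0, 1]; AND/OR are commonly min/max.
    wet = 0.7          # "the road is wet" is 70% true
    dark = 0.4         # "it is dark" is 40% true
    print("wet AND dark:", min(wet, dark))   # 0.4
    print("wet OR dark:",  max(wet, dark))   # 0.7

    # Subjective logic: a binomial opinion separates belief, disbelief, uncertainty.
    opinion = {"belief": 0.6, "disbelief": 0.1, "uncertainty": 0.3, "base_rate": 0.5}
    assert abs(opinion["belief"] + opinion["disbelief"] + opinion["uncertainty"] - 1.0) < 1e-9

    # The expected probability blends belief with uncertainty weighted by the base rate,
    # so ignorance (high uncertainty) stays visible instead of hiding in a single number.
    expected = opinion["belief"] + opinion["base_rate"] * opinion["uncertainty"]
    print("expected probability:", expected)   # 0.75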

Default logics, non-monotonic logics and circumscription[51] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[135] situation calculus, event calculus and fluent calculus (for representing events and time);[136] causal calculus;[137] belief calculus;[138] and modal logics.[139]

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[140]

Bayesian networks[141] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[142] learning (using the expectation-maximization algorithm),[143] planning (using decision networks)[144] and perception (using dynamic Bayesian networks).[145] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[145]
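
As a minimal sketch of Bayesian inference on a two-node network (hypothetical disease/test probabilities, plain Python), an observation updates a belief via Bayes’ rule as follows:

    # Hypothetical prior and conditional probabilities for a tiny two-node network:
    #   Disease -> TestPositive
    p_disease = 0.01                     # P(disease)
    p_pos_given_disease = 0.95           # P(test positive | disease)
    p_pos_given_healthy = 0.05           # P(test positive | no disease)

    # Marginal probability of a positive test (sum over both states of the parent node).
    p_pos = (p_pos_given_disease * p_disease +
             p_pos_given_healthy * (1 - p_disease))

    # Bayes' rule: posterior belief in the disease after observing a positive test.
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
    print(round(p_disease_given_pos, 3))   # 0.161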

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[146] and information value theory.[57] These tools include models such as Markov decision processes,[147] dynamic decision networks,[145] game theory and mechanism design.[148]
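
The following minimal value-iteration sketch (a hypothetical three-state Markov decision process in Python) shows how such utilities can be computed by repeatedly applying the Bellman update:

    # Hypothetical MDP: states 0 and 1 can move "right" (to state+1) or "stay";
    # state 2 is terminal and entering it pays a reward of 10. Other rewards are 0.
    states = [0, 1, 2]
    actions = ["stay", "right"]
    GAMMA = 0.9                      # discount factor for future utility

    def step(state, action):
        """Deterministic transition model: returns (next_state, reward)."""
        if state == 2:
            return 2, 0.0
        nxt = state + 1 if action == "right" else state
        return nxt, (10.0 if nxt == 2 else 0.0)

    value = {s: 0.0 for s in states}
    for _ in range(50):              # iterate the Bellman update until values settle
        value = {
            s: max(reward + GAMMA * value[nxt]
                   for nxt, reward in (step(s, a) for a in actions))
            for s in states
        }

    print({s: round(v, 2) for s, v in value.items()})   # {0: 9.0, 1: 10.0, 2: 0.0}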

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[149]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network,[150] kernel methods such as the support vector machine,[151] k-nearest neighbor algorithm,[152] Gaussian mixture model,[153] naive Bayes classifier,[154] and decision tree.[155] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the “no free lunch” theorem. Determining a suitable classifier for a given problem is still more an art than a science.[156]
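
As a concrete sketch of the simplest of these, here is a k-nearest-neighbor classifier over hypothetical two-dimensional observations (plain NumPy, not any particular library’s implementation):

    import numpy as np

    # Hypothetical labelled observations (the "data set"): two features per pattern.
    X = np.array([[1.0, 1.2], [0.8, 0.9], [1.1, 0.7],    # class 0
                  [4.0, 4.2], [3.8, 3.9], [4.1, 3.7]])   # class 1
    y = np.array([0, 0, 0, 1, 1, 1])

    def knn_predict(query, k=3):
        """Classify a new observation by majority vote among its k closest neighbors."""
        distances = np.linalg.norm(X - query, axis=1)
        nearest = np.argsort(distances)[:k]
        votes = np.bincount(y[nearest])
        return int(np.argmax(votes))

    print(knn_predict(np.array([0.9, 1.0])))   # 0
    print(knn_predict(np.array([3.9, 4.0])))   # 1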

The study of non-learning artificial neural networks[150] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[157] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning, GMDH or competitive learning.[158]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[159][160] and was introduced to neural networks by Paul Werbos.[161][162][163]
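
A minimal sketch of backpropagation (a two-layer network trained on the XOR problem in plain NumPy; the architecture, seed and learning rate are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)          # input -> hidden layer
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)          # hidden -> output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        # Forward pass.
        H = sigmoid(X @ W1 + b1)
        Y = sigmoid(H @ W2 + b2)
        # Backward pass: propagate the output error back through each layer.
        dY = (Y - T) * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        # Gradient-descent weight updates.
        W2 -= 0.5 * H.T @ dY;  b2 -= 0.5 * dY.sum(axis=0)
        W1 -= 0.5 * X.T @ dH;  b1 -= 0.5 * dH.sum(axis=0)

    print(np.round(Y.ravel(), 2))   # typically converges close to [0, 1, 1, 0]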

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[164]

Deep learning in artificial neural networks with many layers has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[165][166][167]

According to a survey,[168] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[169] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[170] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[171][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[172] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[174]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[175] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[176] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[167]
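
A minimal sketch of the convolution operation at the heart of a CNN (a hypothetical hand-set edge-detection filter slid over a tiny image in plain NumPy; in a real CNN the filter weights are learned and many filters are applied in parallel):

    import numpy as np

    # A tiny grayscale "image" with a vertical edge down the middle.
    image = np.array([
        [0, 0, 0, 9, 9, 9],
        [0, 0, 0, 9, 9, 9],
        [0, 0, 0, 9, 9, 9],
        [0, 0, 0, 9, 9, 9],
    ], dtype=float)

    # A 3x3 filter that responds to vertical edges.
    kernel = np.array([[-1, 0, 1],
                       [-1, 0, 1],
                       [-1, 0, 1]], dtype=float)

    def convolve2d(img, k):
        kh, kw = k.shape
        out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)   # local weighted sum
        return out

    print(convolve2d(image, kernel))   # strong responses where the edge is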

Deep feedforward neural networks were used in conjunction with reinforcement learning by AlphaGo, Google Deepmind’s program that was the first to beat a professional human Go player.[177]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[178] which are general computers and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence.[167] RNNs can be trained by gradient descent[179][180][181] but suffer from the vanishing gradient problem.[165][182] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[183]

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[184] LSTM is often trained by Connectionist Temporal Classification (CTC).[185] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[186][187][188] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[189] Google also used LSTM to improve machine translation,[190] Language Modeling[191] and Multilingual Language Processing.[192] LSTM combined with CNNs also improved automatic image captioning[193] and a plethora of other applications.
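
The sketch below shows a single LSTM cell step in plain NumPy (randomly initialized weights; it illustrates only the gating arithmetic, not a trained speech or translation model):

    import numpy as np

    rng = np.random.default_rng(1)
    input_size, hidden_size = 3, 4

    # One weight matrix and bias per gate; each sees the input and the previous hidden state.
    W = {g: rng.normal(0, 0.1, (hidden_size, input_size + hidden_size)) for g in "fioc"}
    b = {g: np.zeros(hidden_size) for g in "fioc"}

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h_prev, c_prev):
        z = np.concatenate([x, h_prev])
        f = sigmoid(W["f"] @ z + b["f"])        # forget gate: what to erase from memory
        i = sigmoid(W["i"] @ z + b["i"])        # input gate: what new information to store
        o = sigmoid(W["o"] @ z + b["o"])        # output gate: what to expose
        c_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate memory content
        c = f * c_prev + i * c_tilde            # updated long-term cell state
        h = o * np.tanh(c)                      # updated short-term hidden state
        return h, c

    h = np.zeros(hidden_size); c = np.zeros(hidden_size)
    for x in np.eye(3):                         # feed a short sequence of inputs
        h, c = lstm_step(x, h, c)
    print(np.round(h, 3))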

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.[194]

AI researchers have developed several specialized languages for AI research, including Lisp[195] and Prolog.[196]

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[197]

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[198]

For example, performance at draughts (i.e. checkers) is optimal,[199] performance at chess is high-human and nearing super-human (see computer chess:computers versus human) and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.

A quite different approach measures machine intelligence through tests developed from mathematical definitions of intelligence. Examples of such tests began in the late 1990s with intelligence tests built on notions from Kolmogorov complexity and data compression.[200] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.

A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, a CAPTCHA is administered by a machine and targeted at a human, as opposed to being administered by a human and targeted at a machine. A computer asks a user to complete a simple test, then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is a test that requires typing distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[204] and targeting online advertisements.[205][206]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[207] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[208]

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[209] A great deal of research and drug development relates to cancer: there are more than 800 medicines and vaccines to treat it. This overwhelms doctors, because there are too many options to choose from, making it harder to select the right drugs for a patient. Microsoft is working on a project to develop a machine called “Hanover”, whose goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently under way targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reportedly found that artificial intelligence was as good as trained doctors at identifying skin cancers.[210] Yet another study is using artificial intelligence to try to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-patient interactions.[211]

According to CNN, there was a recent study by surgeons at the Children’s National Medical Center in Washington which successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[212]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there are over 30 companies using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[213]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, along with high-performance computers, are integrated into one complex vehicle.[214]

One main factor that influences the ability of a driverless car to function is mapping. In general, the vehicle is pre-programmed with a map of the area being driven. This map includes data on the approximate heights of street lights and curbs so that the vehicle can be aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead creating a device able to adjust to a variety of new surroundings.[215] Some self-driving cars are not equipped with steering wheels or brakes, so there has also been research focused on creating an algorithm capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[216]

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.

Use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the U.S. set up a fraud prevention task force to counter the unauthorized use of debit cards. Apps like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[217] In August 2001, robots beat humans in a simulated financial trading competition.[218]

AI has also reduced fraud and crime by monitoring behavioral patterns of users for any changes or anomalies.[219]

Artificial intelligence is used to generate intelligent behaviors primarily in non-player characters (NPCs), often simulating human-like intelligence.[220]

A platform (or “computing platform”) is defined as “some sort of hardware architecture or software framework (including application frameworks), that allows software to run”. As Rodney Brooks pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results, i.e., there needs to be work in AI problems on real-world platforms rather than in isolation.

A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems such as Cyc to deep-learning frameworks to robot platforms such as the Roomba with open interface.[222] Recent advances in deep artificial neural networks and distributed computing have led to a proliferation of software libraries, including Deeplearning4j, TensorFlow, Theano and Torch.

Collective AI is a platform architecture that combines individual AI into a collective entity, in order to achieve global results from individual behaviors.[223][224] With its collective structure, developers can crowdsource information and extend the functionality of existing AI domains on the platform for their own use, as well as continue to create and share new domains and capabilities for the wider community and greater good.[225] As developers continue to contribute, the overall platform grows more intelligent and is able to perform more requests, providing a scalable model for greater communal benefit.[224] Organizations like SoundHound Inc. and the Harvard John A. Paulson School of Engineering and Applied Sciences have used this collaborative AI model.[226][224]

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public’s understanding, and to serve as a platform about artificial intelligence.[227] They stated: “This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.”[227] Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[228][224]

There are three philosophical questions related to AI:

Can a machine be intelligent? Can it “think”?

Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, described some short-term research goals to be how AI influences the economy, the laws and ethics that are involved with AI and how to minimize AI security risks. In the long-term, the scientists have proposed to continue optimizing function while minimizing possible security risks that come along with new technologies.[238]

Machines with intelligence have the potential to use their intelligence to make ethical decisions. Research in this area includes “machine ethics”, “artificial moral agents”, and the study of “malevolent vs. friendly AI”.

More here:

Artificial intelligence – Wikipedia

A.I. Artificial Intelligence (2001) – IMDb

Edit Storyline

In the not-so-far future the polar ice caps have melted and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid which is the first to have real feelings, especially a never-ending love for his “mother”, Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David is living happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos

View original post here:

A.I. Artificial Intelligence (2001) – IMDb

These three countries are winning the global robot race – CNNMoney

The three countries are leading an artificial intelligence (AI) revolution, Malcolm Frank, head of strategy at leading outsourcing firm Cognizant, told CNNMoney in an interview.

Frank is the co-author of a recent book entitled “What to Do When Machines Do Everything,” on the impact artificial intelligence will have on the global economy in the coming years.

“I think it’s three horses in the race, and that’s probably the wrong metaphor because they are all going to win,” he said. “They are just going to win differently.”

While AI is progressing quickly elsewhere too, Frank said the other development hotspots are mainly city hubs such as London and Stockholm, or far smaller economies such as Estonia.

“The big three [are] India, China and the U.S,” he said.

Here’s why:

America

Silicon Valley giants such as Facebook (FB, Tech30), Amazon (AMZN, Tech30), Google (GOOGL, Tech30) and Tesla (TSLA) are already investing billions in harnessing the power of computers to replace several human tasks.

Computers are already beginning to substitute for people in sectors such as agriculture and even medicine, not to mention the race to get driverless cars on the road.

“With Silicon Valley, and the vendors and momentum that exists there… that’s going to continue,” Frank said.

China

The world’s second largest economy is also betting big on artificial intelligence.

Tech companies including Tencent (TCEHY) and Baidu (BIDU, Tech30) are competing with Silicon Valley to develop new uses for AI, and tech billionaire Jack Ma of Alibaba (BABA, Tech30), one of China’s richest men, has even said CEOs may eventually be obsolete.

Unlike in the U.S., however, the biggest push towards this new world in China is coming from the government.

“You look at the playbook China has had very successfully, with state sponsorship around developing the [physical] infrastructure of the country,” Frank said. “They’re taking a very similar approach around artificial intelligence, and I think that’s going to yield a lot of benefit.”

The Chinese government has already laid out an ambitious plan for a $150 billion AI industry, saying last month that it wants China to become the world’s “innovation center for AI” by 2030.

India

In India, the main shift towards artificial intelligence is coming from companies that make up its $143 billion outsourcing industry — a sector that employs nearly 4 million people.

Top firms like Infosys (INFY), Tata Consultancy Services and Wipro (WIT), which provide technology services to big names including Deutsche Bank (DB), Lockheed Martin (LMT), IBM (IBM, Tech30), Microsoft (MSFT, Tech30) and the U.S. Army, are increasingly relying on automation in their operations.

“In India, you look at this remarkable platform that is in place now… of incredibly sophisticated skills that are focused on the needs of [global] companies,” said Frank.

In addition, India’s startup scene also makes him “very optimistic” about the future of artificial intelligence there.

Cognizant (CTSH), which is based in the U.S. but has most of its workforce in India, is also making ever greater use of AI — from online bots managing clients’ finances to helping create automated systems for smart devices.

Should we be worried?

Many are worried about the potential pitfalls of artificial intelligence, including Tesla’s billionaire founder Elon Musk. He has warned that the technology could pose “an existential threat” if not used properly, and published a letter this week with over 100 other industry experts demanding a global ban on using it to make weapons.

Frank said that the development of artificial intelligence requires careful thought, by governments and companies working together to establish ground rules. The tech executive compared it to safety regulations for air travel and for cars, which have evolved several times over the years.

The focus needs to be on creating a world “where AI is going to be safe and you get the benefits of it without the downsides,” he said.

As for the other pervasive fear — that more robots will lead to job losses — Frank argues that AI will not only create more and different kinds of jobs in the future, but also enhance many of the existing ones.

“That’s what happened with assembly lines, that’s what happened with the steam engine, that’s what we think is going to happen with artificial intelligence.”

CNNMoney (New Delhi) First published August 21, 2017: 10:14 AM ET


The Artificial Narrative Of Artificial Intelligence – Above the Law

As the legal community flees Las Vegas, leaving another successful ILTACON and several hundred thousand dollars in bad decisions in their wake, two questions weigh upon my mind. Is there something broken about the way we talk about artificial intelligence, and why does the airport give a goddamn about my mixers?

Artificial intelligence is a sufficiently ominous sounding invention. It gets the Asimov-obsessed firm stakeholders all hot and bothered in a way that predictive coding never really could. But ultimately, discussions of artificial intelligence in the law break down to one of two flavors: vendors willing to tell you frankly that this technology requires carefully constructed processes, vigilant management, and meticulous attention to detail; and those who tell you it's MAGIC! Seriously, just buy yourself some AI and your firm is set! Somehow, after years and years of AI talk in the legal profession, there are still people peddling the latter option. Haven't we all figured out what AI really is by now? Are there still clients out there falling for robotic nerve tonic?

Speaking of tonic, I ask the bartender for a vodka soda; no use wasting the last minutes in this desert monument to excess sober. She tells me she can't serve those until 10:30. Is it really morning?

It's no secret that, for the sake of laughs, we'll always compare AI to the Terminator movies. A cold, unfeeling strand of code ruthlessly burying associates. But ditch the glossy ad campaign and, in reality, these products aren't going to master a 100TB document review by osmosis. No, much like the T-800, these robots show up on the job naked and need to beat your biker bar full of associates to death before they can do their jobs properly.

Sure, it'll learn from your first-pass reviewers, but what will it learn? Will it pick up all their bad habits? Will it learn the systemic oversight your client never passed along? Most importantly, will it learn to forget all these mistakes as soon as you uncover them, or will vestigial f**k-ups keep infecting the process months after they get caught? AI may be brilliant, but if the processes that set it down its path lack detailed consistency, it's going to end up throwing your firm out an airlock. Like the surgeon with a scalpel, lawyers who fail to understand that the profession is mastering the tool itself will just chain themselves to expensive trinkets that do the client more harm than good.

When did a vodka soda become verboten this early in the morning at the Las Vegas Airport? Look, I get that some states have Blue laws, but generally Vegas isn't puritanical about the gross consumption of liquor. What's the deal with booze? She tells me before 10:30 she can only make Bloody Marys and Screwdrivers. Wait, so vodka is on the menu? Because these aren't premixed drinks.

This is all so confusing. Does Vegas really care about my mixers? Has Big Orange spread its tentacles from the Tropicana deep into this McCarran bar?

Not that there aren't still some musing about the fully automated lawyer: a cognitive map of a present-day rainmaker that firms can license out to clients who want to plug the BoiesBot 3500 into their latest matter. It's not that the technology required to perfect this strategy is far off (though it might be), but raise your hand if you imagine a bar association will ever sign off on disrupting the profession like that. They're scared enough about raising bar cut-off scores to allow a handful more humans into the market. A practicing attorney firms can duplicate at zero marginal cost? Not likely to pass muster any century soon.

Strong AI solutions are the future (hell, strong AI solutions are the present), but before you invest in anything, take measure of how the vendor sees its own product. The best are always a little leery of the phrase "artificial intelligence." There's more enthusiasm for "machine learning" and other synonyms that don't carry the same baggage as AI. The key is looking for someone who can admit that their product's power is all about your commitment to it as a client and how hard you'll work to get its peak performance.

The guy next to me, a cybersecurity expert who I'd say modeled his whole ethos upon The Dude if I didn't know he rocked that look long before Jeff Bridges, runs afoul of the same libation limitations when he asks for some champagne. She can only offer him a mimosa. Goddamned orange farmers hit us again! That's when something special happens. He tells the bartender to give him a mimosa, but put the orange juice on the side so he can control the mix. And that's how he got a glass of champagne.

Cybersecurity Dude hacked the bar AI!

Because anything "as a service" is only as powerful as its instructions. He recognized the flaw in the establishment's process: an instance of bad tagging that let the bartender miss something critical. That's how he found the key item the bartender's rules missed.

And that's how I, eventually, got my vodka soda.

Screw you, Tropicana.


We Must Stop The Artificial Intelligence Arms Race At Any Cost – Huffington Post Canada

My visit to Japan has coincided with the 72nd anniversary of the Hiroshima and Nagasaki nuclear bombings. On August 6, 1945, the nuclear bomb dropped by the Enola Gay Boeing B-29 exploded, killing an estimated 140,000 people. Three days later, the U.S. dropped a second bomb, from the Bockscar B-29, on Nagasaki, killing an estimated 75,000. Within weeks, Japan surrendered. At the 72nd anniversary ceremony, about 50,000 people, including representatives from 80 nations, gathered at Hiroshima Peace Memorial Park, where Japanese Prime Minister Shinzo Abe called for global cooperation to end nuclear weapons.

Even today, there are victims who are still suffering from the bombings. During my conversations with my Japanese friends, one thing was clear to me: they all have someone linked to their family who was a victim of the bombings. Their stories speak to us. They ask us to reflect on what the world might become.

While viewing the picturesque terrain of Japan during a train journey from Tokyo to Kyoto, I was trying to find an answer to a question: at the end of the day, what did nuclear science achieve? Nuclear science was supposed to bring an unlimited supply of energy to the power-starved countries of the world.

Nuclear bombs were not what Albert Einstein had in mind when he published the special theory of relativity. Yet the bombs killed or wounded around 200,000 Japanese men, women and children. Our trust in the peaceful nuclear program has endangered humanity. At the peak of the nuclear arms race, the United States and Russia held over 70,000 nuclear weapons, enough to kill every human being on the planet.

Recent advances in science and technology have made nuclear bombs more powerful than ever, and one can imagine how devastating they could be to the world. These advances have also created many unprecedented and still unresolved global security challenges for policymakers and the public.

It is hard to imagine any one technology that will transform global security more than artificial intelligence (AI); it may have the biggest impact on humanity of anything that has come before. The World Economic Forum's Global Risks Report 2017 places AI among the top five factors exacerbating geopolitical risk. One sector that saw the huge disruptive potential of AI from an early stage is the military. AI-based weaponization will represent a paradigm shift in the way wars are fought, with profound consequences for global security.

Major investment in AI-based weapons has already begun. According to a WEF report, a terrifying AI arms race may already be underway. To ensure a continued military edge over China and Russia, the Pentagon requested around US$15 billion for AI-based weaponry in its 2017 budget. However, the U.S. does not have exclusive control over AI.

Whichever country develops viable AI weaponry first could take over the military landscape, as AI-based machines have the capacity to be far more intense and devastating than a nuclear bomb. A country with a machine that can hack into enemy defence systems would hold a distinct advantage over every other government in the world.

Without proper regulation, AI-based weapons could go out of control: they may be used indiscriminately, create greater risk to civilians, and more easily fall into the hands of dictators and terrorists. Imagine if North Korea developed an AI capable of military action; it could very quickly destabilize the entire world. According to a UNOG report, two major concerns with AI-based weapons are (i) the inability to discriminate between combatants and non-combatants and (ii) the inability to ensure a proportionate response in which the military advantage will outweigh civilian casualties.

My visit to Japan is also marked by concerns in the region about the possibility of nuclear missile strikes, particularly after U.S. President Donald Trump and North Korean leader Kim Jong-un threatened each other with shows of force. As Elon Musk said, “If you’re not concerned about AI safety, you should be. [There is] vastly more risk than North Korea.”

AI technology is growing in a fashion similar to the push for nuclear technology, though I do not know whether the analogy between nuclear research and AI research holds fully. Nuclear research was supposed to bring an unlimited supply of energy to the power-starved countries of the world. However, it was also harnessed for nuclear weapons.

A similar push is now being given to AI technology. AI has great potential to help humanity in profound ways; however, it is very important to regulate it. Starting an AI arms race would be very bad for the world, and it should be prevented by banning all AI-based weapons that operate beyond meaningful human control.

In 2016, Prime Minister Justin Trudeau announced the government’s Pan-Canadian AI strategy, which aims to put Canada at the center of an emerging gold rush of innovation. So, what does this actually mean for the AI arms race that is well underway?

We are living in an age of revolutionary changes brought about by the advance of AI technology. I am not sure whether there lies any hope for the world, but certainly there is a danger of sudden death. I think we are on the brink of an AI arms race, and it should be prevented at any cost. No matter how long and how difficult the road will be, it is the responsibility of all leaders who live in the present to continue to make efforts.

You can follow Pete Poovanna on Twitter: @poovannact and for more information check out http://www.pthimmai.com/


Artificial intelligence expert Andrew Ng hates paranoid androids, and other fun facts – The Mercury News

By Ryan Nakashima

PALO ALTO - What does artificial intelligence researcher Andrew Ng have in common with a very depressed robot from The Hitchhiker's Guide to the Galaxy? Both have huge brains.

HE NAMED GOOGLE BRAIN

Google's deep-learning unit was originally called Project Marvin, a possible reference to a morose and paranoid android with a brain the size of a planet from The Hitchhiker's Guide to the Galaxy. Ng didn't like the association with this very depressed robot, he says, so he cut to the chase and changed the name to Google Brain.

A SMALL WEDDING

Ng met his roboticist wife, Carol Reiley, at a robotics conference in Kobe, Japan. They married in 2014 in Carmel, California, in a small ceremony. Ng says Reiley wanted to save money in order to invest in their future; they even got their wedding bands made on a 3-D printer. And instead of paying for a big ceremony, she put $50,000 into Drive.ai, the autonomous driving company she co-founded and leads as president. In its last funding round, the company raised $50 million.

GUESSING GAMES, COMPUTER VERSION

One of Ng's first computer programs tried to guess a number the user was thinking of. Based simply on the responses "higher" or "lower," the computer could guess correctly after no more than seven questions.
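That seven-question bound is just binary search: halving the remaining range each time covers 1 through 100 in at most seven guesses, since 2^7 = 128. A minimal sketch of such a guesser, assuming a 1-100 range (the article does not say what range the original program used):

```python
def guess_number(answer, low=1, high=100):
    """Guess a number in [low, high] by repeated halving.

    `answer(guess)` should return None when the guess is correct,
    True if the secret number is higher, and False if it is lower.
    """
    questions = 0
    while low <= high:
        guess = (low + high) // 2
        questions += 1
        response = answer(guess)
        if response is None:
            return guess, questions
        if response:          # secret is higher than the guess
            low = guess + 1
        else:                 # secret is lower than the guess
            high = guess - 1
    raise ValueError("answers were inconsistent")

secret = 73  # hypothetical secret number, for illustration only
print(guess_number(lambda g: None if g == secret else g < secret))
# -> (73, n) where n never exceeds 7 for a 1-100 range
```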

GUESSING GAME, ACCENT VERSION

"Americans tend to think I sound slightly British, and the Brits think I sound horribly American," Ng says. "According to my mother, I just mumble a lot."

HE LIKES BLUE SHIRTS

He buys blue button-down shirts 10 at a time from Nordstrom's online store. "I just don't want to think about it every morning. There's enough things that I need to decide on every day."


How do you bring artificial intelligence from the cloud to the edge? – TNW

Despite their enormous speed at processing reams of data and providing valuable output, artificial intelligence applications have one key weakness: their brains are located thousands of miles away.

Most AI algorithms need huge amounts of data and computing power to accomplish tasks. For this reason, they rely on cloud servers to perform their computations, and aren't capable of accomplishing much at the edge: the mobile phones, computers and other devices where the applications that use them run.

In contrast, we humans perform most of our computation and decision-making at the edge (in our brain) and only refer to other sources (internet, library, other people) where our own processing power and memory won't suffice.

This limitation makes current AI algorithms useless or inefficient in settings where connectivity is sparse or absent, and where operations need to be performed in a time-critical fashion. However, scientists and tech companies are exploring concepts and technologies that will bring artificial intelligence closer to the edge.

A lot of the world's computing power goes to waste as millions of devices remain idle for considerable amounts of time. Being able to coordinate and combine these resources would enable us to make efficient use of computing power, cut costs and create distributed servers that can process data and algorithms at the edge.

Distributed computing is not a new concept, but technologies like blockchain can take it to a new level. Blockchain and smart contracts enable multiple nodes to cooperate on tasks without the need for a centralized broker.

This is especially useful for the Internet of Things (IoT), where latency, network congestion, signal collisions and geographical distances are some of the challenges we face when processing edge data in the cloud. Blockchain can help IoT devices share compute resources in real time and execute algorithms without the need for a round trip to the cloud.

Another benefit to using blockchain is the incentivization of resource sharing. Participating nodes can earn rewards for making their idle computing resources available to others.

A handful of companies have developed blockchain-based computing platforms. iEx.ec, a blockchain company that bills itself as the leader in decentralized high-performance computing (HPC), uses the Ethereum blockchain to create a market for computational resources, which can be used for various use cases, including distributed machine learning.

Golem is another platform that provides distributed computing on the blockchain, where applications (requestors) can rent compute cycles from providers. Among Golem's use cases is training and executing machine learning algorithms. Golem also has a decentralized reputation system that allows nodes to rank their peers based on their performance on appointed tasks.

From landing drones to running AR apps and navigating driverless cars, there are many settings where running real-time deep learning at the edge is essential. The delay caused by the round trip to the cloud can yield disastrous or even fatal results, and in the case of a network disruption, a total halt of operations is conceivable.

AI coprocessors, chips that can execute machine learning algorithms, can help alleviate this shortage of intelligence at the edge in the form of board integration or plug-and-play deep learning devices. The market is still new, but the results look promising.

Movidius, a hardware company acquired by Intel in 2016, has been dabbling in edge neural networks for a while, including developing obstacle navigation for drones and smart thermal vision cameras. Movidius' Myriad 2 vision processing unit (VPU) can be integrated into circuit boards to provide low-power computer vision and image signaling capabilities at the edge.

More recently, the company announced its deep learning compute stick, a USB-3 dongle that can add machine learning capabilities to computers, Raspberry Pis and other computing devices. The stick can be used individually or in groups to add more power. This makes it ideal for powering AI applications that are independent of the cloud, such as smart security cameras, gesture-controlled drones and industrial machine vision equipment.

Both Google and Microsoft have announced their own specialized AI processing units. However, for the moment, they don't plan to deploy them at the edge and are using them to power their cloud services. But as the market for edge AI grows and other players enter the space, you can expect them to make their hardware available to manufacturers.


Currently, AI algorithms that perform tasks such as recognizing images require millions of labeled samples for training. A human child accomplishes the same with a fraction of the data. One of the possible paths for bringing machine learning and deep learning algorithms closer to the edge is to lower their data and computation requirements. And some companies are working to make it possible.

Last year Geometric Intelligence, an AI company that was renamed Uber AI Labs after being acquired by the ride-hailing company, introduced machine learning software that is less data-hungry than more prevalent AI algorithms. Though the company didn't reveal the details, performance charts show that XProp, as the algorithm is named, requires far fewer samples to perform image recognition tasks.

Gamalon, an AI startup backed by the Defense Advanced Research Projects Agency (DARPA), uses a technique called Bayesian Program Synthesis, which employs probabilistic programming to reduce the amount of data required to train algorithms.

In contrast to deep learning, where you have to train the system by showing it numerous examples, BPS learns with few examples and continually updates its understanding with additional data. This is much closer to the way the human brain works.
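As a rough intuition for "learns with few examples and continually updates its understanding," consider how a posterior estimate is refined one observation at a time. This is a generic Bayesian (beta-binomial) update, shown only as an illustration of incremental learning, not a description of Gamalon's actual BPS implementation:

```python
# Generic beta-binomial updating: the belief about an unknown rate is
# refined with each new example. Illustrative only; not Gamalon's BPS.
alpha, beta = 1.0, 1.0            # uniform prior over an unknown rate

observations = [1, 1, 0, 1]       # a handful of labeled examples (1 = positive)
for x in observations:
    alpha += x                    # each new example nudges the posterior
    beta += 1 - x
    print(f"after observing {x}: estimated rate = {alpha / (alpha + beta):.2f}")
```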

BPS also requires substantially less computing power. Instead of arrays of expensive GPUs, Gamalon can train its models on the same processors contained in an iPad, which makes it more feasible for the edge.

Edge AI will not be a replacement for the cloud, but it will complement it and create possibilities that were inconceivable before. Though nothing short of general artificial intelligence will be able to rival the human brain, edge computing will enable AI applications to function in ways that are much closer to the way humans do.

This post is part of our contributor series. The views expressed are the author’s own and not necessarily shared by TNW.


I was worried about artificial intelligenceuntil it saved my life – Quartz

Earlier this month, tech moguls Elon Musk and Mark Zuckerberg debated the pros and cons of artificial intelligence from different corners of the internet. While SpaceX's CEO is more of an alarmist, insisting that we should approach AI with caution and that it poses a fundamental existential risk, Facebook's founder leans toward a more optimistic future, dismissing doomsday scenarios in favor of AI helping us build a brighter future.

I now agree with Zuckerberg's sunnier outlook, but I didn't use to.

Beginning my career as an engineer, I was interested in AI, but I was torn about whether advancements would go too far too fast. As a mother with three kids entering their teens, I was also worried that AI would disrupt the future of my children's education, work, and daily life. But then something happened that forced me into the affirmative.

Imagine for a moment that you are a pathologist and your job is to scroll through 1,000 photos every 30 minutes, looking for one tiny outlier on a single photo. You're racing the clock to find a microscopic needle in a massive data haystack.

Now, imagine that a woman's life depends on it. Mine.

This is the nearly impossible task that pathologists face every day. Treating the 250,000 women in the US who will be diagnosed with breast cancer this year, each medical worker must analyze an immense amount of cell tissue to identify whether their patient's cancer has spread. Limited by time and resources, they often get it wrong; a recent study found that pathologists accurately detect tumors only 73.2% of the time.

In 2011 I found a lump in my breast. Both my family doctor and I were confident that it was a fibroadenoma, a common noncancerous (benign) breast lump, but she recommended I get a mammogram to make sure. While the original lump was indeed a fibroadenoma, the mammogram uncovered two unknown spots. My journey into the unknown started here.

Since AI imaging was not available at the time, I had to rely solely on human analysis. The next four years were a blur of ultrasounds, biopsies, and surgeries. My well-intentioned network of doctors and specialists was not able to diagnose or treat what turned out to be a rare form of cancer, and repeatedly attempted to remove my recurring tumors through surgery.

After four more tumors, five more biopsies, and two more operations, I was heading toward a double mastectomy and terrified at the prospect of the cancer spreading to my lungs or brain.

I knew something needed to change. In 2015, I was introduced to a medical physicist who decided to take a different approach, using big data and a machine-learning algorithm to spot my tumors and treat my cancer with radiation therapy. While I was nervous about leaving my therapy up to this new technology, it, combined with the right medical knowledge, was able to stop the growth of my tumors. I'm now two years cancer-free.

I was thankful for the AI that saved my life, but then that very same algorithm changed my son's potential career path.

The positive impact of machine learning is often overshadowed by the doom-and-gloom of automation. Fearing for their own jobs and their children's future, people often choose to focus on the potential negative repercussions of AI rather than the positive changes it can bring to society.

After seeing what this radiation treatment was able to do for me, my son applied to a university program in radiology technology to explore a career path in medical radiation. He met countless radiology technicians throughout my years of treatment and was excited to start his training in a specialized program. However, during his application process, the program was cancelled: he was told it was because there were no longer enough jobs in the radiology industry to warrant the program's continuation. Many positions had been lost to automation, including to the very technology and machine learning that helped me in my battle with cancer.

This was a difficult period for both my son and me: the very thing that had saved my life prevented him from following the path he planned. He had to rethink his education mid-application, when it was too late to apply for anything else, and he was worried that his backup plans would fall through.

He's now pursuing a future in biophysics rather than medical radiation, starting with an undergraduate degree in integrated sciences. In retrospect, we both realize that the experience forced him to rethink his career and unexpectedly opened up his thinking about which research areas will have the most impact on people's lives in the future.

Although some medical professionals will lose their jobs to AI, the life-saving benefits to patients will be magnificent. Beyond cancer detection and treatment, medical professionals are using machine learning to improve their practice in many ways. For instance, Atomwise applies AI to fuel drug discovery, Deep Genomics uses machine learning to help pharmaceutical companies develop genetic medicines, and Analytics 4 Life leverages AI to better detect coronary artery disease.

While not all transitions from automated roles will be as easy as my son's pivot to a different scientific field, I believe that AI has the potential to shape our future careers in a positive way, even helping us find jobs that make us happier and more productive.

As this technology rapidly develops, the future is clear: AI will be an integral part of our lives and bring massive changes to our society. It's time to stop debating (looking at you, Musk and Zuckerberg) and start accepting AI for what it is: both the good and the bad.

Throughout the years, I've found myself on both sides of the equation, arguing both for and against the advancement of AI. But it's time to stop taking a selective view of AI, choosing to incorporate it into our lives only when convenient. We must create solutions that mitigate AI's negative impact and maximize its positive potential. Key stakeholders (governments, corporations, technologists, and more) need to create policies, join forces, and dedicate themselves to this effort.

And we're seeing great progress. AT&T recently began retraining thousands of employees to keep up with technology advances, and Google recently dedicated millions of dollars to prepare people for an AI-dominated workforce. I'm hopeful that these initiatives will allow us to focus on all the good that AI can do for our world and open our eyes to the potential lives it can save.

One day, yours just might depend on it, too.


America Can’t Afford to Lose the Artificial Intelligence War – The National Interest Online

Today, the question of artificial intelligence (AI) and its role in future warfare is becoming far more salient and dramatic than ever before. Rapid progress in driverless cars in the civilian economy has helped us all see what may become possible in the realm of conflict. All of a sudden, it seems, terminators are no longer the stuff of exotic and entertaining science-fiction movies, but a real possibility in the minds of some. Innovator Elon Musk warns that we need to start thinking about how to regulate AI before it destroys most human jobs and raises the risk of war.

It is good that we start to think this way. Policy schools need to start making AI a central part of their curriculums; ethicists and others need to debate the pros and cons of various hypothetical inventions before the hypothetical becomes real; military establishments need to develop innovation strategies that wrestle with the subject. However, we do not believe that AI can or should be stopped dead in its tracks now; for the next stage of progress, at least, the United States must rededicate itself to being the first in this field.

First, a bit of perspective. AI is of course not entirely new. Remotely piloted vehicles may not really qualify; after all, they are humanly, if remotely, piloted. But cruise missiles already fly to an aimpoint and detonate their warheads automatically. So would nuclear warheads on ballistic missiles, if, God forbid, nuclear-tipped ICBMs or SLBMs were ever launched in combat. Semi-autonomous systems are already in use on the battlefield, like the U.S. Navy Phalanx Close-In Weapons System, which is capable of "autonomously performing its own search, detect, evaluation, track, engage, and kill assessment functions," according to the official Defense Department description, along with various other fire-and-forget missile systems.

But what is coming are technologies that can learn on the job: not simply follow prepared plans or detailed algorithms for detecting targets, but develop their own information and their own guidelines for action, based on conditions they encounter that were not specifically foreseeable.

A case in point is what our colleague at Brookings, retired Gen. John Allen, calls "hyperwar." He develops the idea in a new article in the journal Proceedings, coauthored with Amir Husain. They imagine swarms of self-propelled munitions that, in attacking a given target, deduce patterns of behavior of the target's defenses and find ways to circumvent them, aware all along of the capabilities and coordinates of their teammates in the attack (the other self-propelled munitions). This is indeed about the place where the word "robotics" seems no longer to do justice to what is happening, since that term implies a largely prescripted process or series of actions. What happens in hyperwar is not only fundamentally adaptive, but also so fast that it far supersedes what could be accomplished by any weapons system with humans in the loop. Other authors, such as former Brookings scholar Peter Singer, have written about related technologies, in a partly fictional sense. Now, Allen and Husain are not just seeing into the future, but laying out a near-term agenda for defense innovation.

The United States needs to move expeditiously down this path. People have reasons to fear fully autonomous weaponry, but if a Terminator-like entity is what they are thinking of, their worries are premature. That software technology is still decades away, at the earliest, along with the required hardware. However, what will be available sooner is technology that will be able to decide what or who is a target (based on the specific rules laid out by the programmer of the software, which could be highly conservative and restrictive) and fire upon that target without any human input.

To see why outright bans on AI activities would not make sense, consider a simple analogy. Despite many states having signed the Non-Proliferation Treaty, a ban on the use and further development of nuclear weapons, the treaty has not prevented North Korea from building a nuclear arsenal. But at least we have our own nuclear arsenal with which we can attempt to deter other such countries, a tactic that has been generally successful to date. A preemptive ban on AI development would not be in the United States' best interest because non-state actors and noncompliant states could still develop it, leaving the United States and its allies behind. The ban would not be verifiable, and it could therefore amount to unilateral disarmament. If Western countries decided to ban fully autonomous weaponry and a country like North Korea fielded it in battle, it would create a highly fraught and dangerous situation.

To be sure, we need the debate about AI's longer-term future, and we need it now. But we also need the next generation of autonomous systems, and America has a strong interest in getting them first.

Michael O'Hanlon is a senior fellow at the Brookings Institution. Robert Karlen is a student at the University of Washington and an intern in the Center for Twenty-First Century Security and Intelligence at the Brookings Institution.


Merging big data and AI is the next step – TNW

AI is one of the hottest trends in tech at the moment, but what happens when it's merged with another fashionable and extremely promising tech?

Researchers are looking for ways to take big data to the next level by combining it with AI. We've just recently realized how powerful big data can be, and by uniting it with AI, big data is swiftly marching towards a level of maturity that promises a bigger, industry-wide disruption.

The application of artificial intelligence to big data is arguably the most important breakthrough of our time. It redefines how businesses create value with the help of data. The availability of big data has fostered unprecedented breakthroughs in machine learning that could not have been possible before.

With access to large datasets, businesses are now able to derive meaningful learning and come up with amazing results. It is no wonder, then, that businesses are quickly moving from a hypothesis-based research approach to a more focused "data first" strategy.

Businesses can now process massive volumes of data, which was not possible before due to technical limitations. Previously, they had to buy powerful and expensive hardware and software. The widespread availability of data is the most important paradigm shift fostering a culture of innovation in the industry.

The availability of massive datasets has corresponded with remarkable breakthroughs in machine learning, mainly due to the emergence of better, more sophisticated AI algorithms.

The best example of these breakthroughs is virtual agents. Virtual agents (more commonly known as chatbots) have gained impressive traction over time. Previously, chatbots had trouble identifying certain phrases, regional accents, dialects or nuances.

In fact, most chatbots get stumped by the simplest of words and expressions, such as mistaking "Queue" for "Q" and so on. With the union of big data and AI, however, we can see new breakthroughs in the way virtual agents can self-learn.

A good example of self-learning virtual agents is Amelia, a cognitive agent recently developed by IPSoft. Amelia can understand everyday language, learn really fast and even get smarter with time!

She is deployed at the help desk of the Nordic bank SEB, along with a number of public-sector agencies. The reaction of executive teams to Amelia has been overwhelmingly positive.

Google is also delving deeper into big data-powered AI learning. DeepMind, Google's very own artificial intelligence company, has developed an AI that can teach itself to walk, run, jump and climb without any prior guidance. The AI was never taught what walking or running is, but managed to learn them itself through trial and error.

The implications of these breakthroughs in the realm of artificial intelligence are astounding and could provide the foundation for further innovations in the times to come. However, there are dire repercussions of self-learning algorithms too, and if you weren't too busy to notice, you may have observed quite a few in the past.

Not long ago, Microsoft introduced its own artificial intelligence chatbot named Tay. The bot was made available to the public for chatting and could learn through human interactions. However, Microsoft pulled the plug on the project only a day after the bot was introduced to Twitter.

Learning at an exponential level mainly through human interactions, Tay transformed from an innocent AI teen girl into an evil, Hitler-loving, incestuous, sex-promoting, "Bush did 9/11"-proclaiming robot in less than 24 hours.

Some fans of sci-fi movies like Terminator also voice concerns that with the access it has to big data, artificial intelligence may become self-aware and that it may initiate massive cyberattacks or even take over the world. More realistically speaking, it may replace human jobs.

Looking at the rate of AI-learning, we can understand why a lot of people around the world are concerned with self-learning AI and the access it enjoys to big data. Whatever the case, the prospects are both intriguing and terrifying.

There is no telling how the world will react to the amalgamation of big data and artificial intelligence. However, like everything else, it has its virtues and vices. For example, it is true that self-learning AI will herald a new age where chatbots become more efficient and sophisticated in answering user queries.

Perhaps we will eventually see AI bots at help desks in banks, waiting to greet us. And, through self-learning, these bots will have all the knowledge they could ever need to answer our queries in a manner unlike any human assistant.

Whatever the applications, we can surely say that combining big data with artificial intelligence will herald an age of new possibilities and astounding breakthroughs and innovations in technology. Let's just hope that the virtues of this union will outweigh the vices.


Artificial Intelligence Gets a Lot Smarter – Barron’s

Aug. 18, 2017 11:46 p.m. ET

Artificial Intelligence has become a hot topic based on the achievements of the biggest computing giants: Alphabet (GOOGL), Microsoft (MSFT), Facebook (FB) and others.

But technology keeps evolving and it is time for AI to take the next step in its evolution, becoming more accessible to businesses large and small.

That charge is being led by Veritone (ticker: VERI), which went public in May, and which claims to offer a platform that can be used to run AI tasks in a variety of areas, including understanding natural language and analyzing video, for clients from law firms to media companies.

Competing with Veritone is the 800-lb. gorilla of cloud computing, Amazon.com (AMZN), whose Amazon Web Services, or AWS, already claims to be the largest outlet for AI. Amazon provides AI capability on top of the raft of other services it offers clients in the cloud.

Industry veterans smell the opportunity in this new wave of AI. Billionaire software pioneer Tom Siebel has a new venture that he says is doing some key work applying AI for clients in industries from energy to manufacturing to health care.

"This vector of AI is unstoppable," Siebel tells Barron's.

WHEN IT FIRST EMERGED on the scene, AI was strictly the province of large companies such as Alphabet's (GOOGL) Google, as this magazine reported in 2015 ("Watch Out Intel, Here Comes Facebook," October 31, 2015). That is because AI has typically required both massive computing facilities and access to vast amounts of data, which only the largest companies could provide.

More and more, though, it's becoming accessible to smaller companies. Amazon's director of deep learning, Matt Wood, is an M.D. and Ph.D. who worked on the Human Genome Project and is now focused on the many customers doing various AI tasks on Amazon's AWS, from furniture maker Herman Miller to Kelley Blue Book to the American Heart Association. "In the last five years [AI] has gone from being a really academic preoccupation to addressing customer needs," he says.

Still, Veritone founder and CEO Chad Steelberg believes he's discerned a weakness in Amazon's and Google's approach to AI that is an opportunity for his company.

ARTIFICIAL INTELLIGENCE TODAY means having a computer pore through reams of data looking for patterns. The machine on its own learns the rules of natural language grammar, say, or the difference between a picture of a cat on the internet versus that of a person or a car.

Steelberg, 46, says the problem is that no single algorithm used by a machine can be trained to sufficient accuracy. But train enough engines to work together and you can get high levels of accuracy, he says, noting that Veritone has spent time striking deals with researchers who each have an algorithm or a collection of algorithms, which he refers to as "engines."

"Google has one engine," he says; "we have 30 of them just for natural language processing," and 77 overall if you count engines it has acquired to work on other types of machine-learning problems. His company is attempting to acquire more, across various domains of expertise.
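The arithmetic behind combining many engines is the familiar ensemble argument: if each engine errs independently, a majority vote is wrong far less often than any single engine. A minimal sketch of that calculation (the independence and the 70% per-engine accuracy are illustrative assumptions, not Veritone figures):

```python
from math import comb

def majority_vote_accuracy(n_engines: int, p_single: float) -> float:
    """Probability that a majority of n independent engines is correct,
    assuming each engine is right with probability p_single."""
    needed = n_engines // 2 + 1
    return sum(
        comb(n_engines, k) * p_single**k * (1 - p_single)**(n_engines - k)
        for k in range(needed, n_engines + 1)
    )

print(round(majority_vote_accuracy(1, 0.70), 3))   # a single engine: 0.70
print(round(majority_vote_accuracy(30, 0.70), 3))  # 30 engines: roughly 0.98
```

In practice engines make correlated mistakes, so real-world gains are smaller, which is one reason vendors in this space emphasize process and tuning as much as the algorithms themselves.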

Although no one Veritone client necessarily has massive amounts of data on its own, Steelberg expects to make up for that by pooling insights across his company's customer base.

For all the promise, Veritone's shares are down a whopping 40% since their debut.

Steelberg, who started programming in the fourth grade, sold his last company, radio advertising outfit dMarc, to Google for $1.24 billion in 2006. He's undaunted by the lackluster reception. "The biggest challenge we have," he says, "is that we are three years ahead of the average investor in terms of understanding the opportunity."

OLD HANDS IN SOFTWARE are just as enthusiastic. Siebel, 64, who sold his Siebel Systems to Oracle for nearly $6 billion in 2006, is solving another aspect of the AI problem: getting access to data.

His new company, C3 IoT, which is still private, is doing work for one of the largest electric utilities in the world, Italy's Enel (ENEL.Italy), observing patterns in tens of millions of electric meters throughout Europe. C3 IoT hopes to help Enel discover how much of its electricity is being used versus how much is being paid for, to ferret out fraud.

There are numerous uses of machine learning across many industries. C3 is also working with Deere (DE) to reduce inventory. The heavy-equipment maker, for instance, keeps some $4 billion worth of parts on hand for use in manufacturing. Using AI, it's possible to get a better sense of how much is practically needed and reduce Deere's working capital costs.

The big picture to Siebel is that more sources of data are becoming available because sensors with network connections are being attached to every part of the electrical grid, and to other forms of infrastructure. These sensors are popularly known as the Internet of Things, a kind of second Internet that connects machines rather than people on their computers and smartphones. The Enel project is the largest such IoT project in the world, claims Siebel, connecting 42 million sensors.

In the future, he says, "all problems will be problems of IoT." And the winners will be the companies that have enough sensors in place to generate the data needed to solve complex business problems.

TIERNAN RAY can be reached at: tiernan.ray@barrons.com, http://www.blogs.barrons.com/techtraderdaily or @barronstechblog


Study: Government Should Think Carefully About Those Big Plans for Artificial Intelligence – Government Technology

Government is always being asked to do more with less: less money, less staff, just all-around less. That makes the idea of artificial intelligence (AI) a pretty attractive row to hoe. If a piece of technology could reduce staff workload or walk citizens through a routine process or form, you could effectively multiply a workforce without ever actually adding new people.

But for every good idea, there are caveats, limitations, pitfalls and the desire to push the envelope. While innovating anything in tech is generally a good thing, when it comes to AI in government, there is a fine line to walk between improving a process and potentially making it more convoluted.

Outside of a few key government functions, a new white paper from the Harvard Ash Center for Democratic Governance and Innovation finds that AI could actually increase the burden of government and muddy up the functions it is so desperately trying to improve.

Hila Mehr, a Center for Technology and Democracy fellow, explained that there are five key government problems that AI might be able to assist with reasonably: resource allocation, large data sets, expert shortages, predictable scenarios, and procedural and diverse data.

And governments have already started moving into these areas. In Arkansas and North Carolina, chatbots are helping those states connect with their citizens through Facebook. In Utah and Mississippi, Amazon Alexa skills have been introduced to better connect constituents to the information and services they need.

Unlike Hollywood representations of AI in film, Mehr said, the real applications for artificial intelligence in a government organization are generally far from sexy. The administrative aspects of governing are where tools like this will excel.

When it comes to things like expert shortages, she said she sees AI as a means to support existing staff. In a situation where doctors are struggling to meet the needs of all of their patients, AI could act as a research tool. The same is true of lawyers dealing with thousands of pages of case law: AI could be used as a research assistant.

“If you're talking about government offices that are limited in staff and experts,” Mehr said, “that's where AI trained on niche issues could come in.”

But, she warned, AI is not without its problems, namely making sure that it is not furthering human bias written in during the programming process and played out through the data it is fed. Rather than relying on AI to make critical decisions, she argues that any algorithms and decisions made by or as a result of AI should retain a human component.

“We can't rely on them to make decisions, so we need that check. The way we have checks in our democracy, we need to have checks on these systems as well, and that's where the human group or panel of individuals comes in,” Mehr said. “The way that these systems are trained, you can't always know why they are making the decision they are making, which is why it's important to not let that be the final decision, because it can be a black box depending on how it is trained, and you want to make sure that it is not running on its own.”

But past the fear that the technology might disproportionately impact certain citizens or might somehow complicate the larger process, there is the somewhat legitimate fear that the implementation of AI will mean lost jobs. Mehr said it's a thought that even she has had.

“On the employee side, I think a lot of people view this, rightly so, as something that could replace them,” she added. “I worry about that in my own career, but I know that it is even worse for people who might have administrative roles. But I think early studies have shown that you're using AI to help people in their work so that they are spending less time doing repetitive tasks and more time doing the actual work that requires a human touch.”

In both her white paper and on the phone, Mehr is careful to advise against going whole hog into AI with the expectation that it can replace costly personnel. Instead she advocates for the technology as a tool to build and supplement the team that already exists.

As for where the technology could run afoul of human jobs, Mehr advises that government organizations and businesses alike start considering labor practices in advance.

“Inevitably, it will replace some jobs,” she said. “People need to be looking at fair labor practices now, so that they can anticipate these changes to the market and be prepared for them.”

With any blossoming technology, there are barriers to entry and hurdles that must be overcome before a useful tool is in the hands of those best fit to use it. And as with anything, money and resources present a significant challenge, but Mehr said large amounts of data are also needed to get AI, especially learning systems, off the ground successfully.

“If you are talking about simple automation or [answering] a basic set of questions, it shouldn't take that long. If you are talking about really training an AI system with machine learning, you need a big data set, a very big data set, and you need to train it, not just feed the system data and then it's ready to go,” she said. “The biggest barriers are time and resources, both in the sense of data and trained individuals to do that work.”


Artificial intelligence is coming to medicine don’t be afraid – STAT

Automation could replace one-third of U.S. jobs within 15 years. Oxford and Yale experts recently predicted that artificial intelligence could outperform humans in a variety of tasks by 2045, ranging from writing novels to performing surgery and driving vehicles. A little human rage would be a natural response to such unsettling news.

Artificial intelligence (AI) is bringing us to the precipice of an enormous societal shift. We are collectively worrying about what it will mean for people. As a doctor, I'm naturally drawn to thinking about AI's impact on the practice of medicine. I've decided to welcome the coming revolution, believing that it offers a wonderful opportunity for increases in productivity that will transform health care to benefit everyone.

Groundbreaking AI models have bested humans in complex reasoning games, like the recent victory of Google's AlphaGo AI over the human Go champ. What does that mean for medicine?


To date, most AI solutions have solved minor human issues: playing a game or helping order a box of detergent. The innovations need to matter more. The true breakthroughs and potential of AI lie in real advancements in human productivity. A McKinsey Global Institute report suggests that AI is helping us approach an unparalleled expansion in productivity that will yield five times the increase introduced by the steam engine and about 1 1/2 times the improvements we've seen from robotics and computers combined. We simply don't have a mental model to comprehend the potential of AI.

Across all industries, an estimated 60 percent of jobs will have 30 percent of their activities automated; about 5 percent of jobs will be 100 percent automated.

What this means for health care is murky right now. Does that 5 percent include doctors? After all, medicine is a series of data points of a knowable nature with clear treatment pathways that could be automated. That premise, though, fantastically overstates and misjudges the capabilities of AI and dangerously oversimplifies the complexity underpinning what physicians do. Realistically, AI will perform many discrete tasks better than humans can, which, in turn, will free physicians to focus on accomplishing higher-order tasks.

If you break down the patient-physician interaction, its complexity is immediately obvious. Requirements include empathy, information management, application of expertise in a given context, negotiation with multiple stakeholders, and unpredictable physical response (think of surgery), often with a life on the line. These are not AI-applicable functions.

I mentioned AlphaGo AI beating human experts at the game. The reason this feat was so impressive is the high branching factor and complexity of the Go game tree: there are an estimated 250 choices per move, permitting estimates of 10 to the 170th different game outcomes. By comparison, chess has a branching factor of 35, with 10 to the 47th different possible game outcomes. Medicine, with its infinite number of moves and outcomes, is decades away from medical approaches safely managed by machines alone.
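For a rough sense of why the branching factor matters so much, a back-of-the-envelope estimate (assuming a fixed branching factor b over d moves, and ignoring transpositions and variable game length):

```latex
N \approx b^{d}, \qquad \log_{10} N \approx d \, \log_{10} b
```

Each Go move multiplies the count of continuations by roughly 250 (about 2.4 orders of magnitude), versus roughly 35 (about 1.5 orders of magnitude) per chess move, so the gap compounds with every additional move.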

We still need the human factor.

That said, more than 20 percent of a physician's time is now spent entering data. Since doctors are increasingly overburdened with clerical tasks like electronic health record entry, prior authorizations, and claims management, they have less time to practice medicine, do research, master new technology, and improve their skills. We need a radical enhancement in productivity just to sustain our current health standards, much less move forward. Thoughtfully combining human expertise and automated functionality creates an augmented physician model that scales and advances the expertise of the doctor.

Physicians would rather practice at the top of their licensing and address complex patient interaction than waste time entering data, faxing (yes, faxing!) service authorizations, or tapping away behind a computer. The clerical burdens pushed by fickle health care systems onto physicians and other care providers are both unsustainable and a waste of our best and brightest minds. It's the equivalent of asking an airline pilot to manage the ticket counter, count the passengers, handle the standby and upgrade lists, and give the safety demonstrations, and then fly the plane. AI can help with such support functions.

But to radically advance health care productivity, physicians must work alongside innovators to atomize the tasks of their work. Understanding where they can let go to unlock time is essential, as is collaborating with technologists to guide truly useful development.

Perhaps it makes sense to start with automated interpretation of basic labs, dose adjustment for given medications, speech-to-text tools that simplify transcription or document face-to-face interactions, or even automated wound closure, and then move on from there.
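
To make the first item on that list concrete, here is a minimal, hypothetical sketch of what "automated interpretation of basic labs" could look like: a rule that flags results falling outside a reference range so a physician's attention goes straight to the exceptions. The test names and ranges are illustrative placeholders chosen for the example, not clinical guidance, and the sketch is not drawn from the article.

# Hypothetical sketch of automated flagging of basic lab results.
# Reference ranges below are illustrative placeholders, not clinical guidance.
from typing import NamedTuple

class ReferenceRange(NamedTuple):
    low: float
    high: float
    unit: str

# Assumed example ranges for two common tests (placeholders for illustration).
REFERENCE = {
    "potassium": ReferenceRange(3.5, 5.0, "mmol/L"),
    "hemoglobin": ReferenceRange(12.0, 17.5, "g/dL"),
}

def flag_labs(results):
    """Return human-readable flags for out-of-range or unknown results."""
    flags = []
    for test, value in results.items():
        ref = REFERENCE.get(test)
        if ref is None:
            flags.append(f"{test}: no reference range on file, route to physician")
        elif value < ref.low:
            flags.append(f"{test}: {value} {ref.unit} LOW (reference {ref.low}-{ref.high})")
        elif value > ref.high:
            flags.append(f"{test}: {value} {ref.unit} HIGH (reference {ref.low}-{ref.high})")
    return flags

print(flag_labs({"potassium": 5.6, "hemoglobin": 13.2}))
# ['potassium: 5.6 mmol/L HIGH (reference 3.5-5.0)']

The point of even a toy rule like this is the division of labor the article argues for: the machine handles the rote screening, and the physician's time goes to the flagged exceptions and the judgment calls around them.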

It will be important for physicians and patients to engage and help define the evolution of automation in medicine in order to protect patient care. And physicians must be open to how new roles for them can be created by rapidly advancing technology.

If it all sounds a bit dreamy, I offer an instructive footnote about experimentation with AlphaGo AI. The recent game summit proving AlphaGo's prowess also demonstrated that human talent increases significantly when paired with AI. This hybrid model of humans and machines working together presents a scalable automation paradigm for medicine, one that creates new tasks and roles for essential medical and technology professionals, increasing the capabilities of the entire field as we move forward.

Physicians should embrace this opportunity rather than fear it. It's time to rage with the machine.

Jack Stockert, M.D., is a managing director and leader of strategy and business development at Health2047, a Silicon Valley-based innovation company.

More:

Artificial intelligence is coming to medicine don’t be afraid – STAT

