Artificial intelligence – Wikipedia

Intelligence demonstrated by machines

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[3] A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[4] For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.[5] Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go),[7] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[8][9] followed by disappointment and the loss of funding (known as an "AI winter"),[10][11] followed by new approaches, success and renewed funding.[9][12] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[13] These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"),[14] the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences.[15][16][17] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[13]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[14] General intelligence is among the field's long-term goals.[18] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[19] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity.[20] Some people also consider AI to be a danger to humanity if it progresses unabated.[21][22] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[23]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[24][12]

Thought-capable artificial beings appeared as storytelling devices in antiquity,[25] and have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[26] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[20]

The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[27] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed changing the question from whether a machine was intelligent, to "whether or not it is possible for machinery to show intelligent behaviour".[28] The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".

The field of AI research was born at a workshop at Dartmouth College in 1956,[30] where the term "Artificial Intelligence" was coined by John McCarthy to distinguish the field from cybernetics and escape the influence of the cyberneticist Norbert Wiener.[31] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[32] They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies (c. 1954)[34] (and by 1959 were reportedly playing better than the average human),[35] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[36] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[37] and laboratories had been established around the world.[38] AI's founders were optimistic about the future: Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". Marvin Minsky agreed, writing, "within a generation... the problem of creating 'artificial intelligence' will substantially be solved".[8]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an "AI winter",[10] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[40] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S and British governments to restore funding for academic research.[9] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[11]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[24] The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[41] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system Watson defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[44] The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[45] as do intelligent personal assistants in smartphones.[46] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[7][47] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[48] who at the time had continuously held the world No. 1 ranking for two years.[49][50] This marked a significant milestone in the development of artificial intelligence, as Go is considerably more complex than chess.

According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence: the number of software projects that use AI within Google increased from "sporadic usage" in 2012 to more than 2,700 projects. Clark also presents data indicating that error rates in image processing tasks have fallen significantly since 2012.[51] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[12] Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.[51] In a 2017 survey, one in five companies reported they had "incorporated AI in some offerings or processes".[52][53] Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an "AI superpower".[54][55] However, it has been acknowledged that reports regarding artificial intelligence have tended to be exaggerated.[56][57][58]

Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] A more elaborate definition characterizes AI as a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.[59]

A typical AI analyzes its environment and takes actions that maximize its chance of success.[1] An AI's intended utility function (or goal) can be simple ("1 if the AI wins a game of Go, 0 otherwise") or complex ("Perform actions mathematically similar to ones that succeeded in the past"). Goals can be explicitly defined or induced. If the AI is programmed for "reinforcement learning", goals can be implicitly induced by rewarding some types of behavior or punishing others.[a] Alternatively, an evolutionary system can induce goals by using a "fitness function" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose "goal" is to successfully accomplish its narrow classification task.[62]
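As an illustration of goal induction, here is a minimal evolutionary sketch (all parameters hypothetical): the program is never told to maximize the count of 1-bits in a genome; that goal is induced entirely by the fitness function used to decide which candidates replicate.

```python
import random

def evolve(fitness, genome_len=12, pop_size=20, generations=60, seed=0):
    """Evolve bitstrings toward higher fitness via mutation and selection."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Preferentially keep the highest-scoring half of the population...
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # ...and refill it with mutated copies of the survivors.
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one random bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# The "goal" (maximizing the count of 1-bits) is implicit in the fitness
# function alone; selection drives the count upward over generations.
best = evolve(fitness=sum)
print(sum(best))
```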

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is the following (optimal for first player) recipe for play at tic-tac-toe:

1. If someone has a "threat" (that is, two in a row), take the remaining square. Otherwise,
2. if a move "forks" to create two threats at once, play that move. Otherwise,
3. take the center square if it is free. Otherwise,
4. if your opponent has played in a corner, take the opposite corner. Otherwise,
5. take an empty corner if one exists. Otherwise,
6. take any empty square.
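Such a recipe can be sketched directly in code (an illustrative implementation, not a canonical one; the helper names are invented):

```python
def best_move(board, player):
    """Return an index 0-8 for `player` ('X' or 'O') on a 9-cell board
    (a list holding 'X', 'O', or None), following a priority recipe."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    opponent = 'O' if player == 'X' else 'X'
    empty = [i for i in range(9) if board[i] is None]

    def threats(b, who):
        # Empty squares that would complete three-in-a-row for `who`.
        return {k for line in lines for k in line
                if b[k] is None and sum(b[j] == who for j in line) == 2}

    # 1. Complete our own two-in-a-row, else block the opponent's.
    for who in (player, opponent):
        t = threats(board, who)
        if t:
            return min(t)
    # 2. Play a move that creates two threats at once (a "fork").
    for i in empty:
        trial = board[:]
        trial[i] = player
        if len(threats(trial, player)) >= 2:
            return i
    # 3. Take the center if free.
    if 4 in empty:
        return 4
    # 4. If the opponent holds a corner, take the opposite corner.
    for corner, opposite in ((0, 8), (2, 6), (8, 0), (6, 2)):
        if board[corner] == opponent and opposite in empty:
            return opposite
    # 5. Take any empty corner, else 6. any empty square.
    for i in (0, 2, 6, 8):
        if i in empty:
            return i
    return empty[0]
```

For example, `best_move(['O','O',None] + [None]*6, 'X')` blocks at square 2.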

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or "rules of thumb", that have worked well in the past), or can themselves write other algorithms. Some of the "learners" described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically (given infinite data, time, and memory) learn to approximate any function, including which combination of mathematical functions would best describe the world[citation needed]. These learners could therefore derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of "combinatorial explosion", where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering a broad range of possibilities that are unlikely to be beneficial.[64] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.[66]
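To make the contrast concrete, here is a minimal A* sketch on a toy grid (the grid size and layout are invented): the Manhattan-distance heuristic steers expansion toward the goal, so routes leading far away from it are mostly never considered.

```python
import heapq

def astar(start, goal, walls, size=10):
    """A* search on a size x size grid of (x, y) cells, avoiding `walls`."""
    def h(p):  # admissible heuristic: Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    # Each frontier entry: (estimated total cost f, cost so far g, cell, path)
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in walls and nxt not in seen):
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no route exists

path = astar((0, 0), (9, 9), walls=set())
print(len(path))  # 19 cells: 18 moves, with no "westward" detours expanded
```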

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): "If an otherwise healthy adult has a fever, then they may have influenza". A second, more general, approach is Bayesian inference: "If the current patient has a fever, adjust the probability they have influenza in such-and-such way". The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: "After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza". A fourth approach is harder to intuitively understand, but is inspired by how the brain's machinery works: the artificial neural network approach uses artificial "neurons" that learn by comparing the network's output with the desired output and altering the strengths of the connections between internal neurons to "reinforce" connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms; the best approach is often different depending on the problem.[68]
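The "analogizer" approach can be sketched in a few lines of nearest-neighbor code; the patient records below are invented for illustration (each feature pair is a temperature in °C and a symptom count):

```python
def knn_predict(records, query, k=3):
    """Classify `query` by majority vote among the k most similar past
    records. Each record is (features, label); similarity here is plain
    Euclidean distance between feature vectors."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(records, key=lambda r: dist(r[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical past patients: (temperature, symptom count) -> had influenza?
records = [((39.1, 4), True), ((38.7, 5), True), ((39.4, 4), True),
           ((36.8, 1), False), ((37.0, 0), False), ((36.6, 1), False)]
print(knn_predict(records, (38.9, 3)))  # mostly matches feverish records: True
```

A real system would also normalize the features so that no single one dominates the distance.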

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as "since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well". They can be nuanced, such as "X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist". Learners also work on the basis of "Occam's razor": The simplest theory that explains the data is the likeliest. Therefore, according to Occam's razor principle, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better.

Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Besides classic overfitting, learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers don't determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an "adversarial" image that the system misclassifies.[c][71][72][73]
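One common way to encode this trade-off is to score each candidate theory by its fit error plus a complexity penalty. The sketch below (with a hypothetical penalty weight) selects a polynomial degree this way: every degree fits the straight-line data equally well, so the penalty makes the simplest theory win.

```python
import numpy as np

def pick_model(xs, ys, max_degree=6, penalty=0.5):
    """Choose a polynomial degree by minimizing an Occam-style score:
    mean squared fit error plus `penalty` per parameter."""
    best = None
    for d in range(max_degree + 1):
        coeffs = np.polyfit(xs, ys, d)
        err = float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))
        score = err + penalty * (d + 1)  # more parameters -> bigger penalty
        if best is None or score < best[0]:
            best = (score, d)
    return best[1]

# Data generated exactly by a line: the penalized score prefers degree 1.
xs = np.arange(10, dtype=float)
ys = 2 * xs + 1
print(pick_model(xs, ys))  # 1
```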

Compared with humans, existing AI lacks several features of human "commonsense reasoning"; most notably, humans have powerful mechanisms for reasoning about "naïve physics" such as space, time, and physical interactions. This enables even young children to easily make inferences like "If I roll this pen off a table, it will fall on the floor". Humans also have a powerful mechanism of "folk psychology" that helps them to interpret natural-language sentences such as "The city councilmen refused the demonstrators a permit because they advocated violence". (A generic AI has difficulty discerning whether the ones alleged to be advocating violence are the councilmen or the demonstrators.)[76][77][78] This lack of "common knowledge" means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[79][80][81]

The cognitive capabilities of current architectures are very limited, using only a simplified version of what intelligence is really capable of. The human mind, for instance, can reason in open-ended ways about everyday occurrences, and a problem that is straightforward for a person may be challenging to solve computationally. This gives rise to two classes of models: structuralist and functionalist. Structural models aim to loosely mimic the basic intelligence operations of the mind, such as reasoning and logic. Functional models refer only to the correspondence between input data and its computed counterpart.[82]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[14]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[83] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[84]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a "combinatorial explosion": they became exponentially slower as the problems grew larger.[64] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgments.[85]

Knowledge representation[86] and knowledge engineering[87] are central to classical AI research. Some "expert systems" attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the "commonsense knowledge" known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[88] situations, events, states and time;[89] causes and effects;[90] knowledge about knowledge (what we know about what other people know);[91] and many other, less well researched domains. A representation of "what exists" is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[92] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[93] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[94] scene interpretation,[95] clinical decision support,[96] knowledge discovery (mining "interesting" and actionable inferences from large databases),[97] and other areas.[98]

Among the most difficult problems in knowledge representation are:

- Default reasoning and the qualification problem: many of the things people know take the form of "working assumptions" that are revised when exceptions arise.
- The breadth of commonsense knowledge: the number of atomic facts that the average person knows is very large, and assembling them by hand is extremely slow.
- The subsymbolic form of some commonsense knowledge: much of what people know is not represented as facts or statements that they could express verbally.

Intelligent agents must be able to set goals and achieve them.[105] They need a way to visualize the future (a representation of the state of the world, with predictions about how their actions will change it) and to make choices that maximize the utility (or "value") of the available choices.[106]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[107] However, if the agent is not the only actor, then it must be able to reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[108]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[109]

Machine learning (ML), a fundamental concept of AI research since the field's inception,[110] is the study of computer algorithms that improve automatically through experience.[111][112]

Unsupervised learning is the ability to find patterns in a stream of input, without requiring a human to label the inputs first. Supervised learning includes both classification and numerical regression, which requires a human to label the input data first. Classification is used to determine what category something belongs in, and occurs after a program sees a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[112] Both classifiers and regression learners can be viewed as "function approximators" trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, "spam" or "not spam". Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[113] In reinforcement learning[114] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
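The reward-and-punishment loop of reinforcement learning can be sketched with tabular Q-learning on an invented five-state corridor: reaching the final state pays +1, every step costs a small penalty, and from that signal alone the agent forms a strategy.

```python
import random

def q_learn(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a corridor: states 0..n-1, actions 0=left, 1=right.
    Reaching the last state pays +1; every other step is mildly punished."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            a = rng.randrange(2) if rng.random() < eps else Q[s].index(max(Q[s]))
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else -0.01  # reward goal, punish delay
            # Move the estimate toward reward plus discounted future value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learn()
policy = [q.index(max(q)) for q in Q[:-1]]
print(policy)  # the learned strategy: always move right toward the reward
```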

Natural language processing[115] (NLP) gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[116] and machine translation.[117] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. "Keyword spotting" strategies for search are popular and scalable but dumb; a search query for "dog" might only match documents with the literal word "dog" and miss a document with the word "poodle". "Lexical affinity" strategies use the occurrence of words such as "accident" to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, the ultimate goal of "narrative" NLP is to embody a full understanding of commonsense reasoning.[118]
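The gap between keyword spotting and affinity-based matching can be shown with a toy corpus (all data invented; the hand-written affinity table stands in for word associations a real system would learn from statistics):

```python
# Toy document collection, purely illustrative.
docs = {
    "d1": "my dog chased the ball",
    "d2": "the poodle won best in show",
    "d3": "stock prices fell sharply",
}

def keyword_search(query, docs):
    """"Keyword spotting": match only documents containing the literal word."""
    return {d for d, text in docs.items() if query in text.split()}

# A hypothetical affinity table mapping a word to associated words.
AFFINITY = {"dog": {"dog", "poodle", "puppy"}}

def affinity_search(query, docs):
    """Expand the query with associated words before matching."""
    terms = AFFINITY.get(query, {query})
    return {d for d, text in docs.items() if terms & set(text.split())}

print(keyword_search("dog", docs))   # {'d1'}: misses the poodle document
print(affinity_search("dog", docs))  # finds 'd1' and 'd2'
```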

Machine perception[119] is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition,[120] facial recognition, and object recognition.[121] Computer vision is the ability to analyze visual input. Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its "object model" to assess that fifty-meter pedestrians do not exist.[122]

AI is heavily used in robotics.[123] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[124] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; however, dynamic environments, such as (in endoscopy) the interior of a patient's breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into "primitives" such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.[126][127] Moravec's paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[128][129] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[130]

Moravec's paradox can be extended to many forms of social intelligence.[132][133] Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.[134] Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[138]

In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.[139] Similarly, some virtual assistants are programmed to speak conversationally or even to banter humorously; this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.[140]

Historically, projects such as the Cyc knowledge base (1984) and the massive Japanese Fifth Generation Computer Systems initiative (1982–1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, the vast majority of AI researchers work instead on tractable "narrow AI" applications (such as medical diagnosis or automobile navigation).[141] Many researchers predict that such "narrow AI" work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[18][142] Many advances have general, cross-domain significance. One high-profile example is that DeepMind in the 2010s developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[143][144][145] Besides transfer learning,[146] hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to "slurp up" a comprehensive knowledge base from the entire unstructured Web. Some argue that some kind of (currently undiscovered) conceptually straightforward, but mathematically difficult, "Master Algorithm" could lead to AGI. Finally, a few "emergent" approaches seek to simulate human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[148][149]

Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence). A problem like machine translation is considered "AI-complete", because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[150] A few of the longest-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[15] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[16]

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[151] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and as described below, each one developed its own style of research. John Haugeland named these symbolic approaches to AI "good old fashioned AI" or "GOFAI".[152] During the 1960s, symbolic approaches had achieved great success at simulating high-level "thinking" in small demonstration programs. Approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.[153] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the mid-1980s.[154][155]

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[15] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[156] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[157]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[158] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford).[16] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.[159]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[160] This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[40] A key component of the system architecture for all expert systems is the knowledge base, which stores the facts and rules that the system reasons with.[161] The knowledge revolution was also driven by the realization that even many simple AI applications would require enormous amounts of knowledge.

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.[17] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[162] Their work revived the non-symbolic point of view of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[163][164]

Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle of the 1980s.[167] Artificial neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, Grey system theory, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[168]

Much of traditional GOFAI got bogged down in ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results. However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures. The shared mathematical language permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).[d] Compared with GOFAI, new "statistical learning" techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring a semantic understanding of the datasets. The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models; AI research was becoming more scientific. Nowadays, results of experiments are often rigorously measurable, and are sometimes (with difficulty) reproducible.[41][169] Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language. Critics note that the shift from GOFAI to statistical learning is often also a shift away from explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.

AI has developed many tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[179] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[180] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[181] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[124] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[182] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that prioritize choices in favor of those that are more likely to reach a goal, and to do so in fewer steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies.[183] Heuristics limit the search for solutions to a smaller portion of the search space.
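
A minimal sketch of heuristic search is greedy best-first search, which always expands the frontier node the heuristic judges closest to the goal; nodes the heuristic never favors are effectively pruned. The toy graph and heuristic values below are invented for illustration.

```python
import heapq

def greedy_best_first(graph, heuristic, start, goal):
    """Expand nodes in order of their heuristic estimate to the goal;
    nodes judged unpromising may never be expanded ("pruning")."""
    frontier = [(heuristic[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(
                    frontier,
                    (heuristic[neighbor], neighbor, path + [neighbor]))
    return None

# Hypothetical map; heuristic values play the role of straight-line
# distance estimates to the goal G.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["G"]}
heuristic = {"A": 3, "B": 2, "C": 2, "D": 1, "G": 0}
print(greedy_best_first(graph, heuristic, "A", "G"))  # ['A', 'B', 'D', 'G']
```

Unlike exhaustive search, the greedy strategy touches only the nodes its "best guess" favors, at the cost of not guaranteeing the shortest path.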

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[184]
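
The incremental-refinement idea can be sketched as a simple hill-climbing loop; the one-dimensional landscape function and step size below are illustrative assumptions.

```python
def hill_climb(f, x, step=0.1, max_iters=1000):
    """Repeatedly move to a neighboring guess if it scores higher;
    stop when no neighbor improves (a local maximum is reached)."""
    for _ in range(max_iters):
        neighbors = [x - step, x + step]
        best = max(neighbors, key=f)
        if f(best) <= f(x):
            return x  # no refinement improves the guess
        x = best
    return x

# Maximize a toy landscape with a single peak at x = 3.
peak = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
print(round(peak, 1))  # ≈ 3.0
```

Because the climber only ever moves uphill, it can stall on a local maximum; that is precisely the weakness simulated annealing and random restarts are designed to mitigate.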

Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming.[185] Alternatively, distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[186][187]
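
The mutate/recombine/select loop can be sketched as a tiny genetic algorithm. The fitness function here is the classic "OneMax" toy problem (count the 1 bits); population size, mutation rate, and generation count are arbitrary choices for illustration.

```python
import random

random.seed(0)

def evolve(fitness, pop_size=30, length=10, generations=60):
    """Tiny genetic algorithm over bit strings: keep the fittest half
    (selection), splice pairs of survivors (crossover), and flip an
    occasional bit (mutation)."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(length)        # rare point mutation
            child[i] ^= random.random() < 0.1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(fitness=sum)
print(sum(best))  # close to the optimum of 10
```

Each generation refines the population of guesses exactly as the text describes: fitness-based selection discards poor guesses, while crossover and mutation generate new ones.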

Logic[188] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[189] and inductive logic programming is a method for learning.[190]

Several different forms of logic are used in AI research. Propositional logic[191] involves truth functions such as "or" and "not". First-order logic[192] adds quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy set theory assigns a "degree of truth" (between 0 and 1) to vague statements such as "Alice is old" (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false. Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as "if you are close to the destination station and moving fast, increase the train's brake pressure"; these vague rules can then be numerically refined within the system. Fuzzy logic fails to scale well in knowledge bases; many AI researchers question the validity of chaining fuzzy-logic inferences.[e][194][195]
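
The train-braking rule above can be sketched with fuzzy degrees of truth. The membership functions (what counts as "close" or "fast") and the choice of `min` as the fuzzy AND are invented for illustration, not taken from any real control system.

```python
def fuzzy_and(a, b):
    """One common choice of fuzzy conjunction: the minimum t-norm."""
    return min(a, b)

def close_to_station(distance_m):
    """Degree of truth (0..1) that the train is 'close' (within 500 m)."""
    return max(0.0, min(1.0, (500 - distance_m) / 500))

def moving_fast(speed_kmh):
    """Degree of truth (0..1) that the train is 'fast' (80 km/h = fully)."""
    return max(0.0, min(1.0, speed_kmh / 80))

# Rule: IF close AND fast THEN increase brake pressure (to that degree).
def brake_pressure(distance_m, speed_kmh):
    return fuzzy_and(close_to_station(distance_m), moving_fast(speed_kmh))

print(brake_pressure(100, 60))  # close=0.8, fast=0.75 -> pressure 0.75
```

The vague rule contributes nothing when either condition is fully false, and contributes proportionally as both become truer, which is how experts' imprecise rules get numerically refined.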

Default logics, non-monotonic logics and circumscription[100] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[88] situation calculus, event calculus and fluent calculus (for representing events and time);[89] causal calculus;[90] belief calculus (belief revision);[196] and modal logics.[91] Logics to model contradictory or inconsistent statements arising in multi-agent systems have also been designed, such as paraconsistent logics.

Many problems in AI (in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[197]

Bayesian networks[198] are a very general tool that can be used for various problems: reasoning (using the Bayesian inference algorithm),[199] learning (using the expectation-maximization algorithm),[f][201] planning (using decision networks)[202] and perception (using dynamic Bayesian networks).[203] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[203] Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. Complicated graphs with diamonds or other "loops" (undirected cycles) can require a sophisticated method such as Markov chain Monte Carlo, which spreads an ensemble of random walkers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities. Bayesian networks are used on Xbox Live to rate and match players; wins and losses are "evidence" of how good a player is[citation needed]. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.
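
For a network this small, Bayesian inference can be done by exact enumeration rather than Markov chain Monte Carlo: sum the joint probability over the hidden variables and normalize. The rain/sprinkler/wet-grass network and its probabilities below are a hypothetical textbook-style example, not any system mentioned above.

```python
from itertools import product

# Hypothetical network: Rain and Sprinkler are parents of WetGrass.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def p_rain_given_wet():
    """Exact inference by enumeration: sum joint probabilities over the
    hidden variable (Sprinkler), then normalize over Rain."""
    weights = {True: 0.0, False: 0.0}
    for rain, sprinkler in product([True, False], repeat=2):
        joint = (P_rain[rain] * P_sprinkler[sprinkler]
                 * P_wet[(rain, sprinkler)])
        weights[rain] += joint
    return weights[True] / (weights[True] + weights[False])

print(round(p_rain_given_wet(), 2))  # ≈ 0.74
```

Enumeration is exponential in the number of hidden variables, which is why large networks fall back on approximate methods such as the random-walker (MCMC) approach described above.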

A key concept from the science of economics is "utility": a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[205] and information value theory.[106] These tools include models such as Markov decision processes,[206] dynamic decision networks,[203] game theory and mechanism design.[207]
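
As a toy illustration of these decision-theoretic tools, value iteration computes the long-run utility of each state of a Markov decision process. The two-state process below (deterministic transitions, invented rewards, discount factor 0.9) is a deliberately minimal assumption.

```python
states = ["A", "B"]
actions = ["stay", "move"]
reward = {"A": 0.0, "B": 1.0}
gamma = 0.9  # discount factor: how much future utility is worth now
# transition[state][action] -> next state (deterministic for brevity)
transition = {"A": {"stay": "A", "move": "B"},
              "B": {"stay": "B", "move": "A"}}

V = {s: 0.0 for s in states}
for _ in range(100):  # repeated Bellman updates converge to the utilities
    V = {s: reward[s] + gamma * max(V[transition[s][a]] for a in actions)
         for s in states}

print({s: round(v, 2) for s, v in V.items()})  # {'A': 9.0, 'B': 10.0}
```

The converged values say that state B (reward 1 forever, discounted) is worth 10, and state A is worth one discounted step less; an agent maximizing utility therefore moves from A to B.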

The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[208]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree[209] is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,[211] k-nearest neighbor algorithm,[g][213] kernel methods such as the support vector machine (SVM),[h][215] Gaussian mixture model,[216] and the extremely popular naive Bayes classifier.[i][218] Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, distribution of samples across classes, the dimensionality, and the level of noise. Model-based classifiers perform well if the assumed model is an extremely good fit for the actual data. Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as "naive Bayes" on most practical data sets.[219]
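
One of the listed methods, the k-nearest neighbor algorithm, is simple enough to sketch directly: a new observation is assigned the class held by the majority of its nearest labeled observations. The two-feature data set and the "shiny"/"dull" labels below are invented.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among the k nearest labeled
    observations. `train` is a list of (features, class_label) pairs."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda ex: sq_dist(ex[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical data set: two clusters of observations with class labels.
train = [((1.0, 1.0), "shiny"), ((1.2, 0.9), "shiny"),
         ((5.0, 5.0), "dull"), ((5.2, 4.8), "dull"), ((4.9, 5.1), "dull")]
print(knn_classify(train, (1.1, 1.0)))  # shiny
```

Unlike a model-based classifier, k-NN builds no explicit model of each class; its "training" is simply remembering the data set, which is why it is sensitive to dataset size, dimensionality, and noise.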

Neural networks were inspired by the architecture of neurons in the human brain. A simple "neuron" N accepts input from other neurons, each of which, when activated (or "fired"), casts a weighted "vote" for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed "fire together, wire together") is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The neural network forms "concepts" that are distributed among a subnetwork of shared[j] neurons that tend to fire together; a concept meaning "leg" might be coupled with a subnetwork meaning "foot" that includes the sound for "foot". Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural networks can learn both continuous functions and, surprisingly, digital logical operations. Neural networks' early successes included predicting the stock market and (in 1995) a mostly self-driving car.[k] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[222][223]
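
The weighted-vote neuron and the "fire together, wire together" rule can be sketched in a few lines; the weights, threshold, and learning rate below are illustrative values, not taken from any real network.

```python
def fires(inputs, weights, threshold=1.0):
    """Neuron N activates when the weighted 'votes' of its inputs
    meet a threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

def hebbian_update(inputs, weights, fired, rate=0.1):
    """'Fire together, wire together': strengthen the weight from every
    active input when the neuron itself successfully activated."""
    if fired:
        weights = [w + rate * i for i, w in zip(inputs, weights)]
    return weights

weights = [0.6, 0.5]
out = fires([1, 1], weights)              # 1.1 >= 1.0, so N fires
weights = hebbian_update([1, 1], weights, out)
print(out, [round(w, 2) for w in weights])  # True [0.7, 0.6]
```

After the update, the same inputs vote even more strongly for activation, which is the mechanism by which co-firing subnetworks (the distributed "concepts" described above) become coupled.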

The study of non-learning artificial neural networks[211] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others[citation needed].

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[224] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning ("fire together, wire together"), GMDH or competitive learning.[225]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[226][227] and was introduced to neural networks by Paul Werbos.[228][229][230]

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[231]

To summarize, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches[citation needed]. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".[232]

Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a "credit assignment path" (CAP) depth of seven[citation needed]. Many deep learning systems need to be able to learn chains ten or more causal links in length.[233] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[234][235][233]

According to one overview,[236] the expression "Deep Learning" was introduced to the machine learning community by Rina Dechter in 1986[237] and gained traction after Igor Aizenberg and colleagues introduced it to artificial neural networks in 2000.[238] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[239][page needed] These networks are trained one layer at a time. Ivakhnenko's 1971 paper[240] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[242]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[243] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application, CNNs already processed an estimated 10% to 20% of all the checks written in the US.[244] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[233]

CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by DeepMind's "AlphaGo Lee", the program that beat a top Go champion in 2016.[245]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[246] which are in theory Turing complete[247] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[233] RNNs can be trained by gradient descent[248][249][250] but suffer from the vanishing gradient problem.[234][251] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[252]

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[253] LSTM is often trained by Connectionist Temporal Classification (CTC).[254] At Google, Microsoft and Baidu this approach has revolutionized speech recognition.[255][256][257] For example, in 2015, Google's speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[258] Google also used LSTM to improve machine translation,[259] Language Modeling[260] and Multilingual Language Processing.[261] LSTM combined with CNNs also improved automatic image captioning[262] and a plethora of other applications.

AI, like electricity or the steam engine, is a general purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at.[263] While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[264][265] Researcher Andrew Ng has suggested, as a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."[266] Moravec's paradox suggests that AI lags humans at many tasks that the human brain has specifically evolved to perform well.[130]

Games provide a well-publicized benchmark for assessing rates of progress. AlphaGo around 2016 brought the era of classical board-game benchmarks to a close. Games of imperfect knowledge provide new challenges to AI in the area of game theory.[267][268] E-sports such as StarCraft continue to provide additional public benchmarks.[269][270] There are many competitions and prizes, such as the ImageNet Challenge, to promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer, as well as conventional games.[271]

The "imitation game" (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[272] A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

Proposed "universal intelligence" tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are as generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; unfortunately, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.[274][275]

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays,[278] prediction of judicial decisions,[279] targeting online advertisements,[280][281] and energy storage.[282]

With social media sites overtaking TV as a source for news for young people and news organizations increasingly reliant on social media platforms for generating distribution,[283] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[284]

AI can also produce deepfakes, a content-altering technology. ZDNet reports that a deepfake "presents something that did not actually occur". Though 88% of Americans believe deepfakes can cause more harm than good, only 47% of them believe they can be targeted. An election year also opens public discourse to the threat of falsified videos of politicians.[285]

AI in healthcare is often used for classification, whether to automate initial evaluation of a CT scan or EKG or to identify high-risk patients for population health. The breadth of applications is rapidly increasing. As an example, AI is being applied to the high-cost problem of dosage issues, where findings suggested that AI could save $16 billion. In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ patients.[286]

Artificial intelligence is assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[287] There is a great amount of research and drug development relating to cancer; in detail, there are more than 800 medicines and vaccines to treat it. This burdens doctors, because there are too many options to choose from, making it more difficult to select the right drugs for a patient. Microsoft is working on a project to develop a machine called "Hanover"[citation needed]. Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently underway targets myeloid leukemia, a fatal cancer where treatment has not improved in decades. Another study reportedly found that artificial intelligence was as good as trained doctors in identifying skin cancers.[288] Another study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[289] One study, done with transfer learning, produced a machine that performed a diagnosis similarly to a well-trained ophthalmologist, and could decide within 30 seconds whether or not the patient should be referred for treatment, with more than 95% accuracy.[290]

According to CNN, a recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel during open surgery, and doing so better than a human surgeon, the team claimed.[291] IBM has created its own artificial intelligence computer, the IBM Watson, which has beaten human intelligence (at some levels). Watson has struggled to achieve success and adoption in healthcare.[292]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there were over 30 companies utilizing AI in the creation of self-driving cars. A few companies involved with AI include Tesla, Google, and Apple.[293]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high performance computers, are integrated into one complex vehicle.[294]

Recent developments in autonomous automobiles have made the innovation of self-driving trucks possible, though they are still in the testing phase. The UK government has passed legislation to begin testing of self-driving truck platoons in 2018.[295] Self-driving truck platoons are a fleet of self-driving trucks following the lead of one non-self-driving truck, so the truck platoons are not entirely autonomous yet. Meanwhile, Daimler, a German automobile corporation, is testing the Freightliner Inspiration, a semi-autonomous truck that will only be used on the highway.[296]

One main factor that influences the ability of a driverless automobile to function is mapping. In general, the vehicle would be pre-programmed with a map of the area being driven. This map would include data on the approximate heights of street lights and curbs in order for the vehicle to be aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead creating a device that would be able to adjust to a variety of new surroundings.[297] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[298]

Another factor influencing the ability of a driverless automobile is the safety of the passenger. To make a driverless automobile, engineers must program it to handle high-risk situations, such as a head-on collision with pedestrians. The car's main goal should be to make a decision that avoids hitting the pedestrians while saving the passengers in the car. But there is a possibility the car would need to make a decision that would put someone in danger; in other words, the car would need to decide whether to save the pedestrians or the passengers.[299] The programming of the car in these situations is crucial to a successful driverless automobile.

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a Fraud Prevention Task Force to counter the unauthorized use of debit cards.[300] Programs like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems today to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[301] In August 2001, robots beat humans in a simulated financial trading competition.[302] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[303][304][305]

AI is also being used by corporations. Whereas AI CEOs are still 30 years away,[306][307] robotic process automation (RPA) is already used today in corporate finance. RPA uses artificial intelligence to train software robots to process transactions, monitor compliance and audit processes automatically.[308]

The use of AI machines in the market, in applications such as online trading and decision-making, has changed major economic theories.[309] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves, and thus to set individualized pricing. Furthermore, AI machines reduce information asymmetry in the market, making markets more efficient while reducing the volume of trades[citation needed]. AI in the markets also limits the consequences of market behavior, again making markets more efficient[citation needed]. Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking[citation needed]. In August 2019, the AICPA introduced an AI training course for accounting professionals.[310]

Artificial intelligence paired with facial recognition systems may be used for mass surveillance. This is already the case in some parts of China.[311][312] An artificial intelligence also competed in the 2018 Tama City mayoral election.

In 2019, the tech city of Bengaluru in India set out to deploy AI-managed traffic signal systems across the city's 387 traffic signals. The system will use cameras to ascertain traffic density and calculate the time needed to clear the traffic volume, which will determine the signal duration for vehicular traffic across streets.[313]
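The density-to-signal-duration logic described above can be sketched very simply. This is an illustrative toy, not Bengaluru's actual algorithm: the function name, cycle length, and minimum-green floor are assumptions, and rounding may leave a second or two of slack in the cycle.

```python
# Hypothetical sketch of density-proportional signal timing: each
# approach's green phase scales with the vehicle count its camera reports.
def green_times(vehicle_counts, cycle_seconds=120, min_green=10):
    """Split a fixed signal cycle among approaches in proportion to
    measured traffic density, with a floor so no approach starves."""
    total = sum(vehicle_counts)
    if total == 0:
        # No traffic detected: split the cycle evenly.
        return [cycle_seconds // len(vehicle_counts)] * len(vehicle_counts)
    spare = cycle_seconds - min_green * len(vehicle_counts)
    return [min_green + round(spare * c / total) for c in vehicle_counts]

# The busiest approach (30 vehicles) gets the longest green phase.
print(green_times([30, 10, 10, 10]))  # → [50, 23, 23, 23]
```

Production systems additionally smooth counts over time and coordinate adjacent intersections, but proportional allocation with a minimum green time captures the core idea of camera-driven signal duration.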


Here’s what AI experts think will happen in 2020 – The Next Web

It's been another great year for robots. We didn't quite figure out how to imbue them with human-level intelligence, but we gave it the old college try and came up with GPT-2 (the text generator so scary it gives Freddy Krueger nightmares) and the AI magic responsible for these adorable robo-cheetahs:

But it's time to let the past go and point our bows toward the future. It's no longer possible to estimate how much the machine learning and AI markets are worth, because the line between what's an AI-based technology and what isn't has become so blurred that Apple, Microsoft, and Google are all AI companies that also do other stuff.

Your local electricity provider uses AI, and so does the person who takes those goofy real-estate agent pictures you see on park benches. "Everything is AI" is an axiom that will become even truer in 2020.

We solicited predictions for the AI industry over the next year from a panel of experts; here's what they had to say:

AI and humans will collaborate. AI will not replace humans; it will collaborate with humans and enhance how we do things. People will be able to provide higher-level work and service, powered by AI. At Intuit, our platform allows experts to connect with customers to provide tax advice and help small businesses with their books in a more accurate and efficient way, using AI. It helps work get done faster and helps customers make smarter financial decisions. As experts use the product, the product gets smarter, in turn making the experts more productive. This is the decade where, through this collaboration, AI will enhance human abilities and allow us to take our skills and work to a new level.

AI will eat the world in ways we can't imagine today: AI is often talked about as though it is a sci-fi concept, but it is, and will continue to be, all around us. We can already see how software and devices have become smarter in the past few years, and AI has already been incorporated into many apps. AI-enriched technology will continue to change our lives, every day, in what we do and how we operate. Personally, I am busy thinking about how AI will transform finances; I think it will be ubiquitous. Just the same way that we can't imagine the world before the internet or mobile devices, our day-to-day will soon become different and unimaginable without AI all around us, making our lives today seem obsolete and full of unneeded tasks.

We will see a surge of AI-first apps: As AI becomes part of every app, how we design and write apps will fundamentally change. Instead of writing apps the way we have during this decade and adding AI, apps will be designed from the ground up around AI, and will be written differently. Just think of conversational user interfaces (CUI) and how they create a new navigation paradigm in your app. Soon, a user will be able to ask any question from any place in the app, moving it outside of a regular flow. New tools, languages, practices and methods will also continue to emerge over the next decade.

We believe 2020 will be the year that industries that aren't traditionally known as adopters of sophisticated technologies like AI reverse course. We expect industries like waste management, oil and gas, insurance, telecommunications and other SMBs to take on projects similar to the ones usually developed by tech giants like Amazon, Microsoft and IBM. As the enterprise benefits of AI become better known, industries outside of Silicon Valley will look to integrate these technologies.

If companies don't adapt to the current trends in AI, they could see tough times ahead. Increased productivity, operational efficiency gains, market share and revenue are some of the top-line benefits that companies could either capitalize on or miss out on in 2020, depending on their implementation. We expect to see a large uptick in technology adoption and implementation from companies big and small as real-world AI applications, particularly within computer vision, become more widely available.

We don't see 2020 as another year of shiny new technology developments. We believe it will be more about the general availability of established technologies, and that's OK. We'd argue that, at times, true progress can be gauged by how widespread the availability of innovative technologies is, rather than by the technologies themselves. With this in mind, we see technologies like neural networks, computer vision and 5G becoming more accessible as hardware continues to get smaller and more powerful, allowing edge deployment and unlocking new use cases for companies within these areas.

2020 is the year AI/ML capabilities will be truly operationalized, rather than companies pontificating about their abilities and potential ROI. We'll see companies in the media and entertainment space deploy AI/ML to more effectively drive investment and priorities within the content supply chain, and harness cloud technologies to expedite and streamline traditional services required for going to market with new offerings, whether that be original content or direct-to-consumer streaming experiences.

Leveraging AI toolsets to automate garnering insights from deep catalogs of content will increase efficiency for clients and partners, and help uphold the high-quality content that viewers demand. A greater number of studios and content creators will invest in and leverage AI/ML to conform and localize premium and niche content, thereby reaching more diverse audiences in their native languages.

I'm not an industry insider or a machine learning developer, but I covered more artificial intelligence stories this year than I can count. And I think 2019 showed us some disturbing trends that will continue in 2020. Amazon and Palantir are poised to sink their claws into the government surveillance business during what could potentially turn out to be President Donald Trump's final year in office. This will have significant ramifications for the AI industry.

The prospect of an Elizabeth Warren or Bernie Sanders taking office shakes the Facebooks and Microsofts of the world to their core, but companies that are already deeply invested in providing law enforcement agencies with AI systems that circumvent citizen privacy stand to lose even more. These AI companies could be inflated bubbles that pop in 2021; in the meantime, they'll look to entrench with law enforcement over the next 12 months in hopes of surviving a Democrat-led government.

Look for marketing teams to get slicker as AI-washing stops being such a big deal and "AI rinsing" (disguising AI as something else) becomes more common (i.e., Ring is just a doorbell that keeps your packages safe, not an AI-powered portal for police surveillance, wink-wink).

Here's hoping your 2020 is fantastic. And, if we can venture a final prediction: stay tuned to TNW, because we're going to dive deeper into the world of artificial intelligence in 2020 than ever before. It's going to be a great year for humans and machines.


What Chess Can Teach Us About the Future of AI and War – War on the Rocks

This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (part a.), which asks how artificial intelligence will affect the character and/or the nature of war.


Will artificial intelligence (AI) change warfare? It's hard to say. AI itself is not new; the first artificial neural network was designed in 1943. But AI as a critical factor in competitions is relatively novel and, as a result, there's not much data to draw from. However, the data that does exist is striking. Perhaps the most interesting examples are in the world of chess. The game has been teaching military strategists the ways of war for hundreds of years and has been a testbed for AI development for decades.

Military officials have been paying attention. Deputy Defense Secretary Robert Work famously used freestyle (or "Centaur") chess to promote the third offset strategy, in which humans and computers work together, combining human strategy and computer speed to eliminate blunders while allowing humans to focus on the big picture. Since then, AI and supercomputers have continued to reshape how chess is played. Technology has helped to level the playing field: the side with the weaker starting position is no longer at such a disadvantage. Likewise, intimidation from the threat of superhuman computers has occasionally led to some unorthodox behaviors, even in human-only matches.

The experience of AI in the chess world should be instructive for defense strategists. As AI enters combat, it will first be used only in training and in identifying mistakes before they are made. Next, improvements will make it a legitimate teammate, and if it advances to superhuman ability in even narrow domains of warfighting, as it has in chess, then it could steer combat in directions that are unpredictable for both humans and machines.

What Does Chess Say About AI-Human Interaction?

Will AI replace soldiers in war? The experience of using AI and machine learning in chess suggests not. Even though the best chess today is played by computers alone, humans remain the focus of the chess world. The world computer chess championship at the International Conference on Machine Learning in Stockholm attracted a crowd of only three when I strolled by last year. In contrast, the human championship was streamed around the globe to millions. In human-only chess, though, AI features heavily in the planning process, the results of which are called "prep." Militaries are anticipating a similar planning role for AI, and even automated systems without humans rely on a planning process to provide prep for the machines. The shift toward AI for that process will affect how wars are fought.

To start, computers are likely to have an equalizing effect on combat, as they have had in chess. The difference in ability among the top competitors in chess has grown smaller, and moving first has become less advantageous. That was evident in last year's human-only chess championship, where competitors had the closest ratings ever in a championship, and the best-of-12 match had 12 straight draws for the first time. There have been more draws than wins in every championship since 2005, and though it is not exactly known why, many believe it is due to the influence of superhuman computers aiding underdogs, teaching defensive play, or simply perfecting the game.

AI is likely to level the military playing field because progress is being driven by commercial industry and academia, which will likely disseminate their developments more widely than militaries would. That does not guarantee all militaries will benefit equally. Perhaps some countries will have better computers, will be able to pay for more of them, or will have superior data to train with. But the open nature of computing resources makes cutting-edge technology available to all, even if that is not the only reason for equalization.

AI Favors the Underdog and Increases Uncertainty

AI seems to confer a distinct benefit to the underdog. In chess, black goes second and is at a significant disadvantage as a result. Fabiano Caruana, a well-known American chess player, claimed that computers are benefiting black. He added that computer analysis helps reveal many playable variations and moves that were once considered dubious or unplayable. In a military context, the ways to exert an advantage can be relatively obvious, but AI planning tools could be adept at searching and evaluating the large space of possible courses of action for the weaker side. This would be an unwelcome change for the United States, which has benefited from many years of military superiority.

Other theories exist for explaining the underdog's improvement in chess. It may be that computers are simply driving chess toward its optimum outcome, which some argue is a tie. In war it could instead be that perfect play leads to victory rather than a draw. Unlike chess, the competitors are not constrained to the same pieces or set of moves. Then again, in a limited war where mass destruction is off the table, both sides aim to impose their will while restricting their own pieces and moves. If perfect play in managing escalation does lead to stalemate, then AI-enhanced planning or decision-making could drive toward that outcome.

However, superhuman computers do not always drive humans toward perfect play; they can in fact drive them away from it. This happened in a bizarre turn at last year's chess world championship, held in London. The Queen's Gambit Declined, one of the most famous openings that players memorize, was used to kick off the second of the 12 games in the London match, but on the tenth move, the challenger, Caruana, playing as black, didn't choose either of the standard next moves in the progression. During planning, his computers had helped him find a move that past centuries had all but ignored. When the champion Magnus Carlsen, who is now the highest-rated player in history, was asked how he felt upon seeing the move, he recounted being so worried that his actual response can't be reproduced here.

It is not so much that Caruana had found a new move that was stronger than the standard options. In fact, it may have even been weaker. But it rattled Carlsen because, as he said, "The difference now is that I'm facing not only the analytical team of Fabiano himself and his helpers but also his computer help. That makes the situation quite a bit different." Carlsen suddenly found himself in a theater without the aid of electrical devices, having only his analytical might against what had become essentially a superhuman computer opponent.

His response might presage things to come in warfare. The strongest moves available to Carlsen were ones that the computer would have certainly analyzed and his challenger would have prepared for. Therefore, Carlsens best options were either ones that were certainly safe or ones that were strange enough that they would not have been studied by the computer.

When asked afterward if he had considered a relatively obvious option that he didn't choose seven moves later in the game, Carlsen joked, "Yeah, I have some instincts... I figured that [Caruana] was still in prep and that was the perfect combination." Fear of the computer drove the champion, arguably history's best chess player, to forgo a move that appeared to be the perfect combination in favor of a safer defensive position, a wise move if Caruana was in fact still in prep.

In war, there will be many options for avoiding the superhuman computing abilities of an adversary. A combatant without the aid of advanced technology may choose to withdraw or retreat upon observing the adversary doing something unexpected. Alternatively, the out-computed combatant might drive the conflict toward unforeseen situations where data is limited or does not exist, so as to nullify the role of the computer. That increases uncertainty for everyone involved.

How Will the U.S. Military Fare in a Future AI World?

The advantage may not always go to the competitor with the most conventional capabilities, or even to the one that has made the largest computing investment. Imagine the United States fighting against an adversary that can jam or otherwise interfere with communications to those supercomputers. Warfighters may find themselves, like Carlsen, in a theater without the aid of their powerful AI, up against the full analytical might of the adversary and their team of computers. Any unexpected action taken by the adversary at that point (e.g., repositioning their ground troops or launching missile strikes against unlikely locations) would be cause for panic. The natural assumption would be that adversary computers found a superior course of action that had accounted for the most likely American responses many moves into the future. The best options then, from the U.S. perspective, become those that are either extremely cautious, or those that are so unpredictable that they would not have been accounted for by either side.

AI-enabled computers might be an equalizer to help underdogs find new playable options. However, this isn't the only lesson that chess can teach us about the impact of AI-enabled supercomputers on war. For now, while humans still dominate strategy, there will still be times where the computer provides advantages in speed or in avoiding blunders. When the computer overmatch becomes significant and apparent, though, strange behaviors should be expected from the humans.

Ideally, humans deprived of their computer assistants would retreat or switch to safe and conservative decisions only. But the rules of war are not as strict as the rules of chess. If an enemy turns out to be humans aided by feckless computers, instead of superhuman computers aided by feckless humans, it may be wise to anticipate more inventive, perhaps even reckless, human behavior.

Andrew Lohn is a senior information scientist at the nonprofit, nonpartisan RAND Corporation. His research topics have included military applications of AI and machine learning. He is also co-author of How Might Artificial Intelligence Affect the Risk of Nuclear War? (RAND, 2018).

Image: U.S. Marine Corps (Photo by Lance Cpl. Scott Jenkins)


Adobe CTO says AI will ‘democratize’ creative tools – TechCrunch

Adobe CTO Abhay Parasnis sees a shift happening.

A shift in how people share content and who wants to use creative tools. A shift in how users expect these tools to work, especially how much time they take to learn and how quickly they get things done.

I spoke with Parasnis in December to learn more about where Adobe's products are going and how they'll get there, even if it means rethinking how it all works today.

"What could we build that makes today's Photoshop, or today's Premiere, or today's Illustrator look irrelevant five years from now?" he asked.

In many cases, that means a lot more artificial intelligence: AI to flatten the learning curve, allowing the user to command apps like Photoshop not only by digging through menus but by literally telling Photoshop what they want done (as in, with their voice). AI to better understand what the user is doing, helping to eliminate mundane or repetitive tasks. AI to, as Parasnis puts it, "democratize" Adobe's products.

We've seen some hints of this already. Back in November, Adobe announced Photoshop Camera, a free iOS/Android app that repurposes the Photoshop engine into a lightweight but AI-heavy interface that allows for fancy filters and complex effects with minimal effort or learning required of the user. I see it as Adobe's way of acknowledging (flexing on?) the Snapchats and Instas of the world, saying "oh, don't worry, we can do that too."

But the efforts to let AI do more and more of the heavy lifting won't stop with free apps.

"We think AI has the potential to dramatically reduce the learning curve and make people productive: not at the edges, but a 10x, 100x improvement in productivity," said Parasnis.

"The last decade or two decades of creativity were limited to professionals, people who really were high-end animators, high-end designers. Why isn't it for every student or every consumer that has a story to tell? They shouldn't be locked out of these powerful tools only because they're either costly or more complex to learn. We can democratize that by simplifying the workflow."


AI computing will enter the ‘land of humans’ in the 2020s: The promise and the peril | TheHill – The Hill

Indisputably, computers in their myriad forms helped improve our lives in the last century, and especially in the past decade. Much of our interaction with computers, however, has long been stilted and unnatural.

The means of natural interaction we evolved for human communication generally were not of much use in dealing with computers. We had to enter their land to get our work done, be it typing, clicking buttons or editing spreadsheets. While our productivity increased, so did the time we spent in these unnatural modes of interaction. Communicating with computers sometimes is such a soul-draining activity that, over time, we even created special classes of computer data-entry positions.

Thanks to recent strides in artificial intelligence (AI), especially in perceptual intelligence, this is going to change drastically in coming years, with computers entering our land instead of the other way around. They will be able to hear us, to speak back to us, to see us and to show things to us. In an ironic twist, these "advanced" capabilities finally will allow us to be ourselves, and to have computers deal with us in modes of interaction that are natural to us.

We won't need to type to them or to speak in stilted, halting voices. This will make computer assistants and decision-support systems infinitely more human-friendly, as witnessed by the increasing popularity of "smart speakers." As computers enter the land of humans, we might even reclaim some of our lost arts, such as cursive script, since it will become as easy for computers to recognize handwriting as it is for humans.

Granted, the current recognition technology still has many limitations, but the pace of improvement has been phenomenal. Despite having done an undergraduate thesis on speech recognition, I have scrupulously avoided almost all dictation/transcription technologies. Recently, however, the strides in voice transcription have been quite remarkable, even for someone with my accent. In fact, I used the Pixel 4 Recorder to transcribe my thoughts for this article!

Beyond the obvious advantages of easy communication with computer assistants, their entry into our land has other important benefits.

For a long time now, computers have foisted a forced homogenization among the cultures and languages of the world. Whatever your mother tongue, you had to master some pidgin English to enter the land of computers. In the years to come, however, computers can unify us in all our diversity, without forcing us to lose our individuality. We can expect to see a time when two people can speak in their respective mother tongues and understand each other, thanks to real-time AI transcription technology that rivals the mythical Babel Fish from "The Hitchhiker's Guide to the Galaxy." Some baby steps toward this goal are already being taken. I have a WeChat account to keep in touch with friends from China; they all communicate in Chinese, and I still get a small percentage of their communications thanks to the "translate" button.

Seeing and hearing the world as we do will allow computers to take part in many other quotidian aspects of our lives beyond human-machine communication. While self-driving cars still may not be here this coming decade, we certainly will have much more intelligent cars that see the road and the obstacles, hear and interpret sounds and directions, the way we do, and thus provide much better assistance to us in driving. Similarly, physicians will have access to intelligent diagnostic technology that can see and hear the way they themselves do, thus making their jobs much easier and less time-consuming (and giving them more time for interaction with patients!).

Of course, to get computers to go beyond recognition and see the world the way we do, we still have some hard AI problems to solve, including giving computers the common sense that we humans share, and the ability to model the mental states of the humans who are in the loop. The current pace of progress makes me optimistic that we will make important breakthroughs on these problems within this decade.

There is, of course, a flip side. Until now it was fairly easy for us to figure out whether we are interacting with a person or a computer, be it the stilted prose or robotic voice of the latter. As computers enter our land with natural interaction modalities, they can have significant impact on our perception of reality and human relations. As a species, we already are acutely susceptible to the sin of anthropomorphization. Computer scientist and MIT professor Joseph Weizenbaum is said to have shut down his Eliza chatbot when he was concerned that the office secretaries were typing their hearts out to it. Already, modern chatbots such as Woebot are rushing onto the ground where Weizenbaum feared to tread.

Imagine the possibilities when our AI-enabled assistants don't rely on us typing but, instead, can hear, see and talk back to us.

There also are the myriad possibilities of synthetic reality. In order to give us some ability to tell whether we are interacting with a computer or the reality it generated, there are calls to have AI assistants voluntarily identify themselves as such when interacting with humans, which is ironic, considering all of the technological steps we took to get the computers into our land in the first place.

Thanks to the internet of things (IoT) and 5G communication technologies, computers that hear and see the world the way we do can also be weaponized to provide surveillance at scale. Surveillance in the past required significant human power. With improved perceptual recognition capabilities, computers can provide massive surveillance capabilities without requiring much human power.

It's instructive to remember a crucial difference between computers and humans: When we learn a skill, there is no easy way to instantly transfer it to others; we don't have USB connectors to our brains. In contrast, computers do, and thus when they enter our land, they enter all at once.

Even an innocuous smart speaker in our home can invade our privacy. This alarming trend is already seen in some countries such as China, where the idea of privacy in the public sphere is becoming increasingly quaint. Countering this trend will require significant vigilance and regulatory oversight from civil society.

After a century of toiling in the land of computers, we finally will have them come to our land, on our terms. If language is the soul of a culture, our computers will start having first glimpses of our human culture. The coming decade will be a test of how we will balance the many positive impacts of this capability on productivity and quality of life with its harmful or weaponized aspects.

Subbarao Kambhampati, PhD, is a professor of computer science at Arizona State University and chief AI officer for AI Foundation, which focuses on the responsible development of AI technologies. He served as president and is now past-president of the Association for the Advancement of Artificial Intelligence and was a founding board member of Partnership on AI. He can be followed on Twitter @rao2z.



Overcoming Racial Bias In AI Systems And Startlingly Even In AI Self-Driving Cars – Forbes

AI systems can have embedded biases, including in AI self-driving cars.

The news has been replete with stories about AI systems that have been regrettably and dreadfully exhibiting various adverse biases including racial bias, gender bias, age discrimination bias, and other lamentable prejudices.

How is this happening?

Initially, some pointed fingers at the AI developers that craft AI systems.

It was thought that their own personal biases were being carried over into the programming and the AI code that is being formulated. As such, a call for greater diversity in the AI software development field was launched and efforts to achieve such aims are underway.

It turns out, though, that it isn't only the perspectives of the AI programmers that are necessarily the dominant factor involved; many began to realize that the algorithms being utilized were a significant element.

There is yet another twist.

Many of the AI algorithms used for Machine Learning (ML) and Deep Learning (DL) are essentially doing pattern matching, and thus if the data being used to train or prepare an AI system contains numerous examples with inherent biases in them, there's a solid chance those biases will be carried over into the AI system and how it ultimately performs.

In that sense, it's not that the algorithms are intentionally generating biases (they are not sentient); instead, it is the subtle picking up of mathematically hidden biases via the data being fed into the development of an AI system that's based on relatively rudimentary pattern matching.

Imagine a computer system that had no semblance about the world and you repeatedly showed it a series of pictures of people standing and looking at the camera. Pretend that the pictures were labeled as to what kind of occupations they held.

We'll use the pictures as the data that will be fed into the ML/DL.

The algorithm that's doing pattern matching might computationally begin to calculate that if someone is tall, then they are a basketball player.

Of course, being tall doesn't always mean that a person is a basketball player, and thus already the pattern matching is creating potential issues as to what it will do when presented with new pictures and asked to classify what the person does for a living.

Realize too that there are two sides to that coin.

A new picture of a tall person gets a suggested classification of basketball player. In addition, a new picture of a person who is not tall will be unlikely to get a suggested classification of basketball player (the classification approach is not only inclusive but also tends toward being exclusionary).

In lieu of using height, the pattern matching might calculate that if someone is wearing a sports jersey, they are a basketball player.

Once again, this presents issues, since wearing a sports jersey is no guarantee of being a basketball player, or even of being a sports person at all.

Among the many factors that might be explored, it could be that the pattern matching opts to consider the race of the people in the pictures and uses that as a factor in finding patterns.

Depending upon how many pictures contain people of various races, the pattern matching might calculate that a person in occupation X is associated with being a race of type R.

As a result, rather than using height or sports jerseys or any other such factors, the algorithm landed on race as a key element and henceforth will use that factor when trying to classify newly presented pictures.

If you then put this AI system into use, say in an app that lets you take a picture of yourself and ask what kind of occupation you are most suited for, consider the kinds of jobs it might suggest, doing so in a manner that is race biased.

Scarier still is that no one might realize how the AI system is making its recommendations and the race factor is buried within the mathematical calculations.
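To make the mechanism concrete, here is a minimal sketch of how a rudimentary pattern matcher, scanning every available facet of its data, can latch onto a sensitive attribute that merely correlates with the label. Everything below (the features, the numbers, the skew) is synthetic and hypothetical, invented purely for illustration:

```python
# Which single feature best predicts the label? A rudimentary stand-in for
# what a pattern-matching algorithm does when scanning "all available facets".
import random

random.seed(0)

rows = []
for _ in range(1000):
    group = random.randint(0, 1)                     # sensitive attribute
    height = random.gauss(185 if group else 175, 8)  # weakly tied to group
    jersey = random.randint(0, 1)                    # pure noise here
    # In this deliberately skewed sample, the label tracks group membership
    # far more strongly than it tracks height or jersey.
    label = 1 if (group and random.random() < 0.8) or random.random() < 0.05 else 0
    rows.append((height > 180, bool(jersey), bool(group), label == 1))

def rule_accuracy(idx):
    """Accuracy of the rule: predict 'basketball player' iff feature idx is true."""
    return sum(r[idx] == r[3] for r in rows) / len(rows)

scores = {name: rule_accuracy(i) for i, name in enumerate(["tall", "jersey", "group"])}
print(scores)  # on this data, the 'group' rule scores highest
```

Nothing here "programs in" the sensitive attribute; the data alone makes it the strongest single predictor, which is exactly the trap described above.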

Your first reaction to this might be that the algorithm is badly devised if it has opted to use race as a key factor.

The thing is that many of the ML/DL algorithms simply examine, full-throttle, all available facets of whatever the data contains, and therefore it's not as though race was programmed or pre-established as a factor.

In theory, the AI developers and data scientists using these algorithms should be analyzing the results of the pattern matching to try to ascertain in what ways the patterns are being solidified.

Unfortunately, it gets complicated: as the complexity of the pattern matching increases, the patterns are not so clearly laid out that you could readily realize that race or gender or other such properties were mathematically at the root of what the AI system has settled upon.

There is a looming qualm that these complex algorithms, provided with tons of data, are not able to explain or illuminate what factors were discovered and are being relied upon. A growing call for XAI (explainable AI) continues to mount as more and more AI systems are fielded and underlie our everyday lives.

Here's an interesting question: could AI-based true self-driving cars become racially biased (and/or biased on other factors such as age, gender, etc.)?

Sure, it could happen.

This is a matter that ought to be on the list of things that the automakers and self-driving tech firms should be seeking to avert.

Let's unpack the matter.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public is forewarned about a disturbing aspect that's been arising lately: despite human drivers repeatedly posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Biases

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Consider one important act of driving, namely the need to gauge what pedestrians are going to do.

When you drive your car around your neighborhood or downtown area, the odds are that you are looking at pedestrians standing at a corner and waiting to step into the crosswalk, particularly when the crosswalk is not controlled by a traffic signal.

You carefully give a look at those pedestrians because you know from experience that sometimes a pedestrian will go into a crosswalk even when it is not safe for them to cross.

According to the NHTSA (National Highway Traffic Safety Administration), approximately 60% of pedestrian fatalities occur at crosswalks.

Consider these two crucial questions:

By what means do you decide whether a pedestrian is going to cross?

And, by what means do you decide to come to a stop and let a pedestrian cross?

There have been various studies that have examined these questions, and some of the research suggests that at times there are human drivers that will apparently make their decisions based on race.

In one such study by the NITC (National Institute for Transportation and Communities), an experiment was undertaken and revealed that black pedestrians were passed by twice as many cars and experienced wait times that were 32% longer than white pedestrians.

The researchers concluded that the results support the hypothesis that minority pedestrians experience discriminatory treatment by drivers.

Analysts and statisticians argue that you should be cautious in interpreting and making broad statements based on such studies, since there are a number of added facets that come into play.

There is also the aspect of explicit bias versus implicit bias that enters into the matter.

Some researchers believe that a driver might not realize they hold such biases, being unaware explicitly, and yet might implicitly have such a bias; in the split-second decision of whether to keep driving through a crosswalk or to stop and let the pedestrian proceed, there is a reactive and nearly subconscious element involved.

Put aside for the moment the human driver aspects and consider what this might mean when trying to train an AI system.

If you collected lots of data about instances of crosswalk crossings, including numerous examples of drivers that choose to stop for a pedestrian to cross and those that don't stop, and you fed this data into an ML/DL, what might the algorithm land on as a pattern?

Based on the data presented, the ML/DL might computationally calculate that there are occasions when human drivers do and do not stop, and within that, there might be a statistical calculation potentially based on using race as a factor.

In essence, similar to the earlier example about occupations, the AI system might mindlessly find a mathematical pattern that uses race.

Presumably, if human drivers are indeed using such a factor, the chances of the pattern matching doing the same are likely increased, though even if human drivers aren't doing so, it could still become a factor via the ML/DL computations.

Thus, the AI systems that drive self-driving cars can incorporate biases in a myriad of ways, doing so as a result of being fed lots of data and trying to mathematically figure out what patterns seem to exist.

Figuring out that the AI system has come to that computational juncture is problematic.

If the ML/DL itself is essentially inscrutable, you have little chance of ferreting out the bias.

Another approach would be to do testing to try to discern whether biases have crept into the AI system, yet the scale of such testing is bound to be enormous, and it might not be able to reveal the biases, especially if they are subtle and assimilated into other correlated factors.
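One common form such testing takes is a black-box audit: query the system and compare its behavior across pedestrian groups, a demographic-parity check. The sketch below uses a made-up, deliberately biased stand-in for the driving policy (there is no real model here) to show what the audit itself looks like:

```python
# Hedged sketch of one bias audit: compare a policy's stop rate across two
# pedestrian groups. The "model" below is hypothetical and intentionally
# biased so the audit has something to find.
import random

random.seed(1)

def model_decides_to_stop(pedestrian):
    """Stand-in for an opaque ML/DL driving policy; imagine we can only
    query it as a black box, not inspect its internals."""
    base = 0.7
    if pedestrian["group"] == "B":  # hidden, unintended dependence
        base -= 0.25
    return random.random() < base

pedestrians = [{"group": random.choice("AB")} for _ in range(10_000)]

stops = {"A": [], "B": []}
for p in pedestrians:
    stops[p["group"]].append(model_decides_to_stop(p))

rate = {g: sum(v) / len(v) for g, v in stops.items()}
parity_gap = abs(rate["A"] - rate["B"])
print(rate, f"parity gap = {parity_gap:.3f}")
# A gap well above sampling noise flags a disparity worth investigating.
```

As the surrounding text notes, a clean parity number is not an all-clear: a bias laundered through correlated factors can pass this particular check.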

It's a conundrum.

Dealing With The Concerns

Some would argue that the AI developers ought to forego using data and instead programmatically develop the code to detect pedestrians and decide whether to accede to their crossing.

Or, maybe just always come to a stop at a crosswalk for all pedestrians, thus presumably removing any chance of an inherent bias.

Well, there's no free lunch in any of this.

Yes, directly programming the pedestrian detection and choice of crossing is indeed what many of the automakers and self-driving tech firms are doing, though, again, this does not guarantee that some form of bias won't be in the code.

Furthermore, the benefit of using ML/DL is that the algorithms are pretty much already available, and you don't need to write something from scratch. Instead, you pull together the data and feed it into the ML/DL. This is generally faster than the coding-from-scratch approach and might be more proficient, exceeding what a programmer could otherwise write on their own.

In terms of the always-coming-to-a-stop approach, some automakers and self-driving tech firms are using this as a rule of thumb, though you can imagine that it tends to make other human drivers upset and angered at self-driving cars (have you ever been behind a timid driver who always stops at crosswalks? It's a good bet you got steamed), and might lead to an increase in fender benders as driverless cars keep abruptly coming to a stop.

Widening the perspective on AI and self-driving cars, keep in mind that the pedestrian at a crosswalk is merely one such example to consider.

Another commonly voiced concern involves how self-driving cars will choose to get to wherever a human passenger asks to go.

A passenger might request that the AI take them to the other side of town.

Suppose the AI system opts to take a route that avoids a certain part of the town, and then over and over again uses this same route. Gradually, the ML/DL might become computationally stuck-in-a-rut and always take that same path.
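The stuck-in-a-rut dynamic can emerge from nothing more than a purely greedy learner and noisy first impressions. This tiny simulation (route names, reward numbers, and noise levels all invented for illustration) shows two nearly identical routes, one of which ends up absorbing nearly all the trips:

```python
# Toy feedback loop: a greedy route chooser that always re-books whichever
# route currently looks best, and so rarely revisits the alternative.
import random

random.seed(2)

# Two candidate routes of almost identical true quality (made-up scores).
true_quality = {"north_route": 0.50, "south_route": 0.52}

# Each route gets one noisy first impression...
totals = {r: true_quality[r] + random.gauss(0, 0.1) for r in true_quality}
counts = {r: 1 for r in true_quality}

# ...then the learner exploits greedily for 1000 trips.
for _ in range(1000):
    best = max(true_quality, key=lambda r: totals[r] / counts[r])
    totals[best] += true_quality[best]
    counts[best] += 1

print(counts)  # one route ends up with nearly all the trips
```

Nothing in the code prefers either route; the lopsided outcome is pure exploitation without exploration, which is one plausible mechanism for whole neighborhoods dropping out of the routing pattern.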

This could mean that parts of a town will never tend to see any self-driving cars roaming through their neighborhood.

Some worry that this could become a kind of bias or discriminatory practice by self-driving cars.

How could it happen?

Once again, the data being fed into the AI system could be the primary culprit.

Enlarge the view even further and consider that all the self-driving cars in a fleet might be contributing their driving data to the cloud of the automaker or self-driving tech firm that is operating the fleet.

The hope is that by collecting this data from hundreds, or thousands, or eventually millions of driverless cars, it can be scanned and examined to presumably improve the driving practices of the self-driving cars.

Via the use of OTA (Over-The-Air) electronic communications, the data will be passed along up to the cloud, and whenever new updates or patches are needed in the self-driving cars they will be pushed down into the vehicles.

I've already forewarned that this has the potential for a tremendous kind of privacy intrusion, since you need to realize that a self-driving car is loaded with cameras, radar, LIDAR, ultrasonic, thermal, and other data-collecting devices, and is going to be unabashedly capturing whatever it sees or detects during a driving journey.

A driverless car that passes through your neighborhood and goes down your block will tend to record whatever is occurring within its detectable range.

There you are on your front lawn, playing ball with your kids, and the scene is collected onto video and later streamed up to the cloud.

Assuming that driverless cars are pretty much continuously cruising around to be available for those that need a ride, this could end up knitting together a record of our daily efforts and activities.

In any case, could the ML/DL that computationally pattern-matches on this vast set of data be vulnerable to landing on inherently biased elements and then opt to use those by downloading updates into the fleet of driverless cars?



This is a description of a problem that somewhat predates the problem's actual appearance.

There are so few self-driving cars on our roadways that there's no immediate way to know whether or not those driverless cars might already embody any kind of biases.

Until the number of self-driving cars gets large enough, we might not be cognizant of the potential problem of embedded and rather hidden computational biases.

Some people seem to falsely believe that AI systems have common sense and thus wont allow biases to enter into their thinking processes.

Nope, there is no such thing yet as robust common-sense reasoning for AI systems, at least not anywhere close to what humans can do in terms of employing common sense.

There are others that assume that AI will become sentient and presumably be able to discuss with us humans any biases it might have and then squelch those biases.

Sorry, do not hold your breath for the so-called singularity to arrive anytime soon.

For now, the focus needs to be on doing a better job at examining the data that is being used to train AI systems, along with doing a better job at analyzing what the ML/DL formulates, and also pursuing the possibility of XAI that might provide an added glimpse into what the AI system is doing.


Overcoming Racial Bias In AI Systems And Startlingly Even In AI Self-Driving Cars - Forbes


Investment in AI growing as health systems look to the future – Healthcare IT News

Investment in machine learning and artificial intelligence is ramping up across the healthcare industry as multiple players all look to tap into the benefits of deep neural networks and other forms of data-driven analysis.

A number of forward-looking provider organizations made strides with AI in 2019, including Summa Health, a nonprofit health system in Northeast Ohio, and Sutter Health, a health system based in Sacramento, California, to name just two.

Looking forward into 2020, administrative process improvements are expected to be an investment priority, including technologies to help automate business processes like administrative tasks or customer service.

Many in the healthcare ecosystem are already on their way. An October Optum survey of 500 U.S. health industry leaders from hospitals, health plans, life sciences and employers found that 22% of respondents are in the late stages of AI strategy implementation.

According to an Accenture report, the AI healthcare market is expected to reach $6.6 billion by 2021, a compound annual growth rate of 40%, and the analyst firm predicts that, when combined, key clinical health AI applications could potentially create $150 billion in annual savings for the U.S. healthcare economy by 2026.

"Return on investment will be the driving force for AI investments in 2020 for health systems," Kuldeep Singh Rajput, CEO and founder of Boston-based Biofourmis, told Healthcare IT News. "I anticipate that 2020 will be a breakout year for AI investment, but by that, I mean investment by health systems in the right types of AI-driven technology."

He said that when health system leaders consider an AI-driven technology, especially in the emerging value-based care environment, they will give the highest priority to AI technologies that achieve the Institute for Healthcare Improvement's Triple Aim: improving the patient experience of care, including quality and satisfaction; improving the health of populations; and reducing the per capita cost of healthcare.

"Generally speaking, the most powerful and effective types of AI are leveraged to power technologies that bring true clinical and financial ROI, such as digital therapeutics with AI-driven predictive analytics as well as a machine learning component," Rajput said. "Digital therapeutics powered by AI enable more informed clinical decision-making and earlier interventions."

For example, in patients diagnosed with heart failure, health systems can leverage digital therapeutics to follow them after discharge from the hospital or following an ER visit.

"By applying AI-driven predictive analytics to non-clinical and clinical parameters collected via clinical-grade sensors worn by patients in their homes, providers can predict decompensation by detecting subtle physiologic changes from a participant's personalized baseline," he added. "This means interventions can occur two to three weeks earlier than they would have otherwise, potentially preventing a major medical crisis."

"This real-world, rather than theoretical, application of AI also brings real-world ROI, which is attractive to clinical leaders such as CEOs, CIOs and CFOs when they are looking at potential investments in AI," he said.

Nathan Eddy is a healthcare and technology freelancer based in Berlin. Email the writer: nathaneddy@gmail.com. Twitter: @dropdeaded209



European Patent Office Rejects World's First AI Inventor – Forbes

The European patent authorities have rejected an attempt to register an AI as an official inventor.

The possibility's been a subject of debate for some time, and last summer a group of legal experts decided to force the issue. The group, led by Professor Ryan Abbott of the University of Surrey, submitted designs developed by an AI to the authorities in the US, UK and Europe, and later Germany, Israel, Taiwan and China.

The AI concerned, named Dabus, was created by Stephen Thaler, and is described as a connectionist artificial intelligence.

According to its inventors, it 'relies upon a system of many neural networks generating new ideas by altering their interconnections. A second system of neural networks detects critical consequences of these potential ideas and reinforces them based upon predicted novelty and salience.'

It came up with two concepts submitted for patent approval: a new type of drinks container based on fractal geometry, and a device based on a flickering light for attracting attention during search and rescue operations.

"In these applications, the AI has functionally fulfilled the conceptual act that forms the basis for inventorship. There would be no question the AI was the only inventor if it was a natural person," said Abbott.

"The right approach is for the AI to be listed as the inventor and for the AI's owner to be the assignee or owner of its patents. This will reward innovative activities and keep the patent system focused on promoting invention by encouraging the development of inventive AI, rather than on creating obstacles."

However, the European Patent Office failed to agree.

"After hearing the arguments of the applicant in non-public oral proceedings on 25 November the EPO refused EP 18 275 163 and EP 18 275 174 on the grounds that they do not meet the requirement of the EPC that an inventor designated in the application has to be a human being, not a machine," it concluded.

Legal attitudes to AI inventors vary subtly around the world. In the UK, for example, the programmer who came up with the AI is the inventor; in the US, it's the person who came up with the original idea for the invention, with the programmer deemed simply to be facilitating it.

Patent ownership also involves certain responsibilities that an AI would struggle to satisfy, such as renewing patents, updating government records and keeping licensees informed.

While the EU did at one time consider adding 'electronic personality' to the two categories of potential patent owner allowed - 'natural person' and 'legal entity' - it abandoned the idea after receiving a strongly worded letter from more than 150 experts in AI, robotics, IP and ethics.



6 Predictions for the Future of Artificial Intelligence in 2020 – Adweek

The business world's enthusiasm for artificial intelligence has been building toward a fever pitch in the past few years, but those feelings could get a bit more complicated in 2020.

Despite investment, research publications and job demand in the field continuing to grow through 2019, technologists are starting to come to terms with potential limitations in what AI can realistically achieve. Meanwhile, a growing movement is grappling with its ethics and social implications, and widespread business adoption remains stubbornly low.

As a result, companies and organizations are increasingly pushing tools that commoditize existing predictive and image recognition machine learning, making the tech easier to explain and use for non-coders. Emerging breakthroughs, like the ability to create synthetic data and open-source language processors that require less training than ever, are aiding these efforts.

At the same time, the use of AI for nefarious ends, like deepfakes and the mass production of spam, is still in its earliest theoretical stages, but troubling reports indicate such dystopia may become more real in 2020.

Here are six predictions for the tech in this new year:

A high-profile research org called OpenAI grabbed headlines in early 2019 when it proclaimed its latest news-copy generating machine learning software, GPT-2, was too dangerous to publicly release in full. Researchers worried the passably realistic-sounding text generated by GPT-2 would be used for the mass-generation of fake news.

GPT-2 is the most sophisticated of a new type of language generation. It involves a base program trained on a massive dataset: in GPT-2's case, more than 8 million websites, from which it learns the general mechanics of how language works. That foundational system can then be trained on a relatively smaller, more specific dataset to mimic a certain style for uses like predictive text, chatbots or even creative writing aids.
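The two-stage recipe can be miniaturized with a toy bigram model. This is emphatically not GPT-2 (no neural network, and both "corpora" are invented one-liners); it only illustrates the shape of the workflow: build transition statistics on broad text, then continue training the same model on a narrower, style-specific sample:

```python
# Toy bigram language model: "pretrain" broadly, then "fine-tune" narrowly.
import random
from collections import defaultdict

random.seed(3)

counts = defaultdict(lambda: defaultdict(int))

def train(text):
    """Count word-to-next-word transitions (crude 'mechanics of language')."""
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1

def generate(start, n=8):
    """Sample a continuation from the accumulated transition counts."""
    out = [start]
    for _ in range(n):
        nxt = counts.get(out[-1])
        if not nxt:  # dead end: restart from a random known word
            nxt = counts[random.choice(list(counts))]
        out.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
    return " ".join(out)

# Stage 1: "pretrain" on a broad corpus (a stand-in sentence here).
train("the cat sat on the mat and the dog sat on the rug")
# Stage 2: "fine-tune" the same counts on a narrower, style-specific corpus.
train("the traders sat on edge as the fed sat on its hands")

print(generate("the"))
```

The fine-tuning stage simply adds to the statistics learned in pretraining, which is the essential idea scaled up a billionfold in systems like GPT-2.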

OpenAI ended up publishing the full version of the model in November. It called attention to the exciting, if sometimes unsettling, potential of a growing trend in a subfield of AI called natural language processing: the ability to parse and produce natural-sounding human language.

The resource and accessibility breakthrough is analogous to a similar milestone in the subfield of computer vision around 2012, one widely credited with spawning the surge in image and facial recognition AI of the last few years. Some researchers think natural language tech is poised for a similar boom in the next year or so. "It's now starting to emerge," Tsung-Hsien Wen, chief technology officer at a chatbot startup called PolyAI, said of this possibility.

Ask any data scientist or company toiling over a nascent AI strategy what their biggest headache is, and the answer will likely involve data. Machine learning systems perform only as well as the data on which they're trained, and the scale at which they require it is massive.

One reprieve from this insatiable need may come from an unexpected place: an emergent machine learning model currently best known for its role in deepfakes and AI-generated art. Patent applications indicate that brands explored all kinds of uses for this tech, known as a generative adversarial network (GAN), in 2019. But one of its unsung, yet potentially most impactful, talents is its ability to pad out a dataset with mass-produced fake data that's similar but slightly varied from the original material.

"What happens here is that you try to complement a set of data with another kind of data that may not be exactly what you've observed, that could be made up, but that is trustworthy enough to be used in a machine learning environment," said Gartner analyst Erick Brethenoux.
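Training an actual GAN is well beyond a snippet, but the padding idea itself can be sketched. A GAN learns what "similar but slightly varied" means from the data; below, simple Gaussian jitter around a tiny hypothetical dataset stands in for that learned resemblance:

```python
# Synthetic-data padding, sketched: fake samples that resemble the real
# distribution. (A GAN would learn the resemblance; here we just sample
# around the mean and spread of the real data.)
import random
import statistics

random.seed(4)

# Hypothetical "real" measurements; far too few to train on by themselves.
real = [170.2, 165.5, 180.1, 175.3, 168.9]

def synthesize(samples, n):
    """Produce n made-up points statistically similar to the real ones."""
    mu = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return [random.gauss(mu, sd) for _ in range(n)]

augmented = real + synthesize(real, 100)
print(len(augmented), round(statistics.mean(augmented), 1))
```

The augmented set keeps the real data's overall statistics while giving a learner twenty times more points to chew on, which is the "complement a set of data" move Brethenoux describes.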




Opinion: AI, privacy and APIs will mould digital health in 2020 – MobiHealthNews

About the author: Anish Sebastian co-founded Babyscripts in 2013, which has partnered with dozens of health systems for its data-centric model in prenatal care. As CEO of the startup, Anish has focused his efforts on product and software development, as well as evidence-based validation of their product. Prior to this, he founded a research analytics startup and served as a senior tech consultant at Deloitte.

Last month saw the rollout of the latest upgrades to Amazon's Echo speaker line: earbuds, glasses and a ring that connect to Amazon's personal assistant Alexa. These new products are just three examples of a growing trend to incorporate technology seamlessly into our human experience, representing the ever-expanding frontiers for technology that have moved far past the smartphone.

These trends and others are going to make a big impact in the healthcare space, especially as providers, payers and consumers alike slowly but surely recognize the need to incorporate tech into their workflows to meet the growing consumer demand for digital health tools. At the same time, the data-hungry nature of these innovations is creating its own problems, driving a discussion around privacy and security that is louder and more urgent than ever.

Here are three trends to look out for in the coming year:

It's been quite a few years since AI emerged from the pages of science fiction into our day-to-day reality, and the healthcare industry has provided a fertile proving ground for all aspects of its innovations. From software that analyzes medical data to identify patients for clinical trials in minutes, to software that analyzes medical images to diagnose tumors in milliseconds; from chatbots that perform administrative tasks like setting up an appointment to chatbots that empathize with human emotion and manage mental anxiety; AI in digital health has evolved by leaps and bounds.

In 2020, we will continue to see AI and ML push boundaries, while at the same time mature and settle into more defined patterns.

With the adoption of technologies like FaceID, facial recognition technology will be an important player in privacy and security, intimate concerns of the healthcare field. It can be leveraged to drastically simplify the security requirements that make multi-factor authentication a time-consuming process for healthcare professionals: on average, doctors spend 52 hours a year just logging in to EHR systems. On the patient end, this same technology has the ability to detect patients' emotional states and anticipate needs based upon them, and the success of startups like Affectiva, the brainchild of MIT graduates, shows the tremendous promise of deep learning for these patient needs.

Then there's the tremendous capability of AI to accumulate massive amounts of data from monitoring systems, matched only by its ability to process and analyze this data. We're going to see AI play a major role in developing predictive algorithms to improve clinical interventions and mediate hospital readmissions.

Meanwhile, FDA-approved innovations from Microsoft and others claim that computer vision can assist radiologists and pathologists in identifying tumors and abnormalities of the heart. While robotic primary care is a long way off, some view AI as a rival to more niche clinical positions.

The progress and traction of AI and ML raise lots of questions: Can algorithms predict the risk of sepsis better than trained ICU clinicians? Can computer vision replace the work of the radiologist and pathologist? And even if that is the case, will consumers have difficulty buying into the power and promise of AI? The answers seem to rest in the industry working with stakeholders and policymakers to develop the right frameworks for monitoring and regulating the use of AI.

2019 witnessed the fallout of the Cambridge Analytica scandal, and added several high-profile data concerns of its own: Amazon workers paid to listen to Alexa recordings, for example, and the transfer of non-deidentified personal health data of more than 50 million Americans to Google.

As the current generation, fueled by smartphones, smart speakers, smart homes, smart everything, wakes up to the serious challenges to privacy that these technological efficiencies are potentially introducing, they're educating themselves about data sharing and becoming more cautious about the information they are potentially handing to third-party sites.

For companies that deal with special categories of sensitive data, like medical information, the stakes are much higher. Access to information such as mental health, sex life, family planning, history of disease, physical wellness, etc. could potentially jeopardize users' job opportunities and promotions, and may even engender or perpetuate discrimination in the workplace.

In 2020, look for digital healthcare to establish increasingly tight security, clearly communicate privacy policies and provide more transparency around data use.

Interoperability is a major player in health tech innovation: patients will always receive care across multiple venues, and secure data exchange is key to providing continuity of care. Standardized APIs can provide the technological foundations for data sharing, extending the functionality of EHRs and other technologies that support connected care. Platforms like Validic Inform leverage APIs to share patient-generated data from personal health devices to providers, while giving them the ability to configure data streams to identify actionable data and automate triggers.

In the upcoming year, look for major players like Apple and Google to make strides toward interoperability and breaking down data silos. Apple's Health app is already capable of populating with information from other apps on your phone. Add your calorie intake to a weight-loss app? Time your miles with a running app? Monitor your bedtime habits with a sleep-tracking app? You'll find that info aggregating in your Health app.

Apple is uniquely positioned to be the driver of interoperability, and Google is not far behind. They have a secure and established platform, trustworthy for the passage of encrypted data (such as patient portals), and command a brand loyalty ubiquitous in the US and elsewhere, not to mention pre-established relationships with the hospitals that are critical to making any true strides in that direction. It's a position that Apple has deliberately cultivated: as smartphone innovation falls into stalemate, they're reaching toward bigger horizons; in Tim Cook's words, improving health will be Apple's greatest contribution to mankind.

These trends in digital health are not new. As with any innovations in healthcare, the process is slow and the cost of the payoff hotly debated, yet it is no longer a question of if, but when these innovations will start optimizing care, whether we like it or not.



AI creativity will bloom in 2020, all thanks to true web machine learning – The Next Web

Machine learning has been trotted out as a trend to watch for many years now. But there's good reason to talk about it in the context of 2020, and that's thanks to developments like TensorFlow.js: an end-to-end open source machine learning library that is capable of, among other features, running pre-trained AI directly in a web browser.

Why the excitement? It means that AI is becoming a more fully integrated part of the web; a seemingly small and geeky detail that could have far reaching consequences.

Sure, we've already got examples aplenty of web tools that use AI: speech recognition, sentiment analysis, image recognition, and natural language processing are no longer earth-shatteringly new. But these tools generally offload the machine learning task to a server, wait for it to compute, and then send back the results.

That's fine and dandy for tasks that can forgive small delays (you know the scenario: you type a text in English, then patiently wait a second or two to get it translated into another language). But this browser-to-server-to-browser latency is the kiss of death for more intricate and creative applications.

Face-based AR lenses, for example, need to track the user's face instantaneously and continually, making any delay an absolute no-go. But latency is a major pain in simpler applications too.

Not so long ago, I tried to develop a web app that, through a phone's back-facing camera, was constantly on the lookout for a logo; the idea being that when the AI recognized the logo, the site would unlock. Simple, right? You'd think so. But even this seemingly straightforward task meant constantly taking camera snapshots and posting them to servers so that the AI could recognize the logo.

The task had to be completed at breakneck speed so that the logo was never missed when the user's phone moved. This resulted in tens of kilobytes being uploaded from the user's phone every two seconds: a complete waste of bandwidth and a total performance killer.

But because TensorFlow.js brings TensorFlow's server-side AI solution directly into the web, if I were to build this project today, I could run a pre-trained model that lets the AI recognize the given logo in the user's phone browser. No data upload needed, and detection could run a couple of times per second, not a painful once every two seconds.
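The bandwidth math behind that comparison is easy to sketch. The figures below (30 KB per snapshot, an upload every two seconds, five on-device detections per second) are illustrative assumptions, not measurements from the article:

```python
def server_polling_upload_mb_per_hour(snapshot_kb: float = 30.0,
                                      seconds_between_uploads: float = 2.0) -> float:
    """Data uploaded per hour when every camera frame must travel to a server."""
    uploads_per_hour = 3600 / seconds_between_uploads
    return uploads_per_hour * snapshot_kb / 1024  # KB -> MB

def detections_per_hour(rate_hz: float) -> int:
    """How many recognition attempts an architecture gets per hour."""
    return int(rate_hz * 3600)

# Server polling: ~52.7 MB/h uploaded for only 1,800 detection attempts.
# In-browser inference: zero upload, and at 5 Hz, 18,000 attempts per hour.
```

Even with these modest assumptions, the in-browser approach does ten times the detection work while uploading nothing at all.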

The more complex and interesting the machine learning application, the closer to zero latency we need to be. So with the latency-removing TensorFlow.js, AI's creative canvas suddenly widens; something beautifully demonstrated by the Experiments with Google initiative. Its human skeleton tracking and emoji scavenger hunt projects show how developers can get much more inventive when machine learning becomes a properly integrated part of the web.

The skeleton tracking is especially interesting. Not only does it provide an inexpensive alternative to Microsoft Kinect, it also brings that capability directly onto the web. We could even go as far as developing a physical installation that reacts to movement using web technologies and a standard webcam.

The emoji scavenger hunt, on the other hand, shows how mobile websites running TensorFlow.js can suddenly become aware of the phone user's context (where they are, what they see in front of them) and contextualize the information they display as a result.

This potentially has far-reaching cultural implications too. Why? Because people will soon begin to understand mobile websites more as assistants than mere data providers. It's a trend that started with Google Assistant and Siri-enabled mobile devices.

But now, thanks to true web AI, this propensity to see mobiles as assistants will become fully entrenched once websites, especially mobile websites, start performing instantaneous machine learning. It could trigger a societal change in perception, where people will expect websites to provide utter relevance for any given moment, but with minimal intervention and instruction.

Hypothetically speaking, we could also use true web AI to develop websites that adapt to people's ways of using them. By combining TensorFlow.js with the Web Storage API, a website could gradually personalize its color palette to appeal more to each user's preferences. The site's layout could be adjusted to be more useful. Even its contents could be tweaked to better suit each individual's needs. And all on the fly.

Or imagine a mobile retail website that watches the user's environment through the camera and then adjusts its offering to match the user's situation. Or what about creative web campaigns that analyze your voice, like Google's Freddie Meter?

With all these tantalizing possibilities on the brink of becoming a reality, it's a pity we've had to wait so long for a proper web-side machine learning solution. Then again, it was this insufficient AI performance on mobile devices that pushed the product development of TensorFlow (as in server-side TensorFlow, the .js version's predecessor) toward being a truly integrated part of the web. And now that we finally have the gift of true web machine learning, 2020 could well be the year that developers unleash their AI creativity.

Go here to read the rest:

AI creativity will bloom in 2020, all thanks to true web machine learning - The Next Web

US Restricts Export of AI Related to Geospatial Imagery – Tom’s Hardware

The U.S. Bureau of Industry and Security announced yesterday that it would restrict the export of artificial intelligence-related technologies beginning January 6. That might seem like bad news for the American tech industry, but it's actually not as bad as it could've been, because right now the restrictions only apply to geospatial imagery.

Those restrictions won't prohibit U.S. tech companies from exporting AI products related to geospatial imagery outright. The rules allow for the export of such tech to Canada, for example, and companies can apply for licenses to export their wares to other countries. There's just no guarantee those licenses will be granted.

James Lewis, from the Center for Strategic and International Studies think tank, told Reuters that the Bureau of Industry and Security essentially wants "to keep American companies from helping the Chinese make better AI products that can help their military." He said the U.S. fears the possibility of AI-controlled targeting systems.

The restrictions essentially just give the U.S. government more control over certain technologies that could give other countries a military advantage. While some companies might chafe under those restrictions, especially if their shareholders aren't pleased, it's not uncommon for governments to enforce these kinds of rules.

Things could have been much worse. AI has become a central part of many services, and it's possible to run AI on nearly any kind of hardware if you're patient enough, so broader restrictions could've caused problems for much of the industry. Instead, the U.S. government introduced a narrow rule that applies to specific tech.

But that might not always be the case. Reuters reported in December that the U.S. was considering other rules that would also limit the export of technologies related to quantum computing, Gate-All-Around Field-Effect transistors, 3D printing, and chemical weapons. (Which, again, isn't that surprising.)

Read more from the original source:

US Restricts Export of AI Related to Geospatial Imagery - Tom's Hardware

4 Steps To Shape Your Business With AI – Forbes

While artificial intelligence (AI) has been around for many years, deployment has been picking up. Between 2017 and 2018, consulting firm McKinsey & Co. found the percentage of companies embedding at least one AI capability in their business processes more than doubled, to 47 percent from 20 percent the year before.

Although companies are adopting it, they often lack a clear plan: A recent IDC survey found that of the companies already using AI, only 25 percent had an enterprise-wide strategy on how to implement it.

To help navigate that challenge, here's how the four pillars of Google Cloud's Deployed AI vision can reshape your business.

For AI to be deployed effectively, it must be focused on a new business problem or unrealized opportunity. To that end, there are several areas of business that the technology is well-suited to address.

One key problem is fixing aging processes, notes Ritu Jyoti, program vice president of Artificial Intelligence Strategies for research firm IDC.

"Companies that have been around for a long time will have a lot of archaic processes, and they need to be upgraded," Jyoti says.

Bank fraud is one prominent example. Machine learning (ML), a subset of AI, provides an opportunity to solve this problem by helping banks sort through large amounts of bank transactions to detect suspicious patterns of financial activity.
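As a toy illustration of the pattern detection described above, a simple z-score filter over transaction amounts can flag outliers. Real fraud systems use far richer features and trained models; the threshold here is an arbitrary choice, and the function is purely a sketch:

```python
import statistics

def flag_suspicious(amounts, threshold=2.0):
    """Flag transaction amounts that deviate from the account's mean by more
    than `threshold` population standard deviations -- a stand-in for the
    learned pattern detection that production fraud models perform."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:  # all amounts identical: nothing stands out
        return []
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# A single $5,000 charge among ~$20 purchases gets flagged:
# flag_suspicious([20, 25, 22, 24, 21, 5000]) -> [5000]
```

The point is not the statistics but the workflow: the model surfaces candidates, and a human analyst (or downstream rule) decides what to do with them.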

Customer relationships are another area AI can improve. "Chatbots, for example, are enhancing customer service by providing support 24/7," Jyoti says. Furthermore, companies can also use AI to develop the right incentives for customers without losing money. Firms sometimes lose income due to lapsed contracts or stuck deals, in which a transaction is started but cannot be completed, Jyoti notes.

"This feature helps organizations optimize early payment discount offers by using ML algorithms to find the right balance between incentivizing customers while ensuring profitability for the seller," Jyoti says.

Another business problem where AI can help is document processing, including insurance claims, tax returns and mortgage applications, which can involve hundreds of pages of documents on income and assets, notes Vinod Valloppillil, Google's head of product for Google Cloud Language AI, who spoke at the Forbes CIO Next conference.

"[Document processing] is one of the few domains that actually brings in multiple parts of AI all simultaneously," Valloppillil says. "It incorporates computer vision, deep learning and natural language processing."

When deploying AI to solve a business problem, the technology should be central to that solution. Many examples across industriesincluding healthcare and energyexemplify how innovative problem-solving can hinge on AI.

The medical industry, for instance, is turning to AI to build algorithms to detect pneumonia. With genomic data bringing insights into who will be susceptible to various disease conditions, disease prevention is one area that can't be solved without AI. An AI platform can also become part of an end-to-end solution when hospitals need to connect medical data to cloud platforms.

AI can also help physicians determine whether a patient has diabetic retinopathy, Valloppillil says. An Explainable AI model would help determine if screening was necessary based on the appearance of various regions of the image.

"We're getting to the point where AI can do quite a bit of engineering," Valloppillil says. Meanwhile, the energy sector has found AI to be essential to keeping wind facilities safer, faster, and more accurate, according to Andrés Gluski, president and CEO of AES, a global power company. Drones and Cloud AutoML Vision, a platform that provides advanced visual intelligence and custom ML models, make these improvements in wind energy possible.

Once you've identified the business problem and decided to use AI in the solution, the next step is building customer trust and maintaining proper ethics. In a Deloitte survey of 1,100 IT and line-of-business executives, 32 percent placed ethical risks among their top three AI-related concerns.

To build trust, ethics should come ahead of any productivity or financial gains from using AI. A crucial part of that is being transparent about how a company uses AI. Consulting firm Capgemini recommends using opt-in forms to help build transparency with customers regarding AI. Meanwhile, privacy laws like the European Union's General Data Protection Regulation (GDPR) also contribute to the transparency requirements for AI.

Jyoti also recommends including fact sheets, similar to the nutrition information on food packaging, with details like data sources and lineage. A set of AI Principles from Google helps businesses ensure that they're using AI in a responsible manner and that they understand the limits of the technology. Users of AI should maintain accountability to human direction, uphold standards set by scientists and test for safety.

"These are the principles we as a company orient around, things like always optimize around the fairness of AI, and try to avoid any situation where AI can get abused," Valloppillil says.

Finally, to ensure a cycle of improvement, companies should use clear, objective metrics to assess progress towards their business goals.

For example, if AI is used to assist with résumé screening in the hiring process, make sure the screening adheres to company policies on equal opportunity to maintain fairness. To form an objective metric, come up with representative numbers of candidates for various demographics and train ML algorithms accordingly.
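One concrete way to turn that advice into an objective metric is to track per-group selection rates and their ratio. This is a generic fairness check, not a description of any tool named in the article, and the group labels below are illustrative:

```python
from collections import Counter

def selection_rates(candidates):
    """candidates: iterable of (group, passed_screening) pairs.
    Returns the fraction of each group that passed -- one measurable,
    auditable number per demographic group."""
    totals, passed = Counter(), Counter()
    for group, ok in candidates:
        totals[group] += 1
        if ok:
            passed[group] += 1
    return {g: passed[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate. Values near
    1.0 indicate parity; the informal 'four-fifths rule' in US employment
    guidance treats ratios below 0.8 as a warning sign."""
    return min(rates.values()) / max(rates.values())
```

Computing these numbers on every screening batch gives a clear, objective trigger for review, rather than relying on ad hoc judgments about fairness.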

"Avoid the blinders of homogeneous teams," Jyoti says. Tools and frameworks like Explainable AI can help companies build inclusive systems that address bias, which arises when the data is not representative of the decisions a business is trying to make. "This makes the problem of 'garbage in, garbage out' multiplied a hundredfold," Valloppillil says.

The concept of explainability helps provide insight into the decisions that AI helps deliver.

"With explainability we're now finally getting to the point where we go peek inside the box," Valloppillil says. "We actually have a shot at understanding exactly why does AI make the call."

As AI continues to evolve, there are increasing opportunities for the technology to meaningfully improve business operations. With ethical implications in mind and a clear focus on measurable metrics, Deployed AI is poised for growth.

Originally posted here:

4 Steps To Shape Your Business With AI - Forbes

The Future Of Work NowMedical Coding With AI – Forbes

Thomas H. Davenport and Steven Miller

The coding of medical diagnosis and treatment has always been a challenging issue. Translating a patients complex symptoms, and a clinicians efforts to address them, into a clear and unambiguous classification code was difficult even in simpler times. Now, however, hospitals and health insurance companies want very detailed information on what was wrong with a patient and the steps taken to treat them for clinical record-keeping, for hospital operations review and planning, and perhaps most importantly, for financial reimbursement purposes.

More Codes, More Complexity

The current international standard for medical coding is ICD-10 (the tenth version of the International Classification of Disease codes), from the World Health Organization (WHO). ICD-10 has over 14,000 codes for diagnoses. The next update to this international standard, ICD-11, was formally adopted by WHO member states in May 2019, and member states, including the US, will begin implementing it as of January 2022. The new ICD-11 has over 55,000 diagnostic codes, four times the number contained in the WHO's ICD-10.


In fact, there are even substantially more codes than the numbers given above, at least in the United States. An enhanced version of ICD-10 that is specific to usage in the United States has about 140,000 classification codes: about 70,000 for diagnoses and another 70,000 for classifying treatments. We expect the enhanced, US-specific version of ICD-11 to have at least several times the number of codes in the WHO version of ICD-11, given that the US version also includes treatment codes and has previously included a larger number of diagnostic codes as well.

No human being can remember all the codes for diseases and treatments, especially as the number of codes has climbed over the decades to tens of thousands. For decades, medical coders have relied on code books to look up the right code for classifying a disease or treatment. Thumbing through a reference book of codes obviously slowed down the process. And it is not just a matter of finding the right code. There are interpretation issues. With ICD-10 and prior versions of the classification scheme, there is often more than one way to code a diagnosis or treatment, and the medical coder has to decide on the most appropriate choices.

Over the past 20 years, the usage of computer-assisted coding systems has steadily increased across the healthcare industry as a means of coping with the increasing complexity of coding diagnoses and treatments. More recent versions of computer-assisted coding systems have incorporated state-of-the-art machine learning methods and other aspects of artificial intelligence to enhance the system's ability to analyze the clinical documentation (charts and notes) and determine which codes are relevant to a particular case. Some medical coders now work hand-in-hand with AI-enhanced computer-assisted coding systems to identify and validate the correct codes.
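To make the idea concrete, here is a deliberately tiny sketch of code suggestion with traceable pointers back to the chart text. The ICD-10 codes in the table are real category codes, but the keyword lookup is purely illustrative; production computer-assisted coding systems use trained NLP models, not phrase matching:

```python
# Illustrative keyword table: real ICD-10 codes, hypothetical trigger phrases.
ICD10_KEYWORDS = {
    "I50.9": ["congestive heart failure", "heart failure"],  # heart failure, unspecified
    "E11.9": ["type 2 diabetes", "diabetes mellitus"],       # type 2 diabetes w/o complications
    "J18.9": ["pneumonia"],                                  # pneumonia, unspecified organism
}

def suggest_codes(chart_text):
    """Return candidate codes mapped to the phrase that triggered each one,
    so a human coder can trace every suggestion back to the chart."""
    text = chart_text.lower()
    hits = {}
    for code, phrases in ICD10_KEYWORDS.items():
        for phrase in phrases:
            if phrase in text:
                hits[code] = phrase
                break
    return hits
```

Note how such a matcher reproduces exactly the failure mode described later in this article: a mention of congestive heart failure in the patient's history triggers the code whether or not that condition is being treated now, which is why the human review step remains essential.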

Elcilene Moseley and AI-Assisted Coding

Elcilene Moseley lives in Florida and is an 11-year veteran medical coder. She previously worked for a company that owned multiple hospitals, but she now works for a coding services vendor that has a contract for coding in the same hospitals Moseley used to work for. She does her work from home, generally working for eight hours to do a certain number of patient charts per day. She specializes in outpatient therapies, often including outpatient surgeries.

Moseley is acutely aware of the increased complexity of coding and is a big supporter of the AI-enhanced computer-assisted coding system, developed by her employer, that suggests codes for her to review. "It's gotten so detailed: right side, left side, fracture displaced or not. There's no way I can remember everything." However, she notes, AI only goes so far. For example, the system may process the text in a chart document, note that the patient has congestive heart failure, and select that disease as a code for diagnosis and reimbursement. But that particular diagnosis is in the patient's history, not what he or she is being treated for now. "Sometimes I'm amazed at how accurate the system's coding is," she says. "But sometimes it makes no sense."

When Moseley opens a chart, on the left side of each page there are codes with pointers to where each code came from in the chart report. Some coders don't bother to read the patient chart from beginning to end, but Moseley believes it's important to do so. "Maybe I am a little old fashioned," she admits, "but it's more accurate when I read it." She acknowledges that the system "makes you faster, but it can also make you a little lazy."

Some patient cases are relatively simple to code, others more complex. "If it's just an appendectomy for a healthy patient," Moseley says, "I can check all the codes and get through it in five minutes." This is despite multiple sections on a chart for even a simple surgery, including patient physical examination, anesthesiology, pathology, etc. On the other hand, she notes:

If it's a surgery on a 75-year-old man with end-stage kidney disease, diabetes, and cancer, I have to code their medical history, what meds they are taking; it takes much longer. And the medical history codes are important because if the patient has multiple diagnoses, it means the physician is spending more time. Those evaluation and management codes are important for correctly reimbursing the physician and the hospital.

Moseley and other coders are held to a 95% coding quality standard, and their work is audited every three months to ensure they meet it.

When Moseley first began to use AI-enhanced coding a couple of years ago, she was suspicious of it because she thought it might put her out of a job. Now, however, she believes that will never happen and that human coders will always be necessary. She notes that medical coding is "so complex and there are so many variables, and so many specific circumstances." Due to this complexity, she believes that coding will never be fully automated. She has effectively become a code supervisor and auditor, checking the codes the system assigns and verifying whether its recommendations are appropriate for the specific case. In her view, all coders will eventually transition to the role of auditor and supervisor of the AI-enabled coding system. The AI system simply makes coders too productive not to use it.

Educating Coders

Moseley has a two-year Associate of Science degree in Medical Billing and Coding. In addition, she holds several different coding certifications, both for general coding and in specialty fields like emergency medicine. Keeping the certifications active requires regular continuing education units and tests.

Not all coders, however, have this much training. Moseley says that there are lots of sketchy schools that offer online training in medical coding. They often overpromise about the prospects of a lucrative job, with an annual salary of up to $100,000, if a student takes a coding course for six months. Working from home is another appealing aspect of these jobs.

The problem is that hospitals and coding services firms want experienced coders, not entry-level hires with inadequate training. The more straightforward and simpler coding decisions are made by AI; more complex coding decisions and audits require experts. The newbies may be certified, Moseley says, but without prior experience they have a difficult time getting jobs: it would require too much on-the-job training by their employers to make them effective. The two professional associations for medical coding, AAPC (originally the American Academy of Professional Coders) and AHIMA (the American Health Information Management Association), both have Facebook pages where their members discuss issues in the coding field. Moseley says these are replete with complaints about the inability to find the entry-level jobs promised by the coding schools.

For Elcilene Moseley, however, coding, especially with the help of AI, is a good job. She finds it interesting and relatively well paid. Her work at home, at any hour of the day or night, provides her with a high level of flexibility. And were she ever to dislike her current position, she is constantly approached by headhunters about other coding jobs. Moseley argues that the only medical coders suffering from the use of AI are those at the entry level and those who refuse to learn the new skills needed to work with a smart machine.

Steven Miller is a Professor of Information Systems and Vice Provost for Research at Singapore Management University.

View post:

The Future Of Work NowMedical Coding With AI - Forbes

THE AI IN INSURANCE REPORT: How forward-thinking insurers are using AI to slash costs and boost customer satis – Business Insider India

The insurance sector has fallen behind the curve of financial services innovation - and that's left hundreds of billions in potential cost savings on the table.

The most valuable area in which insurers can innovate is the use of artificial intelligence (AI): It's estimated that AI can drive cost savings of $390 billion across insurers' front, middle, and back offices by 2030, according to a report by Autonomous NEXT seen by Business Insider Intelligence. The front office is the most lucrative area to target for AI-driven cost savings, with $168 billion up for grabs by 2030.

In the AI in Insurance Report, Business Insider Intelligence will examine AI solutions across key areas of the front office - customer service, personalization, and claims management - to illustrate how the technology can significantly enhance the customer experience and cut costs along the value chain. We will look at companies that have accomplished these goals to illustrate what insurers should focus on when implementing AI, and offer recommendations on how to ensure successful AI adoption.

The companies mentioned in this report are: IBM, Lemonade, Lloyd's of London, Next Insurance, Planck, PolicyPal, Root, Tractable, and Zurich Insurance Group.


See the original post:

THE AI IN INSURANCE REPORT: How forward-thinking insurers are using AI to slash costs and boost customer satis - Business Insider India

Samsung and LG go head to head with AI-powered fridges that recognize food – The Verge

Get ready for a smart fridge showdown at CES 2020, because Samsung and LG will both be unveiling fridges with added artificial intelligence capabilities this year. Samsung's latest edition of its Family Hub refrigerator and LG's second-generation InstaView ThinQ fridge both tout AI-equipped cameras that can identify food. The idea is that the cameras can scan what's inside and let users know what items they're short on, even making meal suggestions based on the ingredients they still have.

Samsung's Family Hub smart fridge was first unveiled at CES 2016, and since then, the company has been rolling out updated iterations with Bixby support, SmartThings integration, and AKG speakers. The latest edition adds software upgrades to enable AI image recognition in its View Inside cameras.

Before, the cameras let users see what's in their fridges from their smartphones, a useful feature if you happen to be out grocery shopping and can't remember what you need to stock up on. With the AI-enabled updates, Family Hub will supposedly make these recommendations for you on its own, identifying which ingredients you're low on. Though it's to be determined how well the image recognition will work: for example, how will it deal with ingredients stored in tubs of Tupperware?

The software upgrades also include improved meal planning with the help of Whisk, a food tech startup Samsung acquired last year. Whisk lets users plan meals for up to a week and then creates smart shopping lists using ingredients that apply to multiple recipes.

Finally, the huge built-in touchscreen that can be used as a virtual bulletin board can now support video clips, as well as mirror content from Samsung TVs and phones. That means you can watch vertical videos like IGTV on your Samsung fridge, as God intended.

LG is showing off two models of its InstaView fridges, both of which feature a 22-inch display that can turn transparent to let users see what's inside without opening the door and letting the cold air out. There's the AI-equipped InstaView ThinQ and the InstaView with Craft Ice, which makes fancy, two-inch spherical ice balls. Those are supposed to melt slower than regular ice, if that's a problem that you have. The InstaView with Craft Ice was released in the US last year, but will now be available in more markets.

There's no pricing information yet, but based on the prices of LG's and Samsung's previous fridge models, customers can expect prices to range from $4,500 to $6,000. Samsung says its Family Hub updates will be available in the spring.

I'm not opposed to the idea of a huge Wi-Fi-connected touchscreen on a fridge; in fact, it seems like a genuinely useful way to look up recipes or display cute photos and videos. But I'm skeptical of how well the AI will identify different ingredients, and whether using a computer to see what items you're low on is really better than just taking a look for yourself.

See the original post here:

Samsung and LG go head to head with AI-powered fridges that recognize food - The Verge

Nepal should gamble on AI – The Phnom Penh Post

A statue of Beethoven by German artist Ottmar Hoerl stands in front of a piano during a presentation of part of the completion of Beethoven's 10th symphony, made using artificial intelligence, at the Telekom headquarters in Bonn, western Germany. AFP

Artificial intelligence (AI) is an essential part of the fourth industrial revolution, and Nepal can still catch up to global developments.

The last decade saw two significant events: Nepal promulgated a new constitution after decades of instability and is now aiming for prosperity, while AI saw a resurgence through deep learning, impacting a wide variety of fields. Though unrelated, one can help the other: AI can help Nepal in its quest for development and prosperity.

AI was conceptualised during the 1950s and has since seen various phases. The concept caught the public's imagination, and hundreds, if not thousands, of movies and novels were created around the idea of a machine's intelligence being on par with a human's.

But human intelligence is a very complex phenomenon and is diverse in its abilities, from rationalisation to recognising a person's face. Even the seemingly simple task of recognising faces captured at different camera angles was considered a difficult challenge for AI as late as the first decade of this century.

However, thanks to better algorithms, greater computation capabilities and loads of data from the internet and social media giants, current AI systems are now capable of performing such facial recognition tasks better than humans. Other exciting landmarks include surpassing trained medical doctors in diagnosing diseases from X-ray images and a self-taught AI beating professionals in the strategy board game Go. Although AI may still be far from general human intelligence, these examples should be more than enough reason to start paying attention to the technology.

The current leap in AI is now considered an essential ingredient of the fourth industrial revolution. The first industrial revolution, powered by the steam engine, started in Great Britain during the late 1700s, quickly expanded to other European countries and America, and led to rapid economic prosperity. This further opened floodgates of innovation and wealth creation, leading to the second and third industrial revolutions. A case study of this could be the relationship between Nokia and Finland.

Both were faring miserably in economic terms in the late 1980s. But both the company and the country gambled on GSM technology, which later went on to become the world's dominant network standard. In the single decade that followed, Finland achieved unprecedented economic growth, with Nokia accounting for more than 70 per cent of the Helsinki stock exchange's market capitalisation. That decade transformed Finland into one of the most specialised countries in information and communication technology, despite its having undergone a severe economic crisis since World War II.

The gamble involved not just the motivation to support new technology, but a substantial investment through the Finnish Funding Agency for Technology and Innovation into Nokias research and development projects. This funding was later returned in the form of colossal tax revenue, employment opportunities and further demand for skilled human resources. All these resulted in an ecosystem with a better educational system and entrepreneurial opportunities.

Owing to the years of political turmoil and instability, Nepal missed out on these past industrial revolutions. But overlooking the current one might leave us far behind.

Global AI phenomenon

A recent study of the global AI phenomenon has shown that developed countries have invested heavily in talent and the associated market and have already started to see a direct contribution from AI in their economy. Some African countries are making sure that they are not being left behind, with proper investment in AI research and development. AI growth in Africa has seen applications in the area of agriculture and healthcare. Google, positioning itself to be an AI-first company, has caught this trend in Africa and opened its first African AI lab in Accra, Ghana.

So is Nepal too late to the party? Perhaps. But it still has a chance of catching up. Instead of scattering its focus and its available resources, the country now needs to concentrate its investments on AI and technology. That should start with the central government drawing up a concrete plan for AI development for the coming decade.

Similar policies have already been released by many other countries, including Nepal's neighbours India and China. It is unfortunate that China's AI strategy, announced at the 19th Party Congress by Chinese President Xi Jinping in 2017, received close to no attention in Nepal compared to the Belt and Road Initiative (BRI) announced in 2013.

An essential component of such a strategic plan should be enhancing Nepal's academic institutions. Fortunately, any such programme from the government could be facilitated by recent initiatives like Naami Nepal, an AI research organisation, or NIC Nepal, an innovation centre started by Mahabir Pun.

Moreover, thanks to the private sector, Nepal has also begun to see AI-based companies like Fuse machines or Paaila Technologies that are attempting to close the gap. It has now become necessary to leverage AI for inclusive economic growth to fulfil our dreams of prosperity.


Nepal should gamble on AI - The Phnom Penh Post

Passengers threaten to open cockpit door on AI flight; DGCA seeks action – Times of India

NEW DELHI: The Directorate General of Civil Aviation has asked Air India to act against unruly passengers who banged on the cockpit door and misbehaved with crew of a delayed Delhi-Mumbai flight on Thursday (Jan 2).

While threatening to break open the door, some passengers asked the Boeing 747's pilots to come out of the cockpit and explain the situation, a few hours after the jumbo jet had returned to the bay at IGI Airport from the runway due to a technical snag. AI is yet to take a call on beginning proceedings under the strict no-fly list against the unruly passengers of this flight.

DGCA chief Arun Kumar said: "We have asked the airline to act against the unruly behaviour."

AI spokesman Dhananjay Kumar said: "A video of few passengers of AI 865 of January 2 is being widely circulated in different forums. That flight was considerably delayed due to technical reasons. AI management has asked the operating crew for a detailed report on the reported misbehaviour by some passengers. Further action would be considered after getting the report."

The 24-year-old B747 (VT-EVA) was to operate as AI 865 at 10.10 am on Thursday. "Passengers had boarded the flight by 9.15 am. The aircraft taxied out at 10 am and returned from the taxiway in about 10 minutes. Attempts were made to rectify the snag. Finally, passengers were asked to alight from the plane at about 2.20 pm and were sent to Mumbai by another aircraft at 6 pm Thursday," said an AI official. So there was a delay of about eight hours before the passengers took off for their destination.

While airlines should do their best to minimise passenger woes during flight delays, unruly behaviour by flyers targeting crew is unacceptable globally. India also now has a no-fly list under which disruptive passengers can be barred from flying for up to a lifetime, depending on the gravity of their unruly behaviour.

Problems on board the B747 (VT-EVA), named Agra, began when passengers got restive after waiting more than a couple of hours for the snag to be rectified.

Videos have emerged showing some young passengers banging on the cockpit door, asking the pilots to come out. "Captain, please come out. Loser, come out. Come out or we will break the door," they yell at the cockpit crew. The cockpit is on the upper deck of B747s, where AI has its business class.


Watch this AI goalie psych out its opponent in the most hilarious way – Science Magazine

By Matthew Hutson, Dec. 26, 2019, 10:00 AM

VANCOUVER, CANADA: No, the red player in the video above isn't having a seizure. And the blue player isn't drunk. Instead, you're watching what happens when one artificial intelligence (AI) gets the better of another, simply by behaving in an unexpected way.

One way to make AI smarter is to have it learn from its environment. Cars of the future, for example, will be better at reading street signs and avoiding pedestrians as they gain more experience. But hackers can exploit these systems with adversarial attacks: By subtly and precisely modifying an image, say, you can fool an AI into misidentifying it. A stop sign with a few stickers on it might be seen as a speed limit sign, for example. The new study reveals AI can be fooled not only into seeing something it shouldn't, but also into behaving in a way it shouldn't.
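The stop-sign idea can be sketched with a toy model. This is a minimal, invented illustration, not code from the study: the three-feature "image", the weights, and the step size are all made up. It shows the core trick behind gradient-sign (FGSM-style) attacks, where a small, bounded nudge against the sign of a classifier's weights flips its decision.

```python
import numpy as np

# Toy linear classifier: score = w . x + b, class "stop sign" if score > 0.
# The weights and features are invented purely for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return "stop sign" if w @ x + b > 0 else "speed limit sign"

x = np.array([1.0, 0.2, 0.3])  # original "image" features

# FGSM-style attack: move each feature a small step eps in the direction
# that decreases the score, i.e. -sign(w). Every pixel changes by at most eps.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x))      # classified as a stop sign
print(predict(x_adv))  # same image, subtly nudged, now misclassified
print(np.max(np.abs(x_adv - x)))  # the perturbation never exceeds eps
```

The same bounded-perturbation idea, applied to a deep network's gradients instead of a hand-written linear score, is what makes a few stickers on a physical sign sufficient to change its predicted class.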

The study takes place in the world of simulated sports: soccer, sumo wrestling, and a game where a person stops a runner from crossing a line. Typically, both competitors train by playing against each other. Here, the red bot trains against an already expert blue bot. But instead of letting the blue bot continue to learn, the red bot hacks the system by falling down and not playing the game as it should. As a result, the blue bot begins to play terribly, wobbling to and fro like a drunk pirate and losing up to twice as many games as it should, according to research presented here this month at the Neural Information Processing Systems conference.

Imagine that drivers tend to handle their car a certain way just before changing lanes. If an autonomous vehicle (AV) were to use reinforcement learning, it might depend on this regularity and swerve in response not to the lane changing, but to the handling correlated with lane changing. An adversarial AV might then learn that the victim AV responds in this way, and use that against it. So, all it has to do is handle itself in that subtle, particular way associated with lane changing, and the victim AV will swerve out of the way.
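The lane-change scenario can be caricatured in a few lines of code. Everything here is hypothetical, invented for illustration (the `wobble` cue, the 0.8 threshold, and `victim_policy` itself): a policy that learned to key on a cue merely correlated with lane changes during training can be triggered by an adversary that produces the cue without ever changing lanes.

```python
# Toy sketch of an exploitable learned shortcut. In training data, a steering
# "wobble" always preceded a lane change, so the victim policy keys on the
# wobble rather than on the lane change itself.

def victim_policy(observation):
    # Learned shortcut: swerve whenever the wobble cue exceeds a threshold,
    # regardless of whether a lane change is actually happening.
    return "swerve" if observation["wobble"] > 0.8 else "hold_lane"

honest_lane_change = {"wobble": 0.9, "lane_change": True}
normal_driving     = {"wobble": 0.1, "lane_change": False}
adversarial_cue    = {"wobble": 0.9, "lane_change": False}  # wobble only

print(victim_policy(honest_lane_change))  # swerve (appropriate)
print(victim_policy(normal_driving))      # hold_lane
print(victim_policy(adversarial_cue))     # swerve (needless: exploited)
```

The adversarial red bot in the study does essentially this: rather than playing well, it produces the observations that trigger the victim's worst learned responses.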

Stock-trading algorithms that use reinforcement learning might also come to depend on exploitable cues.


The Pentagon Wants AI-Driven Drone Swarms for Search and Rescue Ops – Nextgov

The Defense Department's central artificial intelligence development effort wants to build an artificial intelligence-powered drone swarm capable of independently identifying and tracking targets, and maybe even saving lives.

The Pentagon's Joint Artificial Intelligence Center, or JAIC, issued a request for information to find out whether AI developers and drone swarm builders can come together to support search and rescue missions.

Search and rescue operations are covered under one of the four core JAIC research areas: humanitarian aid and disaster relief. The program also works on AI solutions for predictive maintenance, cyberspace operations and robotic process automation.

The goal of the RFI is to discover whether industry can deliver a full-stack search and rescue drone swarm that can self-pilot, detect humans and other targets, and stream data and video back to a central location. The potential solicitation would also look for companies or teams that can provide algorithms, machine training processes and data to supplement those provided by the government.

The ideal result would be a contract with several vendors that together could provide "the capability to fly to a predetermined location/area, find people and manmade objects (through onboard edge processing) and cue analysts to look at detections sent via a datalink to a control station," according to the RFI. "Sensors shall be able to stream full motion video to an analyst station during the day or night; though, the system will not normally be streaming as the AI will be monitoring the imagery instead of a person."

The system has to have enough edge processing power to enable the AI to fly, detect and monitor without any human intervention, while also being able to stream live video to an operator and allow that human to take control of the drones, if needed.

The RFI contains a number of must-haves.

The RFI also notes all training data will be government-owned and classified. All development work will be done using government-owned data and on secure government systems.

Responses to the RFI are due by 11 a.m. Jan. 20.
