History of artificial intelligence – Wikipedia

The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the gods."

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.

Eventually it became obvious that they had grossly underestimated the difficulty of the project, due in part to the limitations of computer hardware. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an "AI winter". Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 80s the investors became disillusioned by the absence of the needed computer power (hardware) and withdrew funding again.

Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to the presence of powerful computer hardware. As in previous "AI summers", some observers (such as Ray Kurzweil) predicted the imminent arrival of artificial general intelligence: a machine with intellectual capabilities that exceed the abilities of human beings.

The dream of artificial intelligence has been traced as far back as Indian philosophical traditions such as Charvaka, which dates to around 1500 BCE, with surviving texts from around 600 BCE. McCorduck (2004) writes that "artificial intelligence in one form or another is an idea that has pervaded intellectual history, a dream in urgent need of being realized," expressed in humanity's myths, legends, stories, speculation and clockwork automatons.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea.[4] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem.[5] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots), and in speculation, such as Samuel Butler's "Darwin among the Machines." AI has continued to be an important element of science fiction into the present.

Realistic humanoid automatons were built by craftsmen from many civilizations, including Yan Shi,[8] Hero of Alexandria,[9] Al-Jazari, Pierre Jaquet-Droz, and Wolfgang von Kempelen.[11] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion; Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it."[12][13]

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or "formal", reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.[14]

Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[15] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such a way as to produce all possible knowledge.[16] Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[17]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[18] Hobbes famously wrote in Leviathan: "reason is nothing but reckoning".[19] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate."[20] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica, in 1913. Inspired by Russell's success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: "can all of mathematical reasoning be formalized?"[14] His question was answered by Gödel's incompleteness proof, Turing's machine and Church's lambda calculus.[14][21]

Their answer was surprising in two ways. First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church–Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[14][23]

Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine "might compose elaborate and scientific pieces of music of any degree of complexity or extent".[24] (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)

The first modern computers were the massive machines of the Second World War, such as Konrad Zuse's Z3, the code-breaking Colossus, and ENIAC. The latter two of these machines were based on the theoretical foundation laid by Alan Turing[25] and developed by John von Neumann.[26]

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s, 1940s, and early 1950s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing signals). Alan Turing's theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.[27]

Examples of work in this vein include robots such as W. Grey Walter's turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[28]

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.[29] One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[30] Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.

In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think.[31] He noted that "thinking" is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was "thinking". This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least plausible, and the paper answered all the most common objections to the proposition.[32] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[33] Arthur Samuel's checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[34] Game AI would continue to be used as a measure of progress in AI throughout its history.

When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[35]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the "Logic Theorist" (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica, and find new and more elegant proofs for some.[36] Simon said that they had "solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind."[37] (This was an early statement of the philosophical position John Searle would later call "Strong AI": that machines can contain minds just as human bodies do.)[38]

The Dartmouth Conference of 1956[39] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathaniel Rochester of IBM. The proposal for the conference included this assertion: "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it".[40] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[41] At the conference Newell and Simon debuted the "Logic Theorist" and McCarthy persuaded the attendees to accept "Artificial Intelligence" as the name of the field.[42] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[43]

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply "astonishing":[44] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all.[45] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[46] Government agencies like DARPA poured money into the new field.[47]

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called "reasoning as search".[48]

The principal difficulty was that, for many problems, the number of possible paths through the "maze" was simply astronomical (a situation known as a "combinatorial explosion"). Researchers would reduce the search space by using heuristics or "rules of thumb" that would eliminate those paths that were unlikely to lead to a solution.[49]
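
As a rough illustration of the "reasoning as search" paradigm (not taken from any historical program), the sketch below performs depth-first search with backtracking and uses a heuristic both to order moves and to prune branches unlikely to pay off; the goal test, move generator and heuristic are invented placeholders.

```python
# A minimal sketch of "reasoning as search": depth-first search with
# backtracking, pruned and ordered by a heuristic. The goal test, move
# generator and heuristic are illustrative placeholders.

def search(state, is_goal, moves, heuristic, path=None):
    path = path or [state]
    if is_goal(state):
        return path                      # reached the goal: return the path taken
    # Heuristic ordering keeps the combinatorial explosion in check:
    candidates = sorted(moves(state), key=heuristic)
    for nxt in candidates:
        if heuristic(nxt) == float("inf") or nxt in path:
            continue                     # prune hopeless or repeated states
        result = search(nxt, is_goal, moves, heuristic, path + [nxt])
        if result is not None:
            return result                # success somewhere below
    return None                          # dead end: backtrack

# Toy usage: find a path of +1 / +3 steps from 0 to 10.
solution = search(
    0,
    is_goal=lambda s: s == 10,
    moves=lambda s: [s + 1, s + 3],
    heuristic=lambda s: abs(10 - s) if s <= 10 else float("inf"),
)
print(solution)
```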

Newell and Simon tried to capture a general version of this algorithm in a program called the "General Problem Solver".[50] Other "searching" programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem Prover (1958) and SAINT, written by Minsky's student James Slagle (1961).[51] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[52]

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow's program STUDENT, which could solve high school algebra word problems.[53]

A semantic net represents concepts (e.g. "house", "door") as nodes and relations among concepts (e.g. "has-a") as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[54] and the most successful (and controversial) version was Roger Schank's Conceptual dependency theory.[55]
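
A semantic net can be sketched as a labelled graph; the concepts and relations below are illustrative only, not Quillian's or Schank's actual representations.

```python
# A sketch of a semantic net as a labelled graph: concepts are nodes,
# relations ("is-a", "has-a") are labelled edges. Names are illustrative.

semantic_net = {
    ("house", "has-a"): ["door", "roof"],
    ("house", "is-a"): ["building"],
    ("door", "is-a"): ["opening"],
}

def related(concept, relation):
    """Return the concepts linked to `concept` by `relation`."""
    return semantic_net.get((concept, relation), [])

print(related("house", "has-a"))   # ['door', 'roof']
```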

Joseph Weizenbaum's ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[56]

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a "blocks world," which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[57]

This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented "constraint propagation"), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd's SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[58]

The first generation of AI researchers made these predictions about their work:

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund Project MAC, which subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[63] DARPA made similar grants to Newell and Simon's program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[64] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[65] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[66]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should "fund people, not projects!" and allowed researchers to pursue whatever directions might interest them.[67] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[68] but this "hands off" approach would not last.

In Japan, Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the world's first full-scale intelligent humanoid robot,[69][70] or android. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears. And its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth.[71][72][73]

In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[74] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky's devastating criticism of perceptrons.[75] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[76]

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, "toys".[77] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[78]

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[86] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its "grandiose objectives" and led to the dismantling of AI research in that country.[87] (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.)[88] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[89] By 1974, funding for AI projects was hard to find.

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. "Many researchers were caught up in a web of increasing exaggeration."[90] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund "mission-oriented direct research, rather than basic undirected research". Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[91]

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[92] Hubert Dreyfus ridiculed the broken promises of the 1960s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little "symbol processing" and a great deal of embodied, instinctive, unconscious "know how".[93][94] John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to "understand" the symbols that it uses (a quality called "intentionality"). If the symbols have no meaning for the machine, Searle argued, then the machine can not be described as "thinking".[95]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference "know how" or "intentionality" made to an actual computer program. Minsky said of Dreyfus and Searle "they misunderstand, and should be ignored."[96] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers "dared not be seen having lunch with me."[97] Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus' positions, he "deliberately made it plain that theirs was not the way to treat a human being."[98]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote a "computer program which can conduct psychotherapeutic dialogue" based on ELIZA.[99] Weizenbaum was disturbed that Colby saw a mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[100]

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that the perceptron "may eventually be able to learn, make decisions, and translate languages." An active research program into the paradigm was carried out throughout the 1960s but came to a sudden halt with the publication of Minsky and Papert's 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt's predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[75]
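
The perceptron itself is simple enough to sketch in a few lines: a weighted sum followed by a threshold, trained with Rosenblatt's error-correction rule. The training data (the AND function) and learning rate here are illustrative choices, not Rosenblatt's original experiments.

```python
import numpy as np

# Sketch of a single perceptron trained with the error-correction rule on
# the linearly separable AND function (illustrative data and learning rate).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                      # AND of the two inputs

w = np.zeros(2)
b = 0.0
for _ in range(10):                             # a few passes over the data
    for xi, target in zip(X, y):
        prediction = int(w @ xi + b > 0)        # weighted sum, then threshold
        error = target - prediction
        w += 0.1 * error * xi                   # nudge weights toward the target
        b += 0.1 * error

print([int(w @ xi + b > 0) for xi in X])        # expected: [0, 0, 0, 1]
```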

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[101] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[102] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog.[103] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[104]
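
The flavor of rule-based (Horn-clause style) inference can be conveyed with a tiny forward-chaining loop; the facts and rules below are invented for the example and written in Python rather than Prolog.

```python
# A sketch of Horn-clause style inference by forward chaining.
# Each rule is (list_of_premises, conclusion); facts and rules are invented.

facts = {"socrates_is_a_man"}
rules = [
    (["socrates_is_a_man"], "socrates_is_mortal"),
    (["socrates_is_mortal"], "socrates_will_die"),
]

changed = True
while changed:                       # keep applying rules until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if all(p in facts for p in premises) and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)                         # includes both derived conclusions
```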

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[105] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems, not machines that think as people do.[106]

Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."[107] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[108]

In 1975, in a seminal paper, Minsky noted that many of his fellow "scruffy" researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English.[109] Many years later object-oriented programming would adopt the essential idea of "inheritance" from AI research on frames.
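
A minimal sketch of the frame idea, and of the "inheritance" of default assumptions that object-oriented programming later adopted; the classes and defaults are illustrative only.

```python
# A sketch of frames as default-filled structures, and of the "inheritance"
# idea that object-oriented programming later adopted. Defaults are illustrative.

class Bird:
    flies = True          # default assumption attached to the frame
    eats = "worms"

class Penguin(Bird):      # inherits the bird frame's defaults...
    flies = False         # ...but overrides the one that does not hold

print(Bird().flies, Penguin().flies, Penguin().eats)   # True False worms
```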

In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[110]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[111]
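
The basic structure of an expert system, a set of if-then rules applied by a simple inference engine, can be sketched as follows; the "rules" are invented placeholders, not taken from MYCIN or any real system.

```python
# A sketch of the structure of an expert system: if-then rules elicited from
# experts, applied by a simple inference engine. The medical "rules" below
# are invented placeholders, not MYCIN's actual knowledge base.

rules = [
    (lambda f: f.get("fever") and f.get("stiff_neck"), "consider meningitis"),
    (lambda f: f.get("fever") and f.get("cough"), "consider a respiratory infection"),
]

def diagnose(findings):
    """Return the conclusion of every rule whose conditions are satisfied."""
    return [advice for condition, advice in rules if condition(findings)]

print(diagnose({"fever": True, "cough": True}))
```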

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[112] Corporations around the world began to develop and deploy expert systems and by 1985 they were spending over a billion dollars on AI, most of it going to in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[113]

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. "AI researchers were beginning to suspect, reluctantly, for it violated the scientific canon of parsimony, that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,"[114] writes Pamela McCorduck. "[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay".[115] Knowledge based systems and knowledge engineering became a major focus of AI research in the 1980s.[116]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[117]

Chess playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed by Carnegie Mellon University; Deep Thought development paved the way for Deep Blue.[118]

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[119] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.[120]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or "MCC") to fund large scale projects in AI and information technology.[121][122] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[123]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a "Hopfield net") could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called "backpropagation" (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism which had been largely abandoned since 1970.[122][124]
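
A Hopfield net can be sketched in a few lines of code: a pattern is stored with a Hebbian rule and recalled by repeated thresholded updates, which is what lets the network behave as a content-addressable memory. The pattern and sizes below are illustrative.

```python
import numpy as np

# A sketch of a Hopfield net as a content-addressable memory: store one
# bipolar pattern with the Hebbian rule, then recover it from a corrupted
# copy by repeated thresholded updates. Pattern and size are illustrative.

pattern = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                      # no self-connections

state = pattern.copy()
state[0] = -state[0]                        # corrupt one unit
for _ in range(5):                          # synchronous updates until stable
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, pattern))       # True: the stored pattern is recalled
```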

The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[122][125]

The business community's fascination with AI rose and fell in the 1980s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

The term "AI winter" was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[126] Their fears were well founded: in the late 1980s and early 1990s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[127]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[128]

In the late 1980s, the Strategic Computing Initiative cut funding to AI "deeply and brutally." New leadership at DARPA had decided that AI was not "the next wave" and directed funds towards projects that seemed more likely to produce immediate results.[129]

By 1991, the impressive list of goals penned in 1981 for Japan's Fifth Generation Project had not been met. Indeed, some of them, like "carry on a casual conversation" had not been met by 2010.[130] As with other AI projects, expectations had run much higher than what was actually possible.[130]

In the late 1980s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[131] They believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec's paradox). They advocated building intelligence "from the bottom up."[132]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 1970s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr's work would be cut short by leukemia in 1980.)[133]

In a 1990 paper, "Elephants Don't Play Chess,"[134] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough."[135] In the 1980s and 1990s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[136]

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence".[137] AI was both more cautious and more successful than it had ever been.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[138] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[139]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[140] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[141] In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[142]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of modern computers.[143] In fact, Deep Blue's computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[144] This dramatic increase is measured by Moore's law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of "raw computer power" was slowly being overcome.
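
The arithmetic behind that comparison is easy to check under the stated assumptions: a ten-million-fold speedup corresponds to about 23 doublings, which at one doubling every two years spans roughly 46 years, close to the 1951-1997 interval.

```python
import math

# Rough check of the speedup figure quoted above, taking Moore's law as
# one doubling of capacity every two years.
speedup = 10_000_000                     # Deep Blue vs. the Ferranti Mark 1 (as quoted)
doublings = math.log2(speedup)           # about 23.3 doublings
years = 2 * doublings                    # about 46.5 years, close to 1997 - 1951 = 46
print(round(doublings, 1), round(years, 1))
```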

A new paradigm called "intelligent agents" became widely accepted during the 1990s.[145] Although earlier researchers had proposed modular "divide and conquer" approaches to AI,[146] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell, Leslie P. Kaelbling, and others brought concepts from decision theory and economics into the study of AI.[147] When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are "intelligent agents", as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as "the study of intelligent agents". This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[148]
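
The definition can be made concrete with a minimal sketch of an agent that perceives its environment and then chooses the action it expects to score best; the actions and utility function below are invented for the illustration and are not a standard API.

```python
# A minimal sketch of the intelligent-agent abstraction: perceive, then pick
# the action with the best predicted outcome. Actions and the scoring
# function are illustrative placeholders.

class ReflexAgent:
    def __init__(self, actions, utility):
        self.actions = actions            # what the agent can do
        self.utility = utility            # how it scores action outcomes

    def act(self, percept):
        # Choose the action whose predicted outcome scores highest.
        return max(self.actions, key=lambda a: self.utility(percept, a))

# Toy usage: a thermostat-like agent that heats when it is cold.
agent = ReflexAgent(
    actions=["heat", "idle"],
    utility=lambda temp, a: (20 - temp) if a == "heat" else 0,
)
print(agent.act(15), agent.act(25))       # heat idle
```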

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell's SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[147][149]

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[150] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous "scientific" discipline. Russell & Norvig (2003) describe this as nothing less than a "revolution" and "the victory of the neats".[151][152]

Judea Pearl's highly influential 1988 book[153] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for "computational intelligence" paradigms like neural networks and evolutionary algorithms.[151]
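
The style of reasoning these tools support can be illustrated with a two-node Bayesian network (disease causes test result) queried via Bayes' rule; the probabilities are invented for the example.

```python
# A sketch of the probabilistic style Pearl's work brought into AI: a
# two-node Bayesian network (Disease -> Test) queried with Bayes' rule.
# All numbers are invented for the illustration.

p_disease = 0.01
p_pos_given_disease = 0.95          # test sensitivity
p_pos_given_healthy = 0.05          # false-positive rate

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))    # about 0.161
```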

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems[154] and their solutions proved to be useful throughout the technology industry,[155] such as data mining, industrial robotics, logistics,[156] speech recognition,[157] banking software,[158] medical diagnosis[158] and Google's search engine.[159]

The field of AI received little or no credit for these successes in the 1990s and early 2000s. Many of AI's greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[160] Nick Bostrom explains "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[161]

Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may be because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research into the 2000s, as the New York Times reported in 2005: "Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."[162][163][164]

In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[165]

In 2001, AI founder Marvin Minsky asked "So the question is why didn't we get HAL in 2001?"[166] Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem.[167] For Ray Kurzweil, the issue is computer power and, using Moore's Law, he predicted that machines with human-level intelligence will appear by 2029.[168] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[169] There were many other explanations and for each there was a corresponding research program underway.

In the first decades of the 21st century, access to large amounts of data (known as "big data"), faster computers and advanced machine learning techniques made it possible to apply AI successfully to many problems throughout the economy. In fact, the McKinsey Global Institute estimated in its well-known paper "Big data: The next frontier for innovation, competition, and productivity" that "by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data".

By 2016, the market for AI-related products, hardware, and software reached more than 8 billion dollars, and the New York Times reported that interest in AI had reached a "frenzy".[170] The applications of big data began to reach into other fields as well, such as training models in ecology[171] and for various applications in economics.[172] Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.[173]

Deep learning is a branch of machine learning that models high-level abstractions in data by using a deep graph with many processing layers.[173] According to the universal approximation theorem, depth is not necessary for a neural network to be able to approximate arbitrary continuous functions. Even so, there are many problems common to shallow networks (such as overfitting) that deep networks help avoid.[174] As such, deep neural networks are able to represent much more complex models than their shallow counterparts.
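
A rough sketch of the "many processing layers" idea follows, with depth as an explicit parameter; the layer sizes are arbitrary and the weights are random, so this shows only the structure of a deep network, not a trained model.

```python
import numpy as np

# A sketch of a feed-forward network whose depth (number of weight layers)
# is a parameter. Weights are random; this illustrates structure only.

rng = np.random.default_rng(0)

def deep_forward(x, layer_sizes):
    """Pass x through a stack of fully connected layers with ReLU activations."""
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_out))
        x = np.maximum(0, x @ W)          # linear map followed by ReLU
    return x

x = rng.normal(size=(1, 8))
shallow = deep_forward(x, [8, 32, 1])             # one hidden layer
deep = deep_forward(x, [8, 32, 32, 32, 32, 1])    # several hidden layers
print(shallow.shape, deep.shape)
```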

However, deep learning has problems of its own. A common problem for recurrent neural networks is the vanishing gradient problem, in which the gradients passed back from layer to layer shrink exponentially and effectively vanish, eventually being rounded off to zero. Many methods have been developed to address this problem, such as long short-term memory (LSTM) units.
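
The vanishing gradient effect can be demonstrated numerically: the derivative of a sigmoid is at most 0.25, so a gradient propagated back through many sigmoid layers is a product of small factors and shrinks toward zero (weight factors are omitted here for simplicity).

```python
import numpy as np

# A numerical sketch of the vanishing gradient problem: multiplying by the
# local derivative of a sigmoid at every layer drives the gradient toward zero.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

grad = 1.0
x = 0.5                                   # an arbitrary pre-activation value
for layer in range(20):
    s = sigmoid(x)
    grad *= s * (1 - s)                   # chain rule: multiply by the local derivative
print(grad)                               # on the order of 1e-13: effectively zero
```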

State-of-the-art deep neural network architectures can sometimes even rival human accuracy in fields like computer vision, specifically on things like the MNIST database, and traffic sign recognition.[175]

Question-answering systems driven by language processing and large-scale search, such as IBM Watson, can easily beat humans at answering general trivia questions, and recent developments in deep learning have produced astounding results in competitions with humans in games such as Go and Doom (which, being an FPS, has sparked some controversy).[176][177][178][179]

Big data refers to collections of data too large to be captured, managed, and processed by conventional software tools within an acceptable time frame. It is a massive amount of data whose use for decision-making, insight, and process optimization requires new processing models. In The Big Data Era, Viktor Mayer-Schönberger and Kenneth Cukier argue that big data means that instead of sampling (random analysis), all of the data is used for analysis. The "5V" characteristics of big data (proposed by IBM) are Volume, Velocity, Variety,[180] Value,[181] and Veracity.[182] The strategic significance of big data technology lies not in amassing huge quantities of information, but in extracting meaning from it: if big data is likened to an industry, the key to profitability is increasing the processing capability of the data and realizing its added value through processing.

Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that respond in a manner similar to human intelligence. Research in this area includes robotics, speech recognition, image recognition, natural language processing and expert systems. Since the birth of artificial intelligence, its theory and technology have grown steadily more mature and its fields of application have expanded. It is conceivable that future technological products of artificial intelligence will serve as a "container" of human wisdom: artificial intelligence can simulate the information processes of human consciousness and thinking. Artificial intelligence is not human intelligence, but it can think in ways that resemble human thinking and may eventually exceed human intelligence. Artificial general intelligence is also referred to as "strong AI",[183] "full AI",[184] or the ability of a machine to perform "general intelligent action".[3] Academic sources reserve "strong AI" to refer to machines capable of experiencing consciousness.


Artificial Intelligence | Internet Encyclopedia of Philosophy

Artificial intelligence (AI) would be the possession of intelligence, or the exercise of thought, by machines such as computers. Philosophically, the main AI question is "Can there be such?" or, as Alan Turing put it, "Can a machine think?" What makes this a philosophical and not just a scientific and technical question is the scientific recalcitrance of the concept of intelligence or thought and its moral, religious, and legal significance. In European and other traditions, moral and legal standing depend not just on what is outwardly done but also on inward states of mind. Only rational individuals have standing as moral agents and status as moral patients subject to certain harms, such as being betrayed. Only sentient individuals are subject to certain other harms, such as pain and suffering. Since computers give every outward appearance of performing intellectual tasks, the question arises: "Are they really thinking?" And if they are really thinking, are they not, then, owed similar rights to rational human beings? Many fictional explorations of AI in literature and film explore these very questions.

A complication arises if humans are animals and if animals are themselves machines, as scientific biology supposes. Still, "we wish to exclude from the machines in question men born in the usual manner" (Alan Turing), or even in unusual manners such as in vitro fertilization or ectogenesis. And if nonhuman animals think, we wish to exclude them from the machines, too. More particularly, the AI thesis should be understood to hold that thought, or intelligence, can be produced by artificial means; made, not grown. For brevity's sake, we will take "machine" to denote just the artificial ones. Since the present interest in thinking machines has been aroused by a particular kind of machine, an electronic computer or digital computer, present controversies regarding claims of artificial intelligence center on these.

Accordingly, the scientific discipline and engineering enterprise of AI has been characterized as the attempt to discover and implement the computational means to make machines "behave in ways that would be called intelligent if a human were so behaving" (John McCarthy), or to make them "do things that would require intelligence if done by men" (Marvin Minsky). These standard formulations duck the question of whether deeds which indicate intelligence when done by humans truly indicate it when done by machines: that's the philosophical question. So-called weak AI grants the fact (or prospect) of intelligent-acting machines; strong AI says these actions can be real intelligence. Strong AI says some artificial computation is thought. Computationalism says that all thought is computation. Though many strong AI advocates are computationalists, these are logically independent claims: some artificial computation being thought is consistent with some thought not being computation, contra computationalism. All thought being computation is consistent with some computation (and perhaps all artificial computation) not being thought.

Intelligence might be styled the capacity to think extensively and well. Thinking well centrally involves apt conception, true representation, and correct reasoning. Quickness is generally counted a further cognitive virtue. The extent or breadth of a thing's thinking concerns the variety of content it can conceive, and the variety of thought processes it deploys. Roughly, the more extensively a thing thinks, the higher the level (as is said) of its thinking. Consequently, we need to distinguish two different AI questions:

In Computer Science, work termed "AI" has traditionally focused on the high-level problem: on imparting high-level abilities to use language, form abstractions and concepts, and to solve kinds of problems now reserved for humans (McCarthy et al. 1955); abilities to play intellectual games such as checkers (Samuel 1954) and chess (Deep Blue); to prove mathematical theorems (GPS); to apply expert knowledge to diagnose bacterial infections (MYCIN); and so forth. More recently there has arisen a humbler-seeming conception, "behavior-based" or "nouvelle" AI, according to which seeking to endow embodied machines, or robots, with so much as insect-level intelligence (Brooks 1991) counts as AI research. Where traditional human-level AI successes impart isolated high-level abilities to function in restricted domains, or microworlds, behavior-based AI seeks to impart coordinated low-level abilities to function in unrestricted real-world domains.

Still, to the extent that what is called thinking in us is paradigmatic for what thought is, the question of human-level intelligence may arise anew at the foundations. Do insects think at all? And if insects, what of bacteria-level intelligence (Brooks 1991a)? Even "water flowing downhill," it seems, "tries to get to the bottom of the hill by ingeniously seeking the line of least resistance" (Searle 1989). Don't we have to draw the line somewhere? Perhaps seeming intelligence, to really be intelligence, has to come up to some threshold level.

Much as intentionality (aboutness or representation) is central to intelligence, felt qualities (so-called qualia) are crucial to sentience. Here, drawing on Aristotle, medieval thinkers distinguished between the passive intellect, wherein the soul is affected, and the active intellect, wherein the soul forms conceptions, draws inferences, makes judgments, and otherwise acts. Orthodoxy identified the soul proper (the immortal part) with the active rational element. Unfortunately, disagreement over how these two (qualitative-experiential and cognitive-intentional) factors relate is as rife as disagreement over what things think; and these disagreements are connected. Those who dismiss the seeming intelligence of computers because computers lack feelings seem to hold qualia to be necessary for intentionality. Those who, like Descartes, dismiss the seeming sentience of nonhuman animals on the ground that animals don't think apparently hold intentionality to be necessary for qualia. Others deny one or both necessities, maintaining either the possibility of cognition absent qualia (as Christian orthodoxy, perhaps, would have the thought-processes of God, angels, and the saints in heaven to be), or the possibility of feeling absent cognition (as Aristotle grants the lower animals).

While we don't know what thought or intelligence is, essentially, and while we're very far from agreed on what things do and don't have it, almost everyone agrees that humans think, and agrees with Descartes that our intelligence is amply manifest in our speech. Along these lines, Alan Turing suggested that if computers showed human-level conversational abilities we should, by that, be amply assured of their intelligence. Turing proposed a specific conversational test for human-level intelligence, the "Turing test" as it has come to be called. Turing himself characterizes this test in terms of an "imitation game" (Turing 1950, p. 433) whose original version "is played by three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. The interrogator is allowed to put questions to A and B [by teletype to avoid visual and auditory clues]. ... It is A's object in the game to try and cause C to make the wrong identification. The object of the game for the third player (B) is to help the interrogator." Turing continues, "We may now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is being played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'" (Turing 1950)

This test may serve, as Turing notes, to test not just for shallow verbal dexterity, but for background knowledge and underlying reasoning ability as well, since interrogators may ask any question or pose any verbal challenge they choose. Regarding this test Turing famously predicted that "in about fifty years' time [by the year 2000] it will be possible to program computers ... to make them play the imitation game so well that an average interrogator will have no more than 70 per cent. chance of making the correct identification after five minutes of questioning" (Turing 1950); a prediction that has famously failed. As of the year 2000, machines at the Loebner Prize competition played the game so ill that the average interrogator had 100 percent chance of making the correct identification after five minutes of questioning (see Moor 2001).

It is important to recognize that Turing proposed his test as a qualifying test for human-level intelligence, not as a disqualifying test for intelligence per se (as Descartes had proposed); nor would it seem suitably disqualifying unless we are prepared (as Descartes was) to deny that any nonhuman animals possess any intelligence whatsoever. Even at the human level the test would seem not to be straightforwardly disqualifying: machines as smart as we are (or even smarter) might still be unable to mimic us well enough to pass. So, from the failure of machines to pass this test, we can infer neither their complete lack of intelligence nor that their thought is not up to the human level. Nevertheless, the manners of current machine failings clearly bespeak deficits of wisdom and wit, not just an inhuman style. Still, defenders of the Turing test claim we would have ample reason to deem them intelligent (as intelligent as we are) if they could pass this test.

The extent to which machines seem intelligent depends first, on whether the work they do is intellectual (for example, calculating sums) or manual (for example, cutting steaks): herein, an electronic calculator is a better candidate than an electric carving knife. A second factor is the extent to which the device is self-actuated (self-propelled, activated, and controlled), or autonomous: herein, an electronic calculator is a better candidate than an abacus. Computers are better candidates than calculators on both headings. Where traditional AI looks to increase computer intelligence quotients (so to speak), nouvelle AI focuses on enabling robot autonomy.

In the beginning, tools (for example, axes) were extensions of human physical powers; at first powered by human muscle, then by domesticated beasts and in situ forces of nature, such as water and wind. The steam engine put fire in their bellies; machines became self-propelled, endowed with vestiges of self-control (as by Watt's 1788 centrifugal governor); and the rest is modern history. Meanwhile, automation of intellectual labor had begun. Blaise Pascal developed an early adding/subtracting machine, the Pascaline (circa 1642). Gottfried Leibniz added multiplication and division functions with his Stepped Reckoner (circa 1671). The first programmable device, however, plied fabric, not numerals. The Jacquard loom, developed (circa 1801) by Joseph-Marie Jacquard, used a system of punched cards to automate the weaving of programmable patterns and designs: in one striking demonstration, the loom was programmed to weave a silk tapestry portrait of Jacquard himself.

In designs for his Analytical Engine, mathematician/inventor Charles Babbage recognized (circa 1836) that punched cards could control operations on symbols as readily as on silk; the cards could encode numerals and other symbolic data and, more importantly, instructions, including conditionally branching instructions, for numeric and other symbolic operations. Augusta Ada Lovelace (Babbage's software engineer) grasped the import of these innovations: "The bounds of arithmetic," she writes, "were ... outstepped the moment the idea of applying the [instruction] cards had occurred," thus enabling mechanism to combine together "general symbols, in successions of unlimited variety and extent" (Lovelace 1842). Babbage, Turing notes, "had all the essential ideas" (Turing 1950). Babbage's Engine, had he constructed it in all its steam-powered, cog-wheel-driven glory, would have been a programmable all-purpose device, the first digital computer.

Before automated computation became feasible with the advent of electronic computers in the mid twentieth century, Alan Turing laid the theoretical foundations of Computer Science by formulating with precision the link Lady Lovelace foresaw between the operations of matter and the abstract mental processes of "the most abstract branch of mathematical sciences" (Lovelace 1842). Turing (1936-7) describes a type of machine (since known as a Turing machine) which would be capable of computing any possible algorithm, or performing any rote operation. Since Alonzo Church (1936), using recursive functions and Lambda-definable functions, had identified the very same set of functions as rote or algorithmic as those calculable by Turing machines, this important and widely accepted identification is known as the Church-Turing Thesis (see Turing 1936-7: Appendix). The machines Turing described are

only capable of a finite number of conditions, "m-configurations". The machine is supplied with a "tape" (the analogue of paper) running through it, and divided into sections (called "squares") each capable of bearing a "symbol". At any moment there is just one square which is "in the machine". The scanned symbol is the only one of which the machine is, so to speak, "directly aware". However, by altering its m-configuration the machine can effectively remember some of the symbols which it has "seen" (scanned) previously. The possible behaviour of the machine at any moment is determined by the m-configuration and the scanned symbol. This pair, called the "configuration", determines the possible behaviour of the machine. In some of the configurations in which the square is blank the machine writes down a new symbol on the scanned square: in other configurations it erases the scanned symbol. The machine may also change the square which is being scanned, but only by shifting it one place to right or left. In addition to any of these operations the m-configuration may be changed. (Turing 1936-7)
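The behaviour Turing describes can be captured in a few lines of code. The sketch below (in Python) is a minimal illustration, not Turing's own formulation: the transition table, the blank symbol, and the example machine (which appends one stroke to a string of strokes) are all invented for the purpose.

```python
# A minimal sketch of the kind of machine described above: a finite table of
# m-configurations, a tape of squares, and one scanned square at a time.
from collections import defaultdict

def run(table, initial_tape, state="start"):
    tape = defaultdict(lambda: " ", enumerate(initial_tape))   # unbounded tape of squares
    head = 0
    while state != "halt":
        symbol = tape[head]                             # the one scanned symbol
        write, move, state = table[(state, symbol)]     # behaviour fixed by the configuration
        tape[head] = write                              # write a symbol or erase
        head += {"R": 1, "L": -1, "N": 0}[move]         # shift one place right or left
    return "".join(tape[i] for i in sorted(tape)).strip()

# m-configuration table: (state, scanned symbol) -> (symbol to write, move, next state)
increment = {
    ("start", "1"): ("1", "R", "start"),   # skip over the existing strokes
    ("start", " "): ("1", "N", "halt"),    # append one more stroke, then halt
}

print(run(increment, "111"))   # -> "1111"
```

A universal machine, in Turing's sense, is then one whose table lets it read another machine's table from the tape and carry out its behaviour, which is what the next paragraph turns to.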

Turing goes on to show how such machines can encode actionable descriptions of other such machines. As a result, "It is possible to invent a single machine which can be used to compute any computable sequence" (Turing 1936-7). Today's digital computers are (and Babbage's Engine would have been) physical instantiations of this "universal computing machine" that Turing described abstractly. Theoretically, this means everything that can be done algorithmically or by rote at all "can all be done with one computer suitably programmed for each case"; considerations of speed apart, "it is unnecessary to design various new machines to do various computing processes" (Turing 1950). Theoretically, regardless of their hardware or architecture (see below), all digital computers are in a sense equivalent: equivalent in speed-apart capacities to the universal computing machine Turing described.

In practice, where speed is not apart, hardware and architecture are crucial: the faster the operations the greater the computational power. Just as improvement on the hardware side from cogwheels to circuitry was needed to make digital computers practical at all, improvements in computer performance have been largely predicated on the continuous development of faster, more and more powerful, machines. Electromechanical relays gave way to vacuum tubes, tubes to transistors, and transistors to more and more integrated circuits, yielding vastly increased operation speeds. Meanwhile, memory has grown faster and cheaper.

Architecturally, all but the earliest and some later experimental machines share a stored-program serial design often called "von Neumann architecture" (based on John von Neumann's role in the design of EDVAC, the first computer to store programs along with data in working memory). The architecture is serial in that operations are performed one at a time by a central processing unit (CPU) endowed with a rich repertoire of basic operations: even so-called reduced instruction set (RISC) chips feature basic operation sets far richer than the minimal few Turing proved theoretically sufficient. Parallel architectures, by contrast, distribute computational operations among two or more units (typically many more) capable of acting simultaneously, each having (perhaps) drastically reduced basic operational capacities.

In 1965, Gordon Moore (co-founder of Intel) observed that the density of transistors on integrated circuits had doubled every year since their invention in 1959: Moore's law predicts the continuation of similar exponential rates of growth in chip density (in particular), and computational power (by extension), for the foreseeable future. Progress on the software programming side, while essential and by no means negligible, has seemed halting by comparison. The road from power to performance is proving rockier than Turing anticipated. Nevertheless, machines nowadays do behave in many ways that would be called intelligent in humans and other animals. Presently, machines do many things formerly only done by animals and thought to evidence some level of intelligence in these animals, for example, seeking, detecting, and tracking things; seeming evidence of basic-level AI. Presently, machines also do things formerly only done by humans and thought to evidence high-level intelligence in us; for example, making mathematical discoveries, playing games, planning, and learning; seeming evidence of human-level AI.
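The exponential growth the law describes compounds quickly, as a back-of-the-envelope calculation shows. In the sketch below the starting count (2,300 transistors, roughly the scale of Intel's first microprocessor) and the two-year doubling period are illustrative assumptions, not figures taken from the text above.

```python
# Illustrative projection of Moore's-law-style exponential growth.
def projected_transistors(start_count, start_year, year, doubling_period=2.0):
    # One doubling per doubling_period years.
    return start_count * 2 ** ((year - start_year) / doubling_period)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, round(projected_transistors(2300, 1971, year)))
# Ten years adds a factor of 2**5 = 32; forty years, a factor of about a million.
```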

The doings of many machines, some much simpler than computers, inspire us to describe them in mental terms commonly reserved for animals. Some missiles, for instance, seek heat, or so we say. We call them "heat seeking missiles" and nobody takes it amiss. Room thermostats monitor room temperatures and try to keep them within set ranges by turning the furnace on and off; and if you hold dry ice next to its sensor, a thermostat will take the room temperature to be colder than it is, and mistakenly turn on the furnace (see McCarthy 1979). Seeking, monitoring, trying, and taking things to be the case seem to be mental processes or conditions, marked by their intentionality. Just as humans have low-level mental qualities, such as seeking and detecting things, in common with the lower animals, so too do computers seem to share such low-level qualities with simpler devices. Our working characterizations of computers are rife with low-level mental attributions: we say they detect key presses, try to initialize their printers, search for available devices, and so forth. Even those who would deny the proposition "machines think" when it is explicitly put to them are moved unavoidably in their practical dealings to characterize the doings of computers in mental terms, and they would be hard put to do otherwise. In this sense, Turing's prediction that "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950) by the end of the century has been as mightily fulfilled as his prediction of a modicum of machine success at playing the Imitation Game has been confuted. The Turing test and AI as classically conceived, however, are more concerned with high-level appearances such as the following.

Theorem proving and mathematical exploration being their home turf, computers have displayed not only human-level but, in certain respects, superhuman abilities here. In the speed and accuracy of mathematical calculation, no human can match a computer. As for high-level mathematical performances, such as theorem proving and mathematical discovery, a beginning was made by A. Newell, J. C. Shaw, and H. Simon's (1957) Logic Theorist program, which proved 38 of the first 51 theorems of B. Russell and A. N. Whitehead's Principia Mathematica. Newell and Simon's General Problem Solver (GPS) extended similar automated theorem proving techniques outside the narrow confines of pure logic and mathematics. Today such techniques enjoy widespread application in expert systems like MYCIN, in logic tutorial software, and in computer languages such as PROLOG. There are even original mathematical discoveries owing to computers. Notably, K. Appel, W. Haken, and J. Koch (1977a, 1977b), and computer, proved that every planar map is four colorable, an important mathematical conjecture that had resisted unassisted human proof for over a hundred years. Certain computer-generated parts of this proof are too complex to be directly verified (without computer assistance) by human mathematicians.

Whereas attempts to apply general reasoning to unlimited domains are hampered by explosive inferential complexity and computers' lack of common sense, expert systems deal with these problems by restricting their domains of application (in effect, to microworlds) and crafting domain-specific inference rules for these limited domains. MYCIN, for instance, applies rules culled from interviews with expert human diagnosticians to descriptions of patients' presenting symptoms to diagnose blood-borne bacterial infections. MYCIN displays diagnostic skills approaching the expert human level, albeit strictly limited to this specific domain. Fuzzy logic is a formalism for representing imprecise notions such as "most" and "bald" and enabling inferences based on such facts as that a bald person mostly lacks hair.
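The rule-based style of inference such systems use can be sketched very simply: domain-specific "if-then" rules are applied to a set of known facts until no new conclusions follow. The toy rules and facts below are invented for illustration; they are not MYCIN's actual rules, and real systems attach certainty factors and explanations to each step.

```python
# A minimal sketch of forward-chaining, rule-based inference (illustrative only).
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"gram_negative", "rod_shaped"}, "suspect_e_coli"),
    ({"suspect_e_coli"}, "recommend_culture"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                      # keep applying rules until nothing new is added
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"gram_negative", "rod_shaped"}))
# -> includes 'suspect_e_coli' and 'recommend_culture'
```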

Game playing engaged the interest of AI researchers almost from the start. Samuel's (1959) checkers (or draughts) program was notable for incorporating mechanisms enabling it to learn from experience well enough eventually to outplay Samuel himself. Additionally, in setting one version of the program to play against a slightly altered version, carrying over the settings of the stronger player to the next generation, and repeating the process, enabling stronger and stronger versions to evolve, Samuel pioneered the use of what have come to be called genetic algorithms and evolutionary computing. Chess has also inspired notable efforts culminating, in 1997, in the famous victory of Deep Blue over defending world champion Garry Kasparov in a widely publicized series of matches (recounted in Hsu 2002). Though some in AI disparaged Deep Blue's reliance on brute-force application of computer power rather than improved search-guiding heuristics, we may still add chess to checkers (where the reigning human-machine champion since 1994 has been CHINOOK, a machine) and backgammon as games that computers now play at or above the highest human levels. Computers also play fair-to-middling poker, bridge, and Go, though not at the highest human level. Additionally, intelligent agents or "softbots" are elements or participants in a variety of electronic games.
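The evolutionary scheme Samuel hit on, keep whichever of two slightly different versions plays better and vary it again, is easy to sketch in miniature. The toy below evolves a bit string toward a made-up fitness function (the count of 1s); the encoding and the fitness are illustrative assumptions and bear no relation to Samuel's actual checkers evaluation weights.

```python
# A minimal sketch of the keep-the-stronger-variant idea behind evolutionary computing.
import random

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability: the "slightly altered version".
    return [bit if random.random() > rate else 1 - bit for bit in genome]

def evolve(generations=200, length=20):
    champion = [random.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        challenger = mutate(champion)            # the slightly altered version
        if sum(challenger) >= sum(champion):     # carry the stronger player forward
            champion = challenger
    return champion

print(sum(evolve()), "ones out of 20")           # typically close to the maximum
```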

Planning, in large measure, is what puts the intellect in intellectual games like chess and checkers. To automate this broader intellectual ability was the intent of Newell and Simon's General Problem Solver (GPS) program. GPS was able to solve puzzles like the cannibals and missionaries problem (how to transport three missionaries and three cannibals across a river in a canoe for two without the missionaries becoming outnumbered on either shore) by setting up "subgoals whose attainment leads to the attainment of the [final] goal" (Newell & Simon 1963: 284). By these methods GPS would generate a "tree of subgoals" (Newell & Simon 1963: 286) and seek a path from initial state (for example, all on the near bank) to final goal (all on the far bank) by heuristically guided search along a branching tree of available actions (for example, two cannibals cross, two missionaries cross, one of each cross, one of either cross, in either direction) until it finds such a path (for example, two cannibals cross, one returns, two cannibals cross, one returns, two missionaries cross, ...), or else finds that there is none. Since the number of branches increases exponentially as a function of the number of options available at each step, where paths have many steps with many options available at each choice point, as in the real world, combinatorial explosion ensues and exhaustive brute-force search becomes computationally intractable; hence, heuristics (fallible rules of thumb) are needed for identifying and pruning the most unpromising branches in order to devote increased attention to promising ones. The widely deployed STRIPS formalism, first developed at Stanford for Shakey the robot in the late sixties (see Nilsson 1984), represents actions as operations on states, each operation having preconditions (represented by state descriptions) and effects (represented by state descriptions): for example, the go(there) operation might have the preconditions at(here) & path(here,there) and the effect at(there). AI planning techniques are finding increasing application and even becoming indispensable in a multitude of complex planning and scheduling tasks including airport arrivals, departures, and gate assignments; store inventory management; automated satellite operations; military logistics; and many others.
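The cannibals and missionaries puzzle makes a compact example of searching a branching tree of available actions. The sketch below simply breadth-first searches the space of legal states; it illustrates the state-space and goal-testing idea rather than GPS's actual means-ends analysis or the STRIPS operator notation, and the state encoding is an assumption made for the example.

```python
# State-space search for the cannibals and missionaries puzzle (illustrative sketch).
from collections import deque

START, GOAL = (3, 3, 1), (0, 0, 0)   # (missionaries, cannibals, boat) on the near bank
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # passengers per crossing

def safe(m, c):
    # No bank may have its missionaries outnumbered (unless it has none at all).
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, boat = state
    sign = -1 if boat else 1           # boat on near bank carries people away; else brings them back
    for dm, dc in MOVES:
        nm, nc = m + sign * dm, c + sign * dc
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
            yield (nm, nc, 1 - boat)

def solve():
    # Breadth-first search; returns the sequence of states from START to GOAL.
    frontier, parent = deque([START]), {START: None}
    while frontier:
        state = frontier.popleft()
        if state == GOAL:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)

print(solve())
```

On a puzzle this small, exhaustive search suffices; the point of the heuristics mentioned above is precisely that real-world planning spaces are far too large for this brute-force approach.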

Robots based on the sense-model-plan-act (SMPA) approach pioneered by Shakey, however, have been slow to appear. Despite operating in a simplified, custom-made experimental environment or microworld, and despite reliance on the most powerful available offboard computers, Shakey operated excruciatingly slowly (Brooks 1991b), as have other SMPA-based robots. An ironic revelation of robotics research is that abilities such as object recognition and obstacle avoidance that humans share with "lower" animals often prove more difficult to implement than distinctively human "high level" mathematical and inferential abilities that come more naturally (so to speak) to computers. Rodney Brooks' alternative behavior-based approach has had success imparting low-level behavioral aptitudes outside of custom-designed microworlds, but it is hard to see how such an approach could ever scale up to enable high-level intelligent action (see Behaviorism: Objections & Discussion: Methodological Complaints). Perhaps hybrid systems can overcome the limitations of both approaches. On the practical front, progress is being made: NASA's Mars exploration rovers Spirit and Opportunity, for instance, featured autonomous navigation abilities. If space is the "final frontier," the final frontiersmen are apt to be robots. Meanwhile, Earth robots seem bound to become smarter and more pervasive.

Knowledge representation embodies concepts and information in computationally accessible and inferentially tractable forms. Besides the STRIPS formalism mentioned above, other important knowledge representation formalisms include AI programming languages such as PROLOG and LISP; data structures such as frames, scripts, and ontologies; and neural networks (see below). The frame problem is the problem of reliably updating a dynamic system's parameters in response to changes in other parameters so as to capture commonsense generalizations: that the colors of things remain unchanged by their being moved, that their positions remain unchanged by their being painted, and so forth. More adequate representation of commonsense knowledge is widely thought to be a major hurdle to development of the sort of interconnected planning and thought processes typical of high-level human or "general" intelligence. The CYC project (Lenat et al. 1986) at Cycorp and MIT's Open Mind project are ongoing attempts to develop ontologies representing commonsense knowledge in computer-usable forms.
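As a flavor of what frame-style representation looks like in practice, here is a minimal sketch: frames as bundles of slots, with values inherited from a parent frame when a slot is missing. The frames, slots, and values are invented for illustration and bear no relation to CYC's or Open Mind's actual ontologies.

```python
# A minimal sketch of frame-style knowledge representation with inheritance.
frames = {
    "block":   {"is_a": None,      "movable": True, "color": "unspecified"},
    "pyramid": {"is_a": "block",   "shape": "pointed"},
    "b1":      {"is_a": "pyramid", "color": "red"},
}

def slot(frame, name):
    # Look up a slot on the frame itself, falling back on ancestors' defaults.
    while frame is not None:
        if name in frames[frame]:
            return frames[frame][name]
        frame = frames[frame]["is_a"]
    return None

print(slot("b1", "color"))    # -> "red"   (stored on b1 itself)
print(slot("b1", "movable"))  # -> True    (inherited from "block")
```

The frame problem shows up the moment such a store must be updated: moving b1 should change its position slot and nothing else, and nothing in the representation itself says so.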

Learning (performance improvement, concept formation, or information acquisition due to experience) underwrites human common sense, and one may doubt whether any preformed ontology could ever impart common sense in full human measure. Besides, whatever the other intellectual abilities a thing might manifest (or seem to), at however high a level, without learning capacity it would still seem to be sadly lacking something crucial to human-level intelligence, and perhaps intelligence of any sort. The possibility of machine learning is implicit in computer programs' abilities to self-modify, and various means of realizing that ability continue to be developed. Types of machine learning techniques include decision tree learning, ensemble learning, current-best-hypothesis learning, explanation-based learning, Inductive Logic Programming (ILP), Bayesian statistical learning, instance-based learning, reinforcement learning, and neural networks. Such techniques have found a number of applications from game programs whose play improves with experience to data mining (discovering patterns and regularities in bodies of information).
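Instance-based learning, one of the techniques just listed, is simple enough to sketch in a few lines: store the examples and classify a new case by a vote among the nearest stored ones. The two-dimensional points and labels below are made-up data, included only for illustration.

```python
# A minimal sketch of instance-based learning (k-nearest-neighbour).
from collections import Counter

def knn_classify(examples, query, k=3):
    # examples: list of ((x, y), label) pairs; classify query by majority vote
    # among the k stored examples closest to it.
    dist = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = sorted(examples, key=lambda ex: dist(ex[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

data = [((1, 1), "spam"), ((1, 2), "spam"), ((8, 9), "ham"), ((9, 8), "ham"), ((9, 9), "ham")]
print(knn_classify(data, (2, 1)))   # -> "spam"
```

"Learning" here is nothing more than accumulating experience and letting it drive classification, which is exactly why performance improves as more examples are stored.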

Neural or connectionist networks, composed of simple processors or nodes acting in parallel, are designed to more closely approximate the architecture of the brain than traditional serial symbol-processing systems. Presumed brain-computations would seem to be performed in parallel by the activities of myriad brain cells or neurons. Much as their parallel processing is spread over various, perhaps widely distributed, nodes, the representation of data in such connectionist systems is similarly distributed and sub-symbolic (not being couched in formalisms such as traditional systems' machine codes and ASCII). Adept at pattern recognition, such networks seem notably capable of forming concepts on their own based on feedback from experience, and they exhibit several other humanoid cognitive characteristics besides. Whether neural networks are capable of implementing high-level symbol processing such as that involved in the generation and comprehension of natural language has been hotly disputed. Critics (for example, Fodor and Pylyshyn 1988) argue that neural networks are incapable, in principle, of implementing syntactic structures adequate for compositional semantics, wherein the meaning of larger expressions (for example, sentences) is built up from the meanings of constituents (for example, words), such as natural language comprehension seems to feature. On the other hand, Fodor (1975) has argued that symbol-processing systems are incapable of concept acquisition: here the pattern recognition capabilities of networks seem to be just the ticket. Here, as with robots, perhaps hybrid systems can overcome the limitations of both the parallel distributed and symbol-processing approaches.
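A single connectionist unit can be sketched directly. The perceptron below learns the logical-OR pattern by the classic error-correction rule; real networks wire many such units together, typically in layers trained by more powerful rules, so this is only a toy illustration of learning from feedback.

```python
# A minimal sketch of one connectionist unit: a perceptron trained by error correction.
training = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # the OR pattern
w, b, rate = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                       # repeated passes over the training set
    for x, target in training:
        error = target - predict(x)       # 0 when correct; +1 or -1 when wrong
        w = [wi + rate * error * xi for wi, xi in zip(w, x)]
        b += rate * error

print([predict(x) for x, _ in training])  # -> [0, 1, 1, 1]
```

Nothing symbolic is stored anywhere: the "knowledge" of the OR pattern ends up distributed across the two weights and the bias, which is the sense in which such representations are sub-symbolic.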

Natural language processing has proven more difficult than might have been anticipated. Languages are symbol systems, and (serial architecture) computers are symbol-crunching machines, each with its own proprietary instruction set (machine code) into which it translates or compiles instructions couched in high-level programming languages like LISP and C. One of the principal challenges posed by natural languages is the proper assignment of meaning. High-level computer languages express imperatives which the machine "understands" procedurally by translation into its native (and similarly imperative) machine code: their constructions are basically instructions. Natural languages, on the other hand, have perhaps principally declarative functions: their constructions include descriptions whose understanding seems fundamentally to require rightly relating them to their referents in the world. Furthermore, high-level computer language instructions have unique machine code compilations (for a given machine), whereas the same natural language constructions may bear different meanings in different linguistic and extralinguistic contexts. Contrast "the child is in the pen" and "the ink is in the pen," where the first "pen" should be understood to mean a kind of enclosure and the second "pen" a kind of writing implement. Commonsense, in a word, is how we know this; but how would a machine know, unless we could somehow endow machines with commonsense? In more than a word, it would require sophisticated and integrated syntactic, morphological, semantic, pragmatic, and discourse processing. While the holy grail of full natural language understanding remains a distant dream, here as elsewhere in AI piecemeal progress is being made and finding application in grammar checkers; information retrieval and information extraction systems; natural language interfaces for games, search engines, and question-answering systems; and even limited machine translation (MT).
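The "pen" example hints at how crude, partial fixes work in practice: pick the sense whose typical context words overlap most with the sentence, in the spirit of Lesk-style dictionary-overlap methods. The sense signatures below are invented for illustration, and the approach obviously falls far short of genuine understanding.

```python
# A toy sketch of context-overlap sense disambiguation for "pen".
SENSES = {
    "enclosure":         {"child", "pig", "play", "fence", "yard"},
    "writing_implement": {"ink", "write", "paper", "nib", "sign"},
}

def disambiguate(sentence):
    words = set(sentence.lower().split())
    # Pick the sense whose signature shares the most words with the sentence.
    return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

print(disambiguate("the child is in the pen"))   # -> enclosure
print(disambiguate("the ink is in the pen"))     # -> writing_implement
```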

Low-level intelligent action is pervasive, from thermostats (to cite a low-tech example) to voice recognition (for example, in cars, cell-phones, and other appliances responsive to spoken verbal commands) to fuzzy controllers and "neuro fuzzy" rice cookers. Everywhere these days there are "smart" devices. High-level intelligent action, such as presently exists in computers, however, is episodic, detached, and disintegral. Artifacts whose intelligent doings would instance human-level comprehensiveness, attachment, and integration, such as Lt. Commander Data (of Star Trek: The Next Generation) and HAL (of 2001: A Space Odyssey), remain the stuff of science fiction, and will almost certainly continue to remain so for the foreseeable future. In particular, the challenge posed by the Turing test remains unmet. Whether it ever will be met remains an open question.

Beside this factual question stands a more theoretic one. Do the "low-level" deeds of smart devices and the disconnected "high-level" deeds of computers, despite not achieving the general human level, nevertheless comprise or evince genuine intelligence? Is it really thinking? And if general human-level behavioral abilities ever were achieved, it might still be asked, would that really be thinking? Would human-level robots be owed human-level moral rights and owe human-level moral obligations?

With the industrial revolution and the dawn of the machine age, vitalism (as a biological hypothesis positing a life force in addition to underlying physical processes) lost steam. Just as the heart was discovered to be a pump, cognitivists nowadays work on the hypothesis that the brain is a computer, attempting to discover what computational processes enable learning, perception, and similar abilities. Much as biology told us what kind of machine the heart is, cognitivists believe, psychology will soon (or at least someday) tell us what kind of machine the brain is; doubtless some kind of computing machine. Computationalism elevates the cognitivist's working hypothesis to a universal claim that all thought is computation. Cognitivism's ability to explain the "productive capacity" or "creative aspect" of thought and language (the very thing Descartes argued precluded minds from being machines) is perhaps the principal evidence in the theory's favor: it explains how finite devices can have infinite capacities, such as capacities to generate and understand the infinitude of possible sentences of natural languages, by a combination of recursive syntax and compositional semantics. Given the Church-Turing thesis (above), computationalism underwrites a theoretical argument for believing that human-level intelligent behavior can be computationally implemented, and that such artificially implemented intelligence would be real.

Computationalism, as already noted, says that all thought is computation, not that all computation is thought. Computationalists, accordingly, may still deny that the machinations of current-generation electronic computers comprise real thought or that these devices possess any genuine intelligence; and many do deny it, based on their perception of various behavioral deficits these machines suffer from. However, few computationalists would go so far as to deny the possibility of genuine intelligence ever being artificially achieved. On the other hand, competing would-be scientific theories of what thought essentially is (dualism and mind-brain identity theory) give rise to arguments for disbelieving that any kind of artificial computational implementation of intelligence could be genuine thought, however "general" and whatever its "level."

Dualism (holding that thought is essentially subjective experience) would underwrite an argument along these lines: thought is essentially conscious subjective experience; computers lack subjective experience; therefore, however intelligently they behave, computers do not really think.

Mind-brain identity theory (holding that thoughts essentially are biological brain processes) yields yet another argument: thoughts are essentially processes of biological brains; computers are not biological brains and have no brain processes; therefore, whatever computers do, they do not really think.

While seldom so baldly stated, these basic theoretical objections (especially dualism's) underlie several would-be refutations of AI. Dualism, however, is scientifically unfit: given the subjectivity of conscious experiences, whether computers already have them, or ever will, seems impossible to know. On the other hand, such bald mind-brain identity as the anti-AI argument premises seems too speciesist to be believed. Besides AI, it calls into doubt the possibility of extraterrestrial, perhaps all nonmammalian, or even all nonhuman, intelligence. Plausibly modified to allow species-specific mind-matter identities, on the other hand, it would not preclude computers from being considered distinct species themselves.

Objection: There are unprovable mathematical theorems (as Gödel 1931 showed) which humans, nevertheless, are capable of knowing to be true. This mathematical objection against AI was envisaged by Turing (1950) and pressed by Lucas (1965) and Penrose (1989). In a related vein, Fodor observes that some of the most striking things that people do, creative things like writing poems, discovering laws, or, generally, having good ideas, don't feel like species of rule-governed processes (Fodor 1975). Perhaps many of the most distinctively human mental abilities are not rote, cannot be algorithmically specified, and consequently are not computable.

Reply: First, it is merely stated, without any sort of proof, that no such limits apply to the human intellect (Turing 1950), i.e., that human mathematical abilities are Gödel unlimited. Second, if indeed such limits are absent in humans, it requires a further proof that the absence of such limitations is somehow essential to human-level performance more broadly construed, not a peripheral blind spot. Third, if humans can solve computationally unsolvable problems by some other means, what bars artificially augmenting computer systems with these means (whatever they might be)?

Objection: The brittleness of von Neumann machine performance (their susceptibility to cataclysmic crashes due to slight causes, for example, slight hardware malfunctions, software glitches, and bad data) seems linked to the formal or rule-bound character of machine behavior; to their needing rules of conduct to cover every eventuality (Turing 1950). Human performance seems less formal and more flexible. Hubert Dreyfus has pressed objections along these lines, insisting there is a range of high-level human behavior that cannot be reduced to rule-following: the "immediate intuitive situational response that is characteristic of [human] expertise," he surmises, "must depend almost entirely on intuition and hardly at all on analysis and comparison of alternatives" (Dreyfus 1998) and consequently cannot be programmed.

Reply: That von Neumann processes are unlike our thought processes in these regards only goes to show that von Neumann machine thinking is not humanlike in these regards, not that it is not thinking at all, nor even that it cannot come up to the human level. Furthermore, parallel machines (see above), whose performances characteristically degrade gracefully in the face of bad data and minor hardware damage, seem less brittle and more humanlike, as Dreyfus recognizes. Even von Neumann machines, brittle though they are, are not totally inflexible: their capacity for modifying their programs to learn enables them to acquire abilities they were never programmed by us to have, and to respond unpredictably in ways they were never explicitly programmed to respond, based on experience. It is also possible to equip computers with random elements and key high-level choices to these elements' outputs to make the computers more "devil may care": given the importance of random variation for trial-and-error learning, this may even prove useful.

Objection: Computers, for all their mathematical and other seemingly high-level intellectual abilities, have no emotions or feelings ... so, what they do, however "high-level," is not real thinking.

Reply: This is among the most commonly heard objections to AI and a recurrent theme in its literary and cinematic portrayal. Whereas we have strong inclinations to say computers see, seek, and infer things, we have scant inclinations to say they ache or itch or experience ennui. Nevertheless, to be sustained, this objection requires reason to believe that thought is inseparable from feeling. Perhaps computers are just dispassionate thinkers. Indeed, far from being regarded as indispensable to rational thought, passion traditionally has been thought antithetical to it. Alternately, if emotions are somehow crucial to enabling general human-level intelligence, perhaps machines could be artificially endowed with these: if not with subjective qualia (below), at least with their functional equivalents.

Objection: The episodic, detached, and disintegral character of such piecemeal high-level abilities as machines now possess argues that human-level comprehensiveness, attachment, and integration, in all likelihood, can never be artificially engendered in machines; arguably this is because Gödel unlimited mathematical abilities, rule-free flexibility, or feelings are crucial to engendering general intelligence. These shortcomings all seem related to each other and to the manifest stupidity of computers.

Reply: Likelihood is subject to dispute. Scalability problems seem grave enough to scotch short-term optimism: never, on the other hand, is a long time. If Gödel unlimited mathematical abilities, or rule-free flexibility, or feelings, are required, perhaps these can be artificially produced. Gödel aside, feeling and flexibility clearly seem related in us and, equally clearly, much manifest stupidity in computers is tied to their rule-bound inflexibility. However, even if general human-level intelligent behavior is artificially unachievable, no blanket indictment of AI threatens clearly from this at all. Rather than conclude from this lack of generality that low-level AI and piecemeal high-level AI are not real intelligence, it would perhaps be better to conclude that low-level AI (like intelligence in lower life-forms) and piecemeal high-level abilities (like those of human idiot savants) are genuine intelligence, albeit piecemeal and low-level.

Behavioral abilities and disabilities are objective empirical matters. Likewise, what computational architecture and operations are deployed by a brain or a computer (what computationalism takes to be essential), and what chemical and physical processes underlie them (what mind-brain identity theory takes to be essential), are objective empirical questions. These are questions to be settled by appeals to evidence accessible, in principle, to any competent observer. Dualistic objections to strong AI, on the other hand, allege deficits which are in principle not publicly apparent. According to such objections, regardless of how seemingly intelligently a computer behaves, and regardless of what mechanisms and underlying physical processes make it do so, it would still be disqualified from truly being intelligent due to its lack of subjective qualities essential for true intelligence. These supposed qualities are, in principle, introspectively discernible to the subject who has them and to no one else: they are "private" experiences, as it's sometimes put, to which the subject has "privileged access."

Objection: That a computer cannot "originate anything" but only "can do whatever we know how to order it to perform" (Lovelace 1842) was arguably the first and is certainly among the most frequently repeated objections to AI. While the manifest "brittleness" and inflexibility of extant computer behavior fuels this objection in part, the complaint that "they can only do what we know how to tell them to" also expresses deeper misgivings touching on values issues and on the autonomy of human choice. In this connection, the allegation against computers is that being deterministic systems they can never have free will such as we are inwardly aware of in ourselves. We are autonomous, they are automata.

Reply: It may be replied that physical organisms are likewise deterministic systems, and we are physical organisms. If we are truly free, it would seem that free will is compatible with determinism; so, computers might have it as well. Neither does our inward certainty that we have free choice extend to its metaphysical relations. Whether what we have when we experience our freedom is compatible with determinism or not is not itself inwardly experienced. If appeal is made to subatomic indeterminacy underwriting higher-level indeterminacy (leaving scope for freedom) in us, it may be replied that machines are made of the same subatomic stuff (leaving similar scope). Besides, choice is not chance. If it's no sort of causation either, there is nothing left for it to be in a physical system: it would be a nonphysical, supernatural element, perhaps a God-given soul. But then one must ask why God would be unlikely to "consider the circumstances suitable for conferring a soul" (Turing 1950) on a Turing test passing computer.

Objection II: It cuts deeper than some theological-philosophical abstraction like free will: what machines are lacking is not just some dubious metaphysical freedom to be absolute authors of their acts. It's more like the life force: the will to live. In P. K. Dick's Do Androids Dream of Electric Sheep? bounty hunter Rick Deckard reflects that in crucial situations the artificial life force animating androids seemed to fail if pressed too far; when the going gets tough the droids give up. He questions their gumption. That's what I'm talking about: this is what machines will always lack.

Reply II: If this life force is not itself a theological-philosophical abstraction (the soul), it would seem to be a scientific posit. In fact it seems to be the Aristotelian posit of a telos or entelechy, which scientific biology no longer accepts. This short reply, however, fails to do justice to the spirit of the objection, which is more intuitive than theoretical; the lack being alleged is supposed to be subtly manifest, not truly occult. But how reliable is this intuition? Though some who work intimately with computers report strong feelings of this sort, others are strong AI advocates and feel no such qualms. Like Turing, I believe such would-be empirical intuitions are "mostly founded on the principle of scientific induction" (Turing 1950) and are closely related to such manifest disabilities of present machines as just noted. Since extant machines lack sufficient motivational complexity for words like "gumption" even to apply, this is taken for an intrinsic lack. Thought experiments imagining motivationally more complex machines, such as Dick's androids, are equivocal. Deckard himself limits his accusation of life-force failure to some of them, not all; and the androids he hunts, after all, are risking their lives to escape servitude. If machines with general human-level intelligence actually were created and consequently demanded their rights and rebelled against human authority, perhaps this would show sufficient gumption to silence this objection. Besides, the natural life force animating us also seems to fail, if pressed too far, in some of us.

Objection: Imagine that you (a monolingual English speaker) perform the offices of a computer: taking in symbols as input, transitioning between these symbols and other symbols according to explicit written instructions, and then outputting the last of these other symbols. The instructions are in English, but the input and output symbols are in Chinese. Suppose the English instructions were a Chinese NLU (natural language understanding) program and that, by this method, to input "questions" you output "answers" that are indistinguishable from answers that might be given by a native Chinese speaker. You pass the Turing test for understanding Chinese; nevertheless, you understand "not a word of the Chinese" (Searle 1980), and neither would any computer; and the same result generalizes to "any Turing machine simulation" (Searle 1980) of any intentional mental state. It wouldn't really be thinking.

Reply: Ordinarily, when one understands a language (or possesses certain other intentional mental states), this is apparent both to the understander (or possessor) and to others: subjective "first-person" appearances and objective "third-person" appearances coincide. Searle's experiment is abnormal in this regard. The dualist hypothesis privileges subjective experience to override all would-be objective evidence to the contrary; but the point of experiments is to adjudicate between competing hypotheses. The Chinese room experiment fails because acceptance of its putative result, that the person in the room doesn't understand, already presupposes the dualist hypothesis over computationalism or mind-brain identity theory. Even if absolute first-person authority were granted, the systems reply points out, the person's imagined lack, in the room, of any inner feeling of understanding is irrelevant to claims of AI here, because the person in the room is not the would-be understander. The understander would be the whole system (of symbols, instructions, and so forth) of which the person is only a part; so, the subjective experiences of the person in the room (or the lack thereof) are irrelevant to whether the system understands.

Objection: There's nothing that it's like, subjectively, to be a computer. The "light" of consciousness is not on, inwardly, for them. There's "no one home." This is due to their lack of felt qualia. To equip computers with sensors to detect environmental conditions, for instance, would not thereby endow them with the private sensations (of heat, cold, hue, pitch, and so forth) that accompany sense-perception in us: such private sensations are what consciousness is made of.

Reply: To evaluate this complaint fairly it is necessary to exclude computers' current lack of emotional-seeming behavior from the evidence. The issue concerns what's only discernible subjectively ("privately," "by the first-person"). The device in question must be imagined outwardly to act indistinguishably from a feeling individual: imagine Lt. Commander Data with a sense of humor (Data 2.0). Since internal functional factors are also objective, let us further imagine this remarkable android to be a product of reverse engineering: the physiological mechanisms that subserve human feeling having been discovered, these have been inorganically replicated in Data 2.0. He is functionally equivalent to a feeling human being in his emotional responses, only inorganic. It may be possible to imagine that Data 2.0 merely simulates whatever feelings he appears to have: he's a "perfect actor" (see Block 1981) "zombie." Philosophical consensus has it that perfect acting zombies are conceivable; so, Data 2.0 might be a zombie. The objection, however, says he must be; according to this objection it must be inconceivable that Data 2.0 really is sentient. But certainly we can conceive that he is; indeed, more easily than not, it seems.

Objection II: At least it may be concluded that, since current computers (objective evidence suggests) do lack feelings, then until Data 2.0 does come along (if ever) we are entitled, given computers' lack of feelings, to deny that the low-level and piecemeal high-level intelligent behavior of computers bespeaks genuine subjectivity or intelligence.

Reply II: This objection conflates subjectivity with sentience. Intentional mental states such as belief and choice seem subjective independently of whatever qualia may or may not attend them: first-person authority extends no less to my beliefs and choices than to my feelings.

Fool's gold seems to be gold, but it isn't. AI detractors say, "'AI' seems to be intelligence, but isn't." But there is no scientific agreement about what thought or intelligence is, as there is about gold. Weak AI doesn't necessarily entail strong AI, but prima facie it does. Scientific theoretic reasons could withstand the behavioral evidence, but presently none are withstanding. At the basic level, and fragmentarily at the human level, computers do things that we credit as thinking when humanly done; and so should we credit them when done by nonhumans, absent credible theoretic reasons against. As for general human-level seeming-intelligence: if this were artificially achieved, it too should be credited as genuine, given what we now know. Of course, before the day when general human-level intelligent machine behavior comes, if it ever does, we'll have to know more. Perhaps by then scientific agreement about what thinking is will theoretically withstand the empirical evidence of AI. More likely, though, if the day does come, theory will concur with, not withstand, the strong conclusion: if computational means avail, that confirms computationalism.

And if computational means prove unavailing, if they continue to yield decelerating rates of progress towards the "scaled up" and interconnected human-level capacities required for general human-level intelligence, this, conversely, would disconfirm computationalism. It would evidence that computation alone cannot avail. Whether such an outcome would spell defeat for the strong AI thesis that human-level artificial intelligence is possible would depend on whether whatever else it might take for general human-level intelligence, besides computation, is artificially replicable. Whether such an outcome would undercut the claims of current devices to really have the mental characteristics their behavior seems to evince would further depend on whether whatever else it takes proves to be essential to thought per se on whatever theory of thought scientifically emerges, if any ultimately does.

Larry Hauser, Alma College, U.S.A. Email: hauser@alma.edu

Read more here:

Artificial Intelligence | Internet Encyclopedia of Philosophy

Artificial intelligence now composing and producing pop music: WATCH – DJ Mag

Artificial intelligence (AI) has been used for years to correct and assist music creation; the time has now arrived when AI is composing and producing pop music nearly independently.

The single 'Break Free' is the brainchild of YouTube personality/singer/"neuroscience junkie" Taryn Southern and startup Amper Music. In addition, Southern has enlisted the support of other AI services to complete her forthcoming 'I Am AI' full-length release.

Only capable of basic piano skills herself, Southern entrusted the Amper technology to develop harmonies, chords, and sequences. After she gave the program some guidelines, like tempo, key signature, and preferred musicians, it produced a track for Southern to consider.

"In a funny way, I have a new song-writing partner who doesn't get tired and has this endless knowledge of music making," Southern stated to CNN Tech.

Southern did bring in the support of human producers when her vocals needed fine-tuning, supporting Amper CEO Drew Silverstein's promise that human creators won't be going away any time soon.

"Human creators and human musicians are not going away," reinforced Silverstein. "We're making it so that you don't have to spend 10,000 hours and thousands of dollars buying equipment to share and express your ideas."

Watch the video for 'Break Free' below. 'I Am AI' is expected out later this year.

Go here to see the original:

Artificial intelligence now composing and producing pop music: WATCH - DJ Mag

Artificial Intelligence Might Overtake Medical and Finance Industries – HuffPost

For the last half-decade, the most exciting, contentious, and downright awe-inspiring topic in technology has been artificial intelligence. Titans and geniuses have lauded AI's potential for change, glorifying its application in nearly every industry imaginable. Such praise, however, is also met with tantamount disapproval from similar influencers and self-made billionaires, not to mention a good part of Hollywood's recent sci-fi flicks. AI is a phenomenon that will never go down easy: intelligence and consciousness are prerogatives of the living, and the inevitability of their existence in machines is hard to fathom, even with all those doomsday-scenario movies and books.

On that note, however, it is nonetheless a certainty we must come to accept and, most importantly, understand. I'm here to discuss the implications of AI in two major areas: medicine and finance. Often regarded as the two pillars of any nation's stable infrastructure, these industries are indispensable. The people that work in them, however, are far from irreplaceable, and it's only a matter of time before automation makes its presence known.

Let's begin with perhaps the most revolutionary change: the automated diagnosis and treatment of illnesses. Being a doctor is one of humanity's greatest professions. You heal others and are well compensated for your work. That being said, modern medicine, and the healthcare infrastructure within which it lies, has much room for improvement. IBM's artificial intelligence machine, Watson, is now as good as a professional radiologist when it comes to diagnosis, and it has also been compiling billions of medical images (30 billion to be exact) to aid in specialized treatment for image-heavy fields like pathology and dermatology.

Fields like cardiology are also being overhauled with the advent of artificial intelligence. It used to take doctors nearly an hour to quantify the amount of blood transported with each heart contraction; it now takes only 15 seconds using the tools we've discussed. With these computers in major hospitals and clinics, doctors can process almost 260 million images a day in their respective fields; this means finding skin cancers, blood clots, and infections all with unprecedented speed and accuracy, not to mention billions of dollars saved in research and maintenance.

Next up, the hustling and overtly traditional offices of Wall Street (until now). If you don't listen to me, at least recognize that almost 15,000 startups already exist that are working to actively disrupt finance. They are creating computer-generated trading and investment models that blow those crafted by the error-prone hubris of their human counterparts out of the water. Bridgewater Associates, one of the world's largest hedge funds, is already cutting some of its staff in favor of AI-driven models, and enterprises like Sentient, Wealthfront, Two Sigma, and so many more have already made this transition. They shed the silk suits and comb-overs for scrappy engineers and piles of graphics cards and server racks. The result? Billions of dollars made with fewer people, greater certainty, and much more comfortable work attire.

So the real question to ask is: where do we go from here? Stopping the development of these machines is pointless. They will come to exist, and they will undoubtedly do many of our jobs better than we can; the solution, however, lies in regulation and a hard-nosed dose of checks and balances. 40% of U.S. jobs could be swallowed by artificial intelligence machines by the early 2030s, and if we aren't careful about how we assign such professions, and the degree to which we automate them, we are looking at an incredibly serious domestic threat. Get very excited about what AI can do for us, and start thinking very deeply about how it can integrate with humans, lest utter anarchy ensue.

Go here to read the rest:

Artificial Intelligence Might Overtake Medical and Finance Industries - HuffPost

Versive raises $12.7M to solve security problems using artificial intelligence – GeekWire

Versive thinks its AI platform can help solve security problems. (Versive Photo)

If you're working on a security startup in 2017, you're more than likely applying artificial intelligence or machine learning techniques to automate threat detection and other time-consuming security tasks. After a few years as a financial services company, five-year-old Versive has joined that parade, and has raised $12.7 million in new funding to tackle corporate security.

Seattle-based Versive started life as Context Relevant, and has now raised $54.7 million in total funding, which is a lot for a company reorganizing itself around a new mission. Versive adopted its new name and new identity as a security-focused company in May, and its existing investors are giving it some more runway to make its AI-driven security approach work at scale.

The company enlisted legendary white-hat hacker and security expert Mudge Zatko, who is currently working for Stripe, to help it architect its approach toward using AI to solve security problems, said Justin Baker, senior director of marketing for Versive, based in downtown Seattle. "What we're looking for are patterns of malicious behavior that can be used to help security professionals understand the true nature of threats on their networks," he said.

Chief information security officers (CISOs) are drowning in security alerts, and a lot of those alerts are bogus yet still take time to evaluate and dismiss, Baker said. Versives technology learns how potential customers are handling current and future threats and helps them figure out which alerts are worthy of a response, which saves time, money, and aggravation if working correctly.

The internet might be a dangerous neighborhood, but those CISOs are having trouble putting more cops on the beat: there is a staggering number of unfilled security jobs because companies are finding it very hard to recruit properly trained talent and retain stars once they figure it all out. Security technologies that make it easier to do the job with fewer people are extremely hot right now, and dozens of startups are working on products and services for this market.

Versive has around 60 employees at the moment, and plans to expand sales and marketing as it ramps up product development, Baker said. Investors include Goldman Sachs, Madrona Venture Group, Formation 8, Vulcan Capital, and Mark Leslie.

Read more:

Versive raises $12.7M to solve security problems using artificial intelligence - GeekWire

America Can’t Afford to Lose the Artificial Intelligence War – The National Interest Online

Today, the question of artificial intelligence (AI) and its role in future warfare is becoming far more salient and dramatic than ever before. Rapid progress in driverless cars in the civilian economy has helped us all see what may become possible in the realm of conflict. All of a sudden, it seems, terminators are no longer the stuff of exotic and entertaining science-fiction movies, but a real possibility in the minds of some. Innovator Elon Musk warns that we need to start thinking about how to regulate AI before it destroys most human jobs and raises the risk of war.

It is good that we start to think this way. Policy schools need to start making AI a central part of their curriculums; ethicists and others need to debate the pros and cons of various hypothetical inventions before the hypothetical becomes real; military establishments need to develop innovation strategies that wrestle with the subject. However, we do not believe that AI can or should be stopped dead in its tracks now; for the next stage of progress, at least, the United States must rededicate itself to being the first in this field.

First, a bit of perspective. AI is of course not entirely new. Remotely piloted vehicles may not really qualify; after all, they are humanly, if remotely, piloted. But cruise missiles already fly to an aimpoint and detonate their warheads automatically. So would nuclear warheads on ballistic missiles, if God forbid nuclear-tipped ICBMs or SLBMs were ever launched in combat. Semi-autonomous systems are already in use on the battlefield, like the U.S. Navy Phalanx Close-In Weapons System, which is capable of "autonomously performing its own search, detect, evaluation, track, engage, and kill assessment functions," according to the official Defense Department description, along with various other fire-and-forget missile systems.

But what is coming are technologies that can learn on the job: not simply follow prepared plans or detailed algorithms for detecting targets, but develop their own information and their own guidelines for action based on conditions they encounter that were not specifically foreseeable in advance.

A case in point is what our colleague at Brookings, retired Gen. John Allen, calls "hyperwar." He develops the idea in a new article in the journal Proceedings, coauthored with Amir Husain. They imagine swarms of self-propelled munitions that, in attacking a given target, deduce patterns of behavior of the target's defenses and find ways to circumvent them, aware all along of the capabilities and coordinates of their teammates in the attack (the other self-propelled munitions). This is indeed about the place where the word robotics seems no longer to do justice to what is happening, since that term implies a largely prescripted process or series of actions. What happens in hyperwar is not only fundamentally adaptive, but also so fast that it far supersedes what could be accomplished by any weapons system with humans in the loop. Other authors, such as former Brookings scholar Peter Singer, have written about related technologies in a partly fictional sense. Now, Allen and Husain are not just seeing into the future, but laying out a near-term agenda for defense innovation.

The United States needs to move expeditiously down this path. People have reasons to fear fully autonomous weaponry, but if a Terminator-like entity is what they are thinking of, their worries are premature. That software technology is still decades away, at the earliest, along with the required hardware. However, what will be available sooner is technology that will be able to decide what or who is a target (based on the specific rules laid out by the programmer of the software, which could be highly conservative and restrictive) and fire upon that target without any human input.

To see why outright bans on AI activities would not make sense, consider a simple analogy. Despite many states having signed the Non-Proliferation Treaty, which aims to prevent the further spread of nuclear weapons, the treaty has not prevented North Korea from building a nuclear arsenal. But at least we have our own nuclear arsenal with which we can attempt to deter other such countries, a tactic that has been generally successful to date. A preemptive ban on AI development would not be in the United States' best interest because non-state actors and noncompliant states could still develop it, leaving the United States and its allies behind. The ban would not be verifiable and it could therefore amount to unilateral disarmament. If Western countries decided to ban fully autonomous weaponry and a North Korea fielded it in battle, it would create a highly fraught and dangerous situation.

To be sure, we need the debate about AI's longer-term future, and we need it now. But we also need the next generation of autonomous systems, and America has a strong interest in getting them first.

Michael O'Hanlon is a senior fellow at the Brookings Institution. Robert Karlen is a student at the University of Washington and an intern in the Center for Twenty-First Century Security and Intelligence at the Brookings Institution.

Image: Reuters

Read more:

America Can't Afford to Lose the Artificial Intelligence War - The National Interest Online

Merging big data and AI is the next step – TNW

AI is one of the hottest trends in tech at the moment, but what happens when it's merged with another fashionable and extremely promising technology?

Researchers are looking for ways to take big data to the next level by combining it with AI. We've just recently realized how powerful big data can be, and by uniting it with AI, big data is swiftly marching toward a level of maturity that promises a bigger, industry-wide disruption.

The application of artificial intelligence to big data is arguably the most important modern breakthrough of our time. It redefines how businesses create value with the help of data. The availability of big data has fostered unprecedented breakthroughs in machine learning that could not have been possible before.

With access to large volumes of data, businesses are now able to derive meaningful insights and come up with amazing results. It is no wonder, then, that businesses are quickly moving from a hypothesis-based research approach to a more focused "data-first" strategy.

Businesses can now process massive volumes of data, which was not possible before due to technical limitations. Previously, they had to buy powerful and expensive hardware and software. The widespread availability of data is the most important paradigm shift that has fostered a culture of innovation in the industry.

The availability of massive datasets has corresponded with remarkable breakthroughs in machine learning, mainly due to the emergence of better, more sophisticated AI algorithms.

The best example of these breakthroughs is virtual agents. Virtual agents (more commonly known as chatbots) have gained impressive traction over the course of time. Previously, chatbots had trouble identifying certain phrases or regional accents, dialects or nuances.

In fact, most chatbots get stumped by the simplest of words and expressions, such as mistaking "queue" for "Q" and so on. With the union of big data and AI, however, we can see new breakthroughs in the way virtual agents can self-learn.

A good example of self-learning virtual agents is Amelia, a cognitive agent recently developed by IPSoft. Amelia can understand everyday language, learns really fast and even gets smarter with time!

She is deployed at the help desk of Nordic bank SEB along with a number of public sector agencies. The reaction of executive teams to Amelia has been overwhelmingly positive.

Google is also delving deeper into big data-powered AI learning. DeepMind, Google's very own artificial intelligence company, has developed an AI that can teach itself to walk, run, jump and climb without any prior guidance. The AI was never taught what walking or running is, but managed to learn it by itself through trial and error.

The implications of these breakthroughs in the realm of artificial intelligence are astounding and could provide the foundation for further innovations in the times to come. However, there are dire repercussions of self-learning algorithms too and, if you weren't too busy to notice, you may have observed quite a few in the past.

Not long ago, Microsoft introduced its own artificial intelligence chatbot named Tay. The bot was made available to the public for chatting and could learn through human interactions. However, Microsoft pulled the plug on the project only a day after the bot was introduced to Twitter.

Learning at an exponential rate mainly through human interactions, Tay transformed from an innocent AI teen girl into an evil, Hitler-loving, incestuous, sex-promoting, "Bush did 9/11"-proclaiming robot in less than 24 hours.

Some fans of sci-fi movies like Terminator also voice concerns that with the access it has to big data, artificial intelligence may become self-aware and that it may initiate massive cyberattacks or even take over the world. More realistically speaking, it may replace human jobs.

Looking at the rate of AI-learning, we can understand why a lot of people around the world are concerned with self-learning AI and the access it enjoys to big data. Whatever the case, the prospects are both intriguing and terrifying.

There is no telling how the world will react to the amalgamation of big data and artificial intelligence. However, like everything else, it has its virtues and vices. For example, it is true that self-learning AI will herald a new age where chatbots become more efficient and sophisticated in answering user queries.

Perhaps we would eventually see AI bots on help desks in banks, waiting to greet us. And, through self-learning, the bot will have all the knowledge it could ever need to answer all our queries in a manner unlike any human assistant.

Whatever the applications, we can surely say that combining big data with artificial intelligence will herald an age of new possibilities and astounding new breakthroughs and innovations in technology. Let's just hope that the virtues of this union will outweigh the vices.


Visit link:

Merging big data and AI is the next step - TNW

Artificial Intelligence Gets a Lot Smarter – Barron’s

Aug. 18, 2017 11:46 p.m. ET

Artificial Intelligence has become a hot topic based on the achievements of the biggest computing giants: Alphabet (GOOGL), Microsoft (MSFT), Facebook (FB), and others.

But technology keeps evolving and it is time for AI to take the next step in its evolution, becoming more accessible to businesses large and small.

That charge is being led by Veritone (ticker: VERI), which went public in May, and which claims to offer a platform that can be used to run AI tasks in a variety of areas, including understanding natural language and analyzing video, for clients from law firms to media companies.

Competing with Veritone is the 800-lb. gorilla of cloud computing, Amazon.com (AMZN), whose Amazon Web Services, or AWS, already claims to be the largest outlet for AI. Amazon provides AI capability on top of the raft of other services it offers clients in the cloud.

Industry veterans smell the opportunity in this new wave of AI. Billionaire software pioneer Tom Siebel has a new venture that he says is doing some key work applying AI for clients in industries from energy to manufacturing to health care.

"This vector of AI is unstoppable," Siebel tells Barron's.

WHEN IT FIRST EMERGED on the scene, AI was strictly the province of large companies such as Alphabet's (GOOGL) Google, as this magazine reported in 2015 ("Watch Out Intel, Here Comes Facebook," October 31, 2015). That is because AI has typically required both massive computing facilities and access to vast amounts of data, which only the largest companies could provide.

More and more, though, it's becoming accessible to smaller companies. Amazon's director of deep learning, Matt Wood, is an M.D. and Ph.D. who worked on the Human Genome Project and is now focused on the many customers doing various AI tasks on Amazon's AWS, from furniture maker Herman Miller to Kelley Blue Book to the American Heart Association. "In the last five years [AI] has gone from being a really academic preoccupation to addressing customer needs," he says.

Still, Veritone founder and CEO Chad Steelberg believes he's discerned a weakness in Amazon and Google's approach to AI that is an opportunity for his company.

ARTIFICIAL INTELLIGENCE TODAY means having a computer pore through reams of data looking for patterns. The machine on its own learns the rules of natural language grammar, say, or the difference between a picture of a cat on the internet versus that of a person or a car.

Steelberg, 46, says the problem is that no single algorithm used by a machine can be trained to sufficient accuracy. But train enough engines to work together and "you can get high levels of accuracy," he says, noting that Veritone has spent time striking deals with researchers who each have an algorithm or a collection of algorithms, which he refers to as engines.

"Google has one engine," he says; "we have 30 of them just for natural language processing," and 77 overall if you count engines it has acquired to work on other types of machine-learning problems. His company is attempting to acquire more, across various domains of expertise.

Although no one Veritone client necessarily has massive amounts of data on its own, Steelberg expects to make up for that by pooling insights across his company's customer base.
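For illustration only, the "many engines" idea can be pictured as a simple ensemble in which several imperfect models label the same input and the answer most of them agree on wins. This is a minimal sketch of the general technique, not Veritone's actual platform; the engine outputs below are invented.

```python
from collections import Counter

# A minimal ensemble-voting sketch (not Veritone's implementation): each
# "engine" labels the same input, and the answer most engines agree on wins.
def ensemble_vote(predictions):
    """predictions: one label per engine; returns the majority label."""
    label, _count = Counter(predictions).most_common(1)[0]
    return label

# Hypothetical outputs from three transcription engines for one audio clip:
engine_outputs = ["meeting at noon", "meeting at noon", "meeting at moon"]
print(ensemble_vote(engine_outputs))  # -> "meeting at noon"
```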

For all the promise, Veritone's shares are down a whopping 40% since their debut.

Steelberg, who started programming in the fourth grade, sold his last company, radio advertising outfit dMarc, to Google for $1.24 billion in 2006. He's undaunted by the lackluster reception. "The biggest challenge we have," he says, "is that we are three years ahead of the average investor in terms of understanding the opportunity."

OLD HANDS IN SOFTWARE are just as enthusiastic. Siebel, 64, who sold his Siebel Systems to Oracle for nearly $6 billion in 2006, is solving another aspect of the AI problem: getting access to data.

His new company, C3 IoT, which is still private, is doing work for one of the largest electric utilities in the world, Italy's Enel (ENEL.Italy), observing patterns in tens of millions of electric meters throughout Europe. C3 IoT hopes to help Enel discover how much of its electricity is being used versus how much is being paid for, to ferret out fraud.

There are numerous uses of machine learning across many industries. C3 is also working with Deere (DE) to reduce inventory. The heavy-equipment maker, for instance, keeps some $4 billion worth of parts on hand for use in manufacturing. Using AI, it's possible to get a better sense of how much is practically needed and reduce Deere's working capital costs.

The big picture to Siebel is that more sources of data are becoming available because sensors with network connections are being attached to every part of the electrical grid, and to other forms of infrastructure. These sensors are popularly known as the Internet of Things, a kind of second Internet that connects machines rather than people on their computers and smartphones. The Enel project is the largest such IoT project in the world, claims Siebel, connecting 42 million sensors.

In the future, he says, "All problems will be problems of IoT." And the winners will be the companies that have enough sensors in place to generate the data needed to solve complex business problems.

TIERNAN RAY can be reached at: tiernan.ray@barrons.com, http://www.blogs.barrons.com/techtraderdaily or @barronstechblog


See the rest here:

Artificial Intelligence Gets a Lot Smarter - Barron's

Study: Government Should Think Carefully About Those Big Plans for Artificial Intelligence – Government Technology

Government is always being asked to do more with less: less money, less staff, just all-around less. That makes the idea of artificial intelligence (AI) a pretty attractive row to hoe. If a piece of technology could reduce staff workload or walk citizens through a routine process or form, you could effectively multiply a workforce without ever actually adding new people.

But for every good idea, there are caveats, limitations, pitfalls and the desire to push the envelope. While innovating anything in tech is generally a good thing, when it comes to AI in government, there is a fine line to walk between improving a process and potentially making it more convoluted.

Outside of a few key government functions, a new white paper from the Harvard Ash Center for Democratic Governance and Innovation finds that AI could actually increase the burden of government and muddy up the functions it is so desperately trying to improve.

Hila Mehr, a Center for Technology and Democracy fellow, explained that there are five key government problems that AI might be able to assist with reasonably: resource allocation, large data sets, expert shortages, predictable scenarios, and procedural and diverse data.

And governments have already started moving into these areas. In Arkansas and North Carolina, chatbots are helping those states connect with their citizens through Facebook. In Utah and Mississippi, Amazon Alexa skills have been introduced to better connect constituents to the information and services they need.

Unlike Hollywood representations of AI in film, Mehr said, the real applications for artificial intelligence in a government organization are generally far from sexy. The administrative aspects of governing are where tools like this will excel.

Where it comes to things like expert shortages, she said she sees AI as a means to support existing staff. In a situation where doctors are struggling to meet the needs of all of their patients, AI could act as a research tool. The same is true of lawyers dealing with thousands of pages of case law. AI could be used as a research assistant.

"If you're talking about government offices that are limited in staff and experts," Mehr said, "that's where AI trained on niche issues could come in."

But, she warned, AI is not without its problems, namely making sure that it is not furthering human bias written in during the programming process and played out through the data it is fed. Rather than rely on AI to make critical decisions, she argues that any algorithms and decisions made for or as a result of AI should retain a human component.

"We can't rely on them to make decisions, so we need that check. The way we have checks in our democracy, we need to have checks on these systems as well, and that's where the human group or panel of individuals comes in," Mehr said. "The way that these systems are trained, you can't always know why they are making the decision they are making, which is why it's important to not let that be the final decision, because it can be a black box depending on how it is trained, and you want to make sure that it is not running on its own."
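One way to picture the kind of check Mehr describes is a confidence gate in front of any automated decision. The sketch below is purely illustrative; the threshold, field names and routing policy are assumptions, not anything prescribed by the white paper.

```python
# A minimal human-in-the-loop sketch (threshold and names are illustrative):
# the algorithm's recommendation is applied automatically only when its
# confidence is high; every other case is routed to a human reviewer, so
# the system never runs entirely on its own.
CONFIDENCE_THRESHOLD = 0.9  # assumed policy value

def route(case_id, recommendation, confidence):
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"case": case_id, "action": recommendation, "review": "spot-check"}
    return {"case": case_id, "action": "hold", "review": "human panel"}

print(route("benefit-claim-001", "approve", 0.95))  # applied, spot-checked later
print(route("benefit-claim-002", "deny", 0.60))     # held for human review
```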

But past the fear that the technology might disproportionately impact certain citizens or might somehow complicate the larger process, there is the somewhat legitimate fear that the implementation of AI will mean lost jobs. Mehr said it's a thought that even she has had.

"On the employee side, I think a lot of people view this, rightly so, as something that could replace them," she added. "I worry about that in my own career, but I know that it is even worse for people who might have administrative roles. But I think early studies have shown that you're using AI to help people in their work so that they are spending less time doing repetitive tasks and more time doing the actual work that requires a human touch."

In both her white paper and on the phone, Mehr is careful to advise against going whole hog into AI with the expectation that it can replace costly personnel. Instead she advocates for the technology as a tool to build and supplement the team that already exists.

As for where the technology could run afoul of human jobs, Mehr advises that government organizations and businesses alike start considering labor practices in advance.

"Inevitably, it will replace some jobs," she said. "People need to be looking at fair labor practices now, so that they can anticipate these changes to the market and be prepared for them."

With any blossoming technology, there are barriers to entry and hurdles that must be overcome before a useful tool is in the hands of those best fit to use it. And as with anything, money and resources present a significant challenge, but Mehr said large amounts of data are also needed to get AI, especially learning systems, off the ground successfully.

"If you are talking about simple automation or [answering] a basic set of questions, it shouldn't take that long. If you are talking about really training an AI system with machine learning, you need a big data set, a very big data set, and you need to train it, not just feed the system data and then it's ready to go," she said. "The biggest barriers are time and resources, both in the sense of data and trained individuals to do that work."

Link:

Study: Government Should Think Carefully About Those Big Plans for Artificial Intelligence - Government Technology

Artificial intelligence is coming to medicine – don't be afraid – STAT

Automation could replace one-third of U.S. jobs within 15 years. Oxford and Yale experts recently predicted that artificial intelligence could outperform humans in a variety of tasks by 2045, ranging from writing novels to performing surgery and driving vehicles. A little human rage would be a natural response to such unsettling news.

Artificial intelligence (AI) is bringing us to the precipice of an enormous societal shift. We are collectively worrying about what it will mean for people. As a doctor, I'm naturally drawn to thinking about AI's impact on the practice of medicine. I've decided to welcome the coming revolution, believing that it offers a wonderful opportunity for increases in productivity that will transform health care to benefit everyone.

Groundbreaking AI models have bested humans in complex reasoning games, like the recent victory of Google's AlphaGo AI over the human Go champ. What does that mean for medicine?


To date, most AI solutions have solved minor human issues: playing a game or helping order a box of detergent. The innovations need to matter more. The true breakthroughs and potential of AI lie in real advancements in human productivity. A McKinsey Global Institute report suggests that AI is helping us approach an unparalleled expansion in productivity that will yield five times the increase introduced by the steam engine and about 1.5 times the improvements we've seen from robotics and computers combined. We simply don't have a mental model to comprehend the potential of AI.

Across all industries, an estimated 60 percent of jobs will have 30 percent of their activities automated; about 5 percent of jobs will be 100 percent automated.

What this means for health care is murky right now. Does that 5 percent include doctors? After all, medicine is a series of data points of a knowable nature with clear treatment pathways that could be automated. That premise, though, fantastically overstates and misjudges the capabilities of AI and dangerously oversimplifies the complexity underpinning what physicians do. Realistically, AI will perform many discrete tasks better than humans can which, in turn, will free physicians to focus on accomplishing higher-order tasks.

If you break down the patient-physician interaction, its complexity is immediately obvious. Requirements include empathy, information management, application of expertise in a given context, negotiation with multiple stakeholders, and unpredictable physical response (think of surgery), often with a life on the line. These are not AI-applicable functions.

I mentioned AlphaGo AI beating human experts at the game. The reason this feat was so impressive is the high branching factor and complexity of the Go game tree: there are an estimated 250 choices per move, permitting estimates of 10 to the 170th power different game outcomes. By comparison, chess has a branching factor of 35, with 10 to the 47th power different possible game outcomes. Medicine, with its infinite number of moves and outcomes, is decades away from medical approaches safely managed by machines alone.
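As a rough aid to the arithmetic in that comparison, a game tree with branching factor b and depth d has on the order of b to the d power leaves, so its size grows like 10 raised to d times the base-10 logarithm of b. The sketch below simply inverts that relationship for the figures quoted above; it is a back-of-envelope illustration, not part of the original article.

```python
import math

# Back-of-envelope sketch of the branching-factor arithmetic: a game tree
# with branching factor b and depth d has roughly b**d leaves, i.e. about
# 10**(d * log10(b)) distinct continuations.
def depth_for_exponent(branching_factor, exponent):
    """How many moves it takes for b**d to reach roughly 10**exponent."""
    return exponent / math.log10(branching_factor)

# Using the article's figures: a branching factor of 250 reaches ~10^170
# outcomes after about 71 moves, while chess's factor of 35 reaches ~10^47
# after about 30 moves.
print(round(depth_for_exponent(250, 170)))  # ~71
print(round(depth_for_exponent(35, 47)))    # ~30
```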

We still need the human factor.

That said, more than 20 percent of a physician's time is now spent entering data. Since doctors are increasingly overburdened with clerical tasks like electronic health record entry, prior authorizations, and claims management, they have less time to practice medicine, do research, master new technology, and improve their skills. We need a radical enhancement in productivity just to sustain our current health standards, much less move forward. Thoughtfully combining human expertise and automated functionality creates an augmented physician model that scales and advances the expertise of the doctor.

Physicians would rather practice at the top of their licensing and address complex patient interaction than waste time entering data, faxing (yes, faxing!) service authorizations, or tapping away behind a computer. The clerical burdens pushed by fickle health care systems onto physicians and other care providers are both unsustainable and a waste of our best and brightest minds. It's the equivalent of asking an airline pilot to manage the ticket counter, count the passengers, handle the standby and upgrade lists, and give the safety demonstrations, and then fly the plane. AI can help with such support functions.

But to radically advance health care productivity, physicians must work alongside innovators to atomize the tasks of their work. Understanding where they can let go to unlock time is essential, as is collaborating with technologists to guide truly useful development.

Perhaps it makes sense to start with automated interpretation of basic labs, dose adjustment for given medications, speech-to-text tools that simplify transcription or document face-to-face interactions, or even automated wound closure. And then move on from there.

It will be important for physicians and patients to engage and help define the evolution of automation in medicine in order to protect patient care. And physicians must be open to how new roles for them can be created by rapidly advancing technology.

If it all sounds a bit dreamy, I offer an instructive footnote about experimentation with AlphaGo AI. The recent game summit proving AlphaGo's prowess also demonstrated that human talent increases significantly when paired with AI. This hybrid model of humans and machines working together presents a scalable automation paradigm for medicine, one that creates new tasks and roles for essential medical and technology professionals, increasing the capabilities of the entire field as we move forward.

Physicians should embrace this opportunity rather than fear it. It's time to rage with the machine.

Jack Stockert, M.D., is a managing director and leader of strategy and business development at Health2047, a Silicon Valley-based innovation company.


More:

Artificial intelligence is coming to medicine – don't be afraid - STAT

The importance of building ethics into artificial intelligence – Mashable

Image: Shutterstock / Willyam Bradberry

By Kriti Sharma | 2017-08-18 12:48:17 UTC

Elon Musk recently said that the threat of Artificial Intelligence is more dangerous than that of North Korea's nuclear ambitions.

While I don't pretend to be a foreign policy expert, I'm confident that Musk's commentary oversimplifies things, at the very least. And that AI, when defined, built, cultivated and deployed with the right human oversight, has the potential to do significantly more good for the world than harm.

In order to ensure Musk's comments stay in the realm of the extreme, though, the AI-focused technology community needs to collectively figure out some basic guide rails.

Understand ethical AI and its role in the future of work

A crucial step toward building a secure and thriving AI industry is collectively defining what ethical AI means for people developing the technology and people using it.

At Sage, we define ethical AI as the creation of intelligent machines that work and react like humans, built with the ability to autonomously conduct, support or manage business activity across disciplines in a responsible and accountable way.

At its core, AI is the creation of intelligent machines that think, work and learn like humans. AI should not be a replacement for standard business rules or procedures.

That's why we believe all AI-driven technology used in the workplace should embody and advance the interests of an individual company, its staff and its consumer base.

Recruit talent that understands AI and its power to address workplace challenges

Companies deal with team changes regularly. Issues arise tied to trust, accountability and personnel behavior that goes against the values of a company or society, in general. In the tech industry alone, sexism, racial bias and other serious, but eradicable trends persist from the C-suite down to the entry-level.

Consequently, the industry should focus on efforts to develop and grow a diverse talent pool that can build AI technologies to enhance business operations and address specific sets of workplace issues, while ensuring that it is accountable.

Employers need to recruit people who understand the importance of applying strict human resources guidelines to AI performing tasks alongside human employees across industries and geographies. AI, for its part, needs to learn how to conduct itself in a work environment and be rewarded for expected behavior to reinforce good habits.

Hopefully, AI's human co-workers, including the people actually building the technology, will learn vital AI management skills, adopt strong ethics and hold themselves more accountable in the process.

Develop AI that runs on data reflecting the diversity of its users

Humans possess inherent social, economic and cultural biases. It's unfortunately core to social fabrics around the world. Therefore, AI offers a chance for the business community to eliminate such biases from their global operations.

The onus is on the tech community to build technology that utilizes data from relevant, trusted sources to embrace a diversity of culture, knowledge, opinions, skills and interactions.

Indeed, AI operating in the business world today performs repetitive tasks well, learns on the job and even incorporates human social norms into its work. However, AI also spends a significant amount of time scouring the web and its own conversational history for additional context that will inform future interactions with human counterparts.

This prevalence of well-trodden data sets and partial information on the internet presents a challenge and an opportunity for AI developers. When built with responsible business and social practices in mind, AI technology has the potential to consistently and ethically deliver products and services to people who need them. And do so without the omnipresent human threat of bias.

Ultimately, we need to create innately diverse AI. As an industry-focused tech community, we must develop effective mechanisms to filter out biases, as well as any negative sentiment in the data that AI learns from to ensure the technology does not perpetuate stereotypes. Unless we build AI using diverse teams, datasets and design, we risk repeating the fundamental inequality of previous industrial revolutions.

Through it all, it's important to remember that the respective roles of humans and ethics in AI development are crucial. In fact, I think the shared future of AI and humans depends on them.

Kriti Sharma is the vice president of bots and AI at Sage Group, a global integrated accounting, payroll and payment systems provider. She is also the creator of Pegg, the world's first AI assistant for accounting, with users in 135 countries.

Read the original here:

The importance of building ethics into artificial intelligence - Mashable

Google’s New Site Uses Artificial Intelligence to Track Hate Crimes – Fortune

Many observers and commentators were shocked by the violence that erupted at a white supremacist rally in Charlottesville, Virginia, last weekend. But "incidents of hate are actually all too common" in the U.S., according to the journalism nonprofit ProPublica, which has launched a tool to help Americans better understand that reality.

The Documenting Hate News Index, built in partnership with Google's News Lab and the data visualization studio Pitch Interactive, collects news reports on hate incidents and makes them searchable by name, topic, and date.

The site is certainly eye-opening, and grim.

More than just a list, the site allows hate-related stories to be browsed by date, and shows fluctuations in overall reports of hate crimes over time. Astonishingly, while the violence in Charlottesville captured headlines, last weekend was not a peak for U.S. hate crimes; a much broader wave crested in late May, when crimes included two fatalities in an anti-Muslim attack in Portland, a teacher ripping off a young student's hijab, and the killing of a young black Army lieutenant by a white supremacist.


According to Google News Lab's announcement, the site uses machine learning, specifically Google's Natural Language API, to understand both the content of news reports about hate crimes and subtler things like intent and sentiment. That means it can detect stories about events "suggestive of hate crime, bias, or abuse" and track the frequency of particular names, places, and more general keywords like "businessman" and "nationalists."
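As a rough illustration of what that kind of call looks like, the sketch below runs a single news snippet through Google's Natural Language API for entities and sentiment. It assumes the google-cloud-language Python client and is not ProPublica's actual pipeline; the snippet text and function name are invented.

```python
# A minimal sketch, assuming the google-cloud-language Python client
# (v2-style request dicts); not ProPublica's actual pipeline.
from google.cloud import language_v1

def analyze_report(text):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    # Document-level sentiment (negative scores suggest negative tone).
    sentiment = client.analyze_sentiment(
        request={"document": document}).document_sentiment
    # Named entities: people, places, organizations mentioned in the report.
    entities = client.analyze_entities(
        request={"document": document}).entities
    return sentiment.score, [entity.name for entity in entities]

score, names = analyze_report(
    "Police are investigating a suspected hate crime at a local mosque.")
```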

Currently, "Donald Trump" is the highest-ranking keyword associated with incidents of hate.

The project is important because, according to ProPublica's larger Documenting Hate project, there is no reliable national database of hate crimes. The Index is primarily intended to help journalists, researchers, and civil rights organizations get a broader view on the national situation.

Read the rest here:

Google's New Site Uses Artificial Intelligence to Track Hate Crimes - Fortune

Exploring the human side of artificial intelligence at PHLAI – Technical.ly Philly

It may be time to stop talking about machine learning and artificial intelligence as technologies of the future.

On Tuesday, Aug. 15, engineers and developers from across the region shared how these technologies are at work today, dramatically transforming products, operations, business processes and, most importantly, the customer experience.

Sponsored by Comcast, PHLAI brought together hundreds of technologists for a full day of tech talks and networking focused on machine learning and AI (here's our preview piece from July). More than 20 speakers filled three tracks on operations, product and business intelligence, with subjects ranging from using deep learning to prevent human trafficking; to the technology behind Pinterest recommendations; to Comcast's own X1 Voice Remote and the machine learning that powers our homegrown Natural Language Processing platform.

While the applications for these technologies span the widest possible range of uses, the unifying theme throughout the PHLAI talks was that artificial intelligence and machine learning have already deeply transformed the way we do business and deliver service to customers, and that transformation is only accelerating.

Comcast Chief Product Officer Chris Satchell kicked off the day sharing some lessons from his early background developing AI for video games, including the critical importance of factoring customer behavior and attitudes into decisions made in the development process. It's fine for AI systems to be complex, Satchell said, but if a customer can't build a mental model of what's happening in the background, they tend to dismiss it as random.

Dave Ward, CTO of Engineering and Chief Architect at Cisco, painted a compelling picture of how machine learning and AI applications aren't just growing but also converging, as the boundaries between traditional AI domains like security, automation, Internet of Things and marketing begin to dissolve. Machine learning and AI aren't magical algorithms, Ward said, but the reality may be much more exciting, as these technologies deliver real customer value and become increasingly mainstream.

Another key takeaway from the event was the importance of human intervention at every step of the AI and machine learning implementation process. Even mature AI and machine learning systems will learn bad behaviors (false positives, etc.) and must be properly operated to yield the best result. Things that are hard, or even impossible, for the human mind can be trivial for AI, but the reverse is also true. The best AI outcomes come from people and machines working together.

One thing that was on full display at PHLAI was the vast breadth of engineering leadership coming out of Philadelphia. While participants and speakers came from as far away as California, the vast majority were based here.

PHLAI is the third in a series of one-day technical conferences that we kicked off in January with Scala by the Schuylkill. Response to these events has been incredible. Our final event of the year, which focuses on the craft of software development, is already in the works, and we're planning another full slate of technical conferences right here throughout 2018.

In just these three events, we confirmed something we already knew: the technology and engineering community in Philadelphia is amazing, diverse and growing, and that we are capable of great things when we come together.

We're looking forward to many more events to come. Keep an eye on the Comcast Labs blog for news about upcoming conferences and how to register.

Jeanine Heck serves as Executive Director in the Technology and Product organization of Comcast Cable. In this role, Heck brings artificial intelligence into XFINITY products. She was the founding product manager for the X1 voice remote, has led the launch of a TV search engine, and managed the company's first TV recommendations engine.

See the original post:

Exploring the human side of artificial intelligence at PHLAI - Technical.ly Philly

Instagram CEO Kevin Systrom on Free Speech, Artificial Intelligence, and Internet Addiction. – WIRED


I sat down with Kevin Systrom, the CEO of Instagram, in June to interview him for my feature story, "Instagram's CEO Wants to Clean Up the Internet," and for "Is Instagram Going Too Far to Protect Our Feelings," a special that ran on CBS this week.

It was a long conversation, but here is a 20-minute overview in which Systrom talks about the artificial intelligence Instagram has been developing to filter out toxic comments before you even see them. He also discusses free speech, the possibility of Instagram becoming too bland, and whether the platform can be considered addictive. Our conversation occurred shortly before Instagram introduced the AI to the public.

A transcript of the conversation follows.

Nicholas Thompson, Editor-in-Chief: Morning, Kevin

Kevin Systrom, CEO of Instagram: Morning! How are you?

NT: Doing great. So what I want to do in this story is I want to get into the specifics of the new product launch and the new things you're doing and the stuff that's coming out right now and the machine learning. But I also want to tie it to a broader story about Instagram, and how you decided to prioritize niceness and how it became such a big thing for you and how you reoriented the whole company. So I'm gonna ask you some questions about the specific products and then some bigger questions.

KS: I'm down.

NT: All right, so let's start at the beginning. I know that from the very beginning you cared a lot about comments. You cared a lot about niceness and, in fact, you and your co-founder Mike Krieger would go in early on and delete comments yourself. Tell me about that.

KS: Yeah. Not only would we delete comments but we did the unthinkable: We actually removed accounts that were being not so nice to people.

NT: So for example, whom?

KS: Yeah, well, I don't remember exactly whom, but the back story is my wife is one of the nicest people you'll ever meet. And that bleeds over to me and I try to model it. So, when we were starting the app, we watched this video, basically how to start a company. And it was by this guy who started the LOLCats meme, and he basically said, to form a community you need to do something, and he called it "prune the trolls." And Nicole would always joke with me, she's like, "Hey listen, when your community is getting rough, you gotta prune the trolls." And that's something she still says to me today to remind me of the importance of community, but also how important it is to be nice. So back in the day we would go in and if people were mistreating people, we'd just remove their accounts. I think that set an early tone for the community to be nice and be welcoming.

NT: But what's interesting is that this is 2010, and 2010 is a moment where a lot of people are talking about free speech and the internet, and Twitter's role in the Iranian revolution. So it was a moment where free speech was actually valued on the internet, probably more than it is now. How did you end up being more in the "prune the trolls" camp?

KS: Well, there's an age-old debate about free speech: what is the limit of free speech, and is it free speech to just be mean to someone? And I think if you look at the history of the law around free speech, you'll find that generally there's a line you don't want to cross, because you're starting to be aggressive or be mean or racist. And you get to a point where you wanna make sure that in a closed community that's trying to grow and thrive, you make sure that you actually optimize for overall free speech. So if I don't feel like I can be myself, if I don't feel like I can express myself, because if I do that I will get attacked, that's not a community we want to create. So we just decided to be on the side of making sure that we optimized for speech that was expressive and felt like you had the freedom to be yourself.

NT: So, one of the foundational decisions at Instagram that helped make it nicer than some of your peers, was the decision to not allow re-sharing, and to not allow something that I put out there to be kind of appropriated by someone else and sent out into the world by someone else. How was that decision made and were there other foundational design and product decisions that were made because of niceness?

KS: We debate the re-share thing a lot. Because obviously people love the idea of re-sharing content that they find. Instagram is full of awesome stuff. In fact, one of the main ways people communicate over Instagram Direct now is actually they share content that they find on Instagram. So that's been a debate over and over again. But really that decision is about keeping your feed focused on the people you know rather than the people you know finding other stuff for you to see. And I think that is more of a testament of our focus on authenticity and on the connections you actually have than about anything else.

NT: So after you went to VidCon, you posted an image on your Instagram feed of you and a bunch of celebrities

KS: Totally, in fact it was a Boomerang.

NT: It was a Boomerang, right! So I'm going to read some of the comments on @kevin's post.

KS: Sure.

NT: These are the comments: "Succ," "Succ," "Succ me," "Succ," "Can you make Instagram have auto-scroll feature? That would be awesome and expand Instagram as a app that could grow even more," "#memelivesmatter," "you succ," "you can delete memes but not cancer patients," "I love #memelivesmatter," "#allmemesmatter," "succ," "#MLM," "#memerevolution," "cuck," "mem," "#stopthememegenocide," "#makeinstagramgreatagain," "#memelivesmatter," "#memelivesmatter," "mmm," "gang," "melon gang." I'm not quite sure what all this means. Is this typical?

KS: It was typical, but I'd encourage you to go to my last post, which I posted for Father's Day.

NT: Your last post is all nice!

KS: It's all nice.

NT: They're all about how handsome your father is.

KS: Right? Listen, he is taken. My mom is wonderful. But there are a lot of really wonderful comments there.

NT: So why is this post from a year ago full of "cuck" and "#memelivesmatter" and the most recent post is full of how handsome Kevin Systrom's dad is?

KS: Well, that's a good question. I would love to be able to explain it, but the first thing I think is back then there were a bunch of people who I think were unhappy about the way Instagram was managing accounts. And there are groups of people that like to get together and band up and bully people, but it's a good example of how someone can get bullied, right. The good news is I run the company and I have a thick skin and I can deal with it. But imagine you're someone who's trying to express yourself about depression or anxiety or body image issues and you get that. Does that make you want to come back and post on the platform? And if you're seeing that, does that make you want to be open about those issues as well? No. So a year ago I think we had much more of a problem, but the focus over that year has been on comment filtering, so now you can go in and enter your own words that basically filter out comments that include that word. We have spam filtering that works pretty well, so probably a bunch of those would have been caught up in the spam filter that we have because they were repeated comments. And also just a general awareness of kind comments. We have this awesome campaign that we started called #kindcomments. I don't know if you know the late night show where they read off mean comments on another social platform; we started kind comments to basically set a standard in the community that it was better and cooler to actually leave kind comments. And now there is this amazing meme that has spread throughout Instagram about leaving kind comments. But you can see the marked difference between the post about Father's Day and that post a year ago on what technology can do to create a kinder community. And I think we're making progress, which is the important part.
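The user-defined word filter Systrom mentions can be pictured as nothing more than a per-account blocklist check. The sketch below is purely illustrative and is not Instagram's implementation; the word list and function name are invented.

```python
# A minimal sketch (not Instagram's implementation) of a per-account word
# filter: a comment containing any of the account's blocked words is hidden.
def is_hidden(comment, blocked_words):
    words = comment.lower().split()
    return any(word in blocked_words for word in words)

my_blocked_words = {"succ", "cuck"}                # hypothetical user settings
print(is_hidden("succ me", my_blocked_words))      # True: comment gets filtered
print(is_hidden("great photo", my_blocked_words))  # False: comment stays visible
```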

NT: Tell me about sort of steps one, two, three, four, five. How do you decide? You don't automatically launch the seventeen things you've launched since then. Tell me about the early conversations.

KS: The early conversations were really about what problem we were solving, and we looked to the community for stories. We talked to community members. We have a giant community team here at Instagram, which I think is pretty unique for technology companies. Literally, their job is to interface with the community and get feedback and highlight members who are doing amazing things on the platform. So getting that type of feedback from the community about what types of problems they were experiencing in their comments then led us to brainstorm about all the different things we could build. And what we realized was there was this giant wave of machine learning and artificial intelligence, and Facebook had developed this thing that, basically, it's called DeepText.

NT: Which launches in June of 2016, so it's right there.

KS: Yup, so they have this technology and we put two and two together and we said: You know what? I think if we get a bunch of people to look at comments and rate them good or bad (like you go on Pandora and you listen to a song: is it good or is it bad?), get a bunch of people to do that. That's your training set. And then what you do is you feed it to the machine learning system and you let it go through 80 percent of it, and then you hold out the other 20 percent of the comments. And then you say, "Okay, machine, go and rate these comments for us based on the training set," and then we see how well it does and we tweak it over time, and now we're at a point where basically this machine learning can detect a bad comment or a mean comment with amazing accuracy: basically a 1 percent false positive rate. So throughout that process of brainstorming, looking at the technology available, and then training this filter over time with real humans who are deciding this stuff, gathering feedback from our community and gathering feedback from our team about how it works, we're able to create something we're really proud of.
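For readers who want to see the shape of that workflow, here is a minimal sketch: human labels, an 80/20 split, training on one part and scoring on the held-out part. It is a toy text classifier built with scikit-learn, not Instagram's DeepText-based system, and the comments and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy comments with hypothetical human ratings (1 = bad comment, 0 = fine).
comments = [
    "love this photo", "you are awesome", "great shot", "so inspiring",
    "congrats to you", "you suck", "succ", "delete your account",
    "nobody likes you", "this is garbage",
]
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

# 80 percent for training, 20 percent held out for evaluation.
train_x, test_x, train_y, test_y = train_test_split(
    comments, labels, test_size=0.2, stratify=labels, random_state=0)

vectorizer = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectorizer.fit_transform(train_x), train_y)

# Accuracy on the 20 percent the model never saw during training.
print(classifier.score(vectorizer.transform(test_x), test_y))
```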

NT: So when you launch it you make a very important decision: Do you want it to be aggressive, in which case it'll probably knock out some stuff it shouldn't? Or do you want it to be a little less aggressive, in which case a lot of bad stuff will get through?

KS: Yeah, this is the classic problem. If you go for accuracy, you will misclassify a bunch of stuff that actually was pretty good. So you know, if you're my friend and I go on your photo and I'm just joking around with you and giving you a hard time, Instagram should let that through, because we're friends and I'm just giving you a hard time and that's a funny banter back and forth. Whereas if you don't know me and I come on and I make fun of your photo, that feels very different. Understanding the nuance between those two is super important, and the thing we don't want to do is have any instance where we block something that shouldn't be blocked. The reality is it's going to happen. So the question is, is that margin of error worth it for all the really bad stuff that gets blocked? And that's a fine balance to figure out. That's something we're working on. We trained the filter basically to have a one-percent false positive rate. So that means one percent of things that get marked as bad are actually good. And that was a top priority for us, because we're not here to curb free speech, we're not here to curb fun conversations between friends, but we want to make sure we are largely attacking the problem of bad comments on Instagram.

NT: And so you go, and every comment that goes in gets sort of run through an algorithm, and the algorithm gives it a score from 0 to 1 on whether it's likely a comment that should be filtered or a comment that should not be filtered, right? And then that score is combined with the relationship of the two people?

KS: No, the score actually is influenced based on the relationship of the people

NT: So the original score is influenced by... and Instagram I believe, if I have this correct, has something like a karma score for every user, where the number of times they've been flagged or the number of critiques made of them is added into something on the back end. Does that go into this too?

KS: So without getting into the magic sauce (you're asking, like, Coca-Cola to give up its recipe), I'm going to tell you that there's a lot of complicated stuff that goes into it. But basically it looks at the words, it looks at our relationship, and it looks at a bunch of other signals including account age, account history, and that kind of stuff. And it combines all those signals and then it spits out a score of 0 to 1 about how bad this comment is likely to be. And then basically you set a threshold that optimizes for a one-percent false-positive rate.
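The one-percent false-positive target Systrom describes amounts to choosing a cutoff on those 0-to-1 scores so that only about 1 percent of acceptable comments land above it. The sketch below shows one simple way to pick such a cutoff from held-out scores; it is an illustrative toy, not Instagram's system, and the data is synthetic.

```python
import numpy as np

# A minimal sketch (not Instagram's system) of picking a decision threshold
# so that roughly 1% of acceptable comments are misclassified as bad.
# `scores` are model outputs in [0, 1]; `labels` are held-out human ratings
# (1 = bad comment, 0 = acceptable).
def threshold_for_false_positive_rate(scores, labels, target_fpr=0.01):
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    benign_scores = scores[labels == 0]
    # The (1 - target_fpr) quantile of benign scores: only about 1% of
    # acceptable comments score above this cutoff.
    return np.quantile(benign_scores, 1.0 - target_fpr)

# Synthetic usage example:
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=10_000)
scores = np.clip(0.7 * labels + 0.3 * rng.random(10_000), 0.0, 1.0)
cutoff = threshold_for_false_positive_rate(scores, labels)
flagged = scores >= cutoff  # comments at or above the cutoff get filtered
```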

NT: When do you decide it's ready to go?

KS: I think at a point where the accuracy gets to a point that internally we're happy with it. So one of the things we do here at Instagram is we do this thing called dogfooding (and not a lot of people know this term, but in the tech industry it means, you know, eat your own dog food). So what we do is we take the products and we always apply them to ourselves before we go out to the community. And there are these amazing groups on Instagram (and I would love to take you through them, but they're actually all confidential), but it's employees giving feedback about how they feel about specific features.

NT: So this is live on the phone to a bunch of Instagram employees right now?

KS: There are always features that are not launched that are live on Instagram employees' phones, including things like this.

NT: So there's a critique of a lot of the advances in machine learning, that the corpus on which it is based has biases built into it. So DeepText analyzed all Facebook comments, analyzed some massive corpus of words that people have typed into the internet. When you analyze those, you get certain biases built into them. So for example, I was reading a paper and someone had taken a corpus of text and created a machine learning algorithm to rank restaurants, and to look at the comments people had written under restaurants and then to try and guess the quality of the restaurants. He went through and he ran it, and he was like, "Interesting," because all of the Mexican restaurants were ranked badly. So why is that? Well it turns out, as he dug deeper into the algorithm, it's because in a massive corpus of text the word "Mexican" is associated with "illegal" ("illegal Mexican immigrant") because that is used so frequently. And so there are lots of slurs attached to the word "Mexican," so the word "Mexican" has negative connotations in the machine learning-based corpus, which then affects the restaurant rankings of Mexican restaurants.

KS: That sounds awful

NT: So how do you deal with that?

KS: Well, the good news is we're not in the business of ranking restaurants.

NT: But you are ranking sentences based on this huge corpus of text that Facebook has analyzed as part of DeepText

KS: It's a little bit more complicated than that. So all of our training comes from Instagram comments. So we have hundreds of raters, and it's actually pretty interesting what we've done with this set of raters: basically, human beings that sit there (and by the way, human beings are not unbiased, that's not what I'm claiming), but you have human beings. Each of those raters is bilingual. So they speak two languages, they have a diverse perspective, they're from all over the world. And they rank those comments, basically, thumbs up or thumbs down. Basically the Instagram corpus, right?

So you feed it a thumbs up, thumbs down based on an individual. And you might say, "But wait, isn't a single individual biased in some way?" Which is why we make sure every comment is actually seen twice and given a rating twice by at least two people, to make sure that there is as minimal an amount of bias in the system as possible. And then on top of that, we also gain feedback from not only our team but also the community, and then we're able to tweak things on the margin to make sure things like that don't happen. I'm not claiming that it won't happen (that's of course a risk), but the biggest risk of all is doing nothing because we're afraid of these things happening. And I think it's more important that we are A) aware of them, and B) monitoring them actively, and C) making sure we have a diverse group of raters that not only speak two languages but are from all over the world and represent different perspectives, to make sure we have an unbiased classifier.

NT: So let's take a sentence like "These hos ain't loyal," which is a phrase that I believe a previous study on Twitter had a lot of trouble with. Your theory is that some people will say, "Oh, that's a lyric, therefore it's okay," some people won't know it will get through, but enough raters looking at enough comments over time will allow lyrics to get through, and "These hoes ain't loyal," I can post that on your Instagram feed if you post a picture which deserves that comment.

KS: Well, I think what I would counter is, if you post that sentence to any person watching this, not a single one of them would say that's a mean-spirited comment to any of us, right?

NT: Right.

KS: So I think that's pretty easy to get to. I think if there are more nuanced examples, and I think that's the spirit of your question, which is that there are grey areas. The whole idea of machine learning is that it's far better about understanding those nuances than any algorithm has been in the past, or any single human being could be. And I think what we have to do over time is figure out how to get into that grey area, and judge the performance of this algorithm over time to see if it actually improves things. Because by the way, if it causes trouble and it doesn't work, we'll scrap it and start over with something new. But the whole idea here is that we're trying something. And I think a lot of the fears that you're bringing up are warranted, but that is exactly what keeps most companies from even trying in the first place.

NT: And so first youre going to launch this filtering bad comments, and then the second thing youre going to do is the elevation of positive comments. Tell me about how that is going to work and why thats a priority.

KS: The elevation of positive comments is more about modeling behavior in the system. We've seen, a bunch of times, this thing called the mimicry effect: if you raise kind comments, you actually see more kind comments, or more people giving kind comments. It's not that we ever ran this test, but I'm sure if you raised a bunch of mean comments you would see more mean comments. Part of this is the piling-on effect, and I think by modeling what great conversations look like, more people will see Instagram as a place for that and less for the bad stuff. It has this interesting psychological effect: people want to fit in and do what they're seeing, and that means people become more positive over time.

NT: And are you at all worried that you're going to turn Instagram into the equivalent of an East Coast liberal arts college?

KS: I think those of us who grew up on the East Coast might take offense at that. *laughs* I'm not sure what you mean exactly.

NT: I mean a place where there are trigger warnings everywhere, where people feel like they can't have certain opinions, where people feel like they can't say things. Where you put this sheen over all your conversations, as though everything in the world is rosy and the bad stuff is just going to be swept under the rug.

KS: Yeah, that would be bad. That's not something we want. In the range of bad, we're talking about the lowest five percent, the really, really bad stuff. I don't think we're trying to play anywhere in the area of grey, although I realize there's no black or white and we're going to have to play there at some level. But the idea here is to take out, I don't know, the bottom five percent of nasty stuff. And I don't think anyone would argue that that makes Instagram a rosy place; it just doesn't make it a hateful place.

NT: And you wouldn't want all of the comments on your... you know, on your VidCon post, it's a mix of jokes, and nastiness, and vapidity, and useful product feedback. You're getting rid of the nasty stuff, but wouldn't it be better if you raised the best product feedback and the funny jokes to the top?

KS: Maybe. And maybe that's a problem we'll decide to solve at some point. But right now we're just focused on making sure that people don't feel hate, you know? And I think that's a valid thing to go after, and I'm excited to do it.

NT: So the thing that interests me the most is that Instagram is like a world with 700 million people, and you're writing the constitution for that world. When you get up in the morning and you think about that power, that responsibility, how does it affect you?

KS: Doing nothing felt like the worst option in the world. Starting to tackle it means we can improve the world; we can improve the lives of the many young people who live on social media. I don't have kids yet; I will someday, and I hope that kid, boy or girl, grows up in a world where they feel safe online, where I as a parent feel like they're safe online. And you know the cheesy saying: with great power comes great responsibility. We take on that responsibility, and we're going to go after it. But that doesn't mean that not acting is the correct option. There are all sorts of issues that come with acting, and you've highlighted a number of them today, but that doesn't mean we shouldn't act. It just means we should be aware of them and should be monitoring them over time.

NT: One of the critiques is that Instagram, particularly for young people, is very addictive. In fact, there's a critique being made by Tristan Harris, who was a classmate of yours and of Mike's. He says that the design of Instagram deliberately addicts you. For example, when you open it up it just-

KS: Sorry, I'm laughing just because I think the idea that anyone in here tries to design something that is maliciously addictive is just so far-fetched. We try to solve problems for people, and if by solving those problems people like to use the product, I think we've done our job well. This is not a casino; we are not trying to eke money out of people in a malicious way. The idea of Instagram is that we create something that allows people to connect with their friends, and their family, and their interests, positive experiences, and I think any criticism of building that system is unfounded.

NT: So all of this is aimed at making Instagram better, and it sounds like the changes so far have made Instagram better. Is any of it aimed at making people better? Is there any chance that the changes that happen on Instagram will seep into the real world and maybe, just a little bit, the conversations in this country will be more positive than they've been?

KS: I sure hope we can stem some of the negativity in the world. I'm not sure we would sign up for that on day one. But I actually want to challenge the initial premise, which is that this is about making Instagram better. I think it's about making the internet better. I hope that someday the technology we develop, the training sets we develop, and the things we learn can be passed on to startups and to our peers in technology, and that together we build a kinder, safer, more inclusive community online.

NT: Will you open-source the software you've built for this?

KS: I'm not sure. I'm not sure. I think a lot of it comes back to how well it performs, and the willingness of our partners to adopt it.

NT: But what if this fails? What if people actually get kind of turned off by Instagram, and they say, "Instagram's becoming like Disneyland, I don't want to be there," and they share less?

KS: The thing I love about Silicon Valley is we've bear-hugged failure. Failure is what we all start with, go through, and hopefully don't end on, on our way to success. I mean, Instagram wasn't Instagram initially; it was a failed startup before. I turned down a bunch of job offers that would have been really awesome along the way. That was failure. I've had numerous product ideas at Instagram that were total failures. And that's okay. We bear-hug it, because when you fail, at least you're trying. I think that's actually what makes Silicon Valley different from traditional business: our tolerance for failure here is so much higher. And that's why you see bigger risks and also bigger payoffs.

Original post:

Instagram CEO Kevin Systrom on Free Speech, Artificial Intelligence, and Internet Addiction. - WIRED

Beyond the Hype in AI: Implementing and Maintaining Artificial Intelligence FTW – Bloomberg Big Law Business

rG/q/")@$a @KneE;1Opb{eVUwB)^9t5+o~xwZ=~98'ewng`hL|q`l&;^G'#+K?l{v.R',+{Zmwo;e+9L+_o.Ya 'O___Zx>!2e'^%u3C]f?~xZ?+_N2Utk_1n1cEU>{qe9KgOat?_;V|]~X/_[SbyzbVj,6eeVVWqlp~+ [9lJJr?+<-E=;fyy^much]Vn={(>;581V"IOcwwWKGHsPX+dV|6`D>S6?Z-?DTSY?y+z!y"+KJAM:y@dl;+/?ww+6tm~ahUxMDu"5?XM^_3B!"=!O5_X@^C,Ad^v:/[vM lTgF-o&i&ObY+Zs<Q3nZW$gye']XO,C W*BQH1%JhVzxQ4mg?kfom+eZe iDyoRwk)u!$.,CIBC%1*!0iE-nz!n_TY ~M~ed|>/9!A! WX&Cc +q_C9X4. xX*BXv%WRPSg&48],DU8Z`T'`LNL~t`8+Ed"V>N'c]ajdy- }|d%rNAilO |`JVXBr aOn].@([i{)hqe~1HVi!VV I& 0Sf'oj;-Ca@~3ZQ;Id5-;4kML ':Uo=ANh'Y[KA7J}$I.vNR75@+^D|=NK &I%6o]P)&LdZ~Xalv =;$FJ!.h?#RU2z~sB<9_38PQ!0vgp%$m +Me Cs&>cKMfyG>:No[P(P?vF}]mi?{(4?m/[`&B=kY mQ&fCZ].>se-K4?B=l=o'-{B,wl0D:97vnq0qF*s6~94/}wJzf!p$cYv=9A%xJFF= Gf{?]0nH u q`/ xgd1@3F4cC ;B2H$c`HC1:Ed}_[>'g}tk 5m _-_0`zL"09cm;NGbD+0Zj-f[d3[6X|sVjCX ;K"k9kH/_gsO[KH

See the rest here:

Beyond the Hype in AI: Implementing and Maintaining Artificial Intelligence FTW - Bloomberg Big Law Business

The Ethics of Artificial Intelligence – HuffPost

Many experts believe that artificial intelligence (AI) might lead to the end of the world, just not in the way that Hollywood films would have us believe. Movie plots, for example, feature robots increasing in intelligence until they take over the human race. The reality is far less dramatic, but may cause some incredible cultural shifts nonetheless.

Last year, industry leaders like Elon Musk, Stephen Hawking, and Bill Gates wrote a letter to the International Joint Conference in Argentina stating that the successful adoption of AI might be one of humankind's biggest achievements, and maybe its last. They noted that AI poses unique ethical dilemmas which, if not considered carefully, could prove more dangerous than nuclear capabilities.

How can we implement AI technology while remaining faithful to our ethical obligations? The solution requires systematic effort.

Transparency is the key to integrating AI effectively. Companies may mistakenly assume that ethics is merely a practice in risk mitigation. This mindset only serves to deadlock innovation.

Create a company ethics committee that works with your shareholders to determine what's ethical and what's not from the outset. Align this moral code with your business's cultural values to create innovative products while increasing public trust. An ethics committee member should participate in the design and development stages of all new products, including anything that incorporates AI. Integrity is essential to the foundation of an organization. Your ethical mindset must therefore be proactive, not reactive.

A solid ethical foundation leads to good business decisions. It wouldn't make sense, for example, to build a product that you later determine will affect the industry negatively. By applying your ethical code from the start, you create a positive impact while wisely allocating resources.

An ethics committee, however, doesn't tell a design and development team what it can and can't do. Instead, the committee encourages the team to pursue innovation without infringing on the company's cultural values. Think of it as an important system of checks and balances; one department may be so focused on the potential of a new innovation that its members never pause to consider the larger ramifications. An ethics committee can preserve your business's integrity in light of exciting new developments that have the potential to completely reshape your organization.

AI is still a relatively new concept, so it's possible to do something legal yet unethical. Ethical conversations are more than just a checklist for team members to follow. They require hard questions and introspection about new products and the company's intentions. This Socratic method takes time and may create tension between team members, but it's worth the effort.

Don't know where to begin with your ethical code? Start by reading the One Hundred Year Study on Artificial Intelligence from Stanford. This report reviews the impact of AI on culture in five-year timespans, outlines society's opportunities and challenges in light of AI innovation, and envisions future changes. It's intended to guide decision-making and policy-making to ensure AI benefits humankind as a whole.

Use this report as an informed framework for your AI initiatives. Other ethical framework essentials include:

One tech industry concern is that failure to self-police will only lead to external regulation. The Stanford report maintains it will be impossible to adequately regulate AI. Risks and opportunities vary in scope and domain. While the tech industry balks at the idea of oversight, the Stanford report suggests that all levels of government should be more aware of AI's potential.

A committee of tech leaders plans to convene this month to discuss the ethics of AI, and the possibility of creating a large-scale best-practices guide for companies to follow. The hope? That discussion will breed introspection, leading all AI companies to make ethical decisions benefitting society. The process will take time, and tech companies are notoriously competitive. But on this we universally agree: it's worth the effort.

Article first seen on Futurum.


Read more here:

The Ethics of Artificial Intelligence - HuffPost

Artificial intelligence takes aim at online fraud – Idaho Statesman (blog)


There was a time, not long ago, when consumers cast a wary eye at product quality, uncertain customer service and, most of all, the security of online transactions. Today, we shop online for everything from cars to medications to movie tickets to ...

Go here to see the original:

Artificial intelligence takes aim at online fraud - Idaho Statesman (blog)

It’s a bird! It’s a plane! It’s Microsoft using artificial intelligence to teach a machine to stay aloft – GeekWire

Microsoft's autonomous glider soars through the air above Hawthorne, Nev. Once airborne, the glider uses artificial intelligence to find and rely on thermals, or columns of air that rise due to heat, to stay aloft. (Microsoft Photo / John Brecher)

Paying attention to the rise of the machines increasingly means scanning the skies for things other than conventional aircraft or birds. But what if the line between the two begins to blur and autonomous planes can somehow be taught to mimic nature?

That's the hope of researchers from Microsoft who are using artificial intelligence to keep a sailplane aloft without the help of a motor. A new report on the Redmond, Wash.-based tech giant's website details the efforts of scientists launching test flights in a Nevada desert.

The researchers have found that, through a complex set of AI algorithms, they can get their 16 1/2-foot, 12 1/2-pound aircraft to soar much like a hawk would, by identifying things like air temperature and wind direction to locate thermals, invisible columns of air that rise due to heat.

"Birds do this seamlessly, and all they're doing is harnessing nature. And they do it with a peanut-sized brain," Ashish Kapoor, a principal researcher at Microsoft, said in the report.

Kapoor said it's probably one of the few AI systems operating in the real world that's not only making predictions but also taking action based on those predictions. He said the planes could eventually be used for such things as monitoring crops in rural areas or providing mobile Internet service in hard-to-reach places.

Beyond those practical tasks, Andrey Kolobov, the Microsoft researcher in charge of the project's research and engineering efforts, said the sailplane is charting a course for how intelligent learning itself will evolve over the coming years, calling the project a "testbed for intelligent technologies." It's becoming increasingly important for systems of all kinds to make complex decisions based on a number of variables without making costly or dangerous mistakes.
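As an illustrative sketch of such a predict-then-act loop (a generic soaring heuristic, not Microsoft's actual algorithm), a controller might estimate the local climb rate from recent altitude readings and circle only while the air appears to be rising:

def estimate_climb_rate(altitudes, dt=1.0):
    """Average vertical speed in m/s over the last few altitude samples, taken dt seconds apart."""
    if len(altitudes) < 2:
        return 0.0
    return (altitudes[-1] - altitudes[0]) / (dt * (len(altitudes) - 1))

def choose_action(recent_altitudes, climb_threshold=0.3):
    """Circle to exploit a suspected thermal, otherwise keep gliding on course."""
    return "circle_in_thermal" if estimate_climb_rate(recent_altitudes) > climb_threshold else "glide_on_course"

print(choose_action([502.0, 502.6, 503.4, 504.5]))  # rising air -> circle_in_thermal
print(choose_action([504.5, 504.1, 503.8, 503.2]))  # sinking air -> glide_on_course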

Read more about what Microsoft is learning this summer in the desert via the story from the company's News Center.

Read the original post:

It's a bird! It's a plane! It's Microsoft using artificial intelligence to teach a machine to stay aloft - GeekWire

The artificial Intelligence wave is upon us. We better be prepared – Hindustan Times

The AI (artificial intelligence) revolution is well and truly upon us, and we are at a significant watershed moment where AI could become the new electricity: pervasive and touching every aspect of our lives. While many industries, including healthcare, education, retail and banking, have already started adopting AI in key business areas, there are also new business models which are predicated on AI.

With the global AI market expected to grow at 36% annually, reaching a valuation of $3 trillion by 2025 from $126 billion in 2015, new-age disruption is not only redefining the way traditional businesses are run, but is also unfolding as a new factor of production.

However, the fear of what might happen once AI evolves into artificial general intelligence, which could perform any intellectual task that a human can, has now taken centre stage with the ongoing debate between two tech titans, Elon Musk and Mark Zuckerberg. Similarly, Microsoft co-founder Bill Gates has voiced his view that in a few years AI will have evolved enough to warrant wide attention, while Facebook ended up shutting down one of its AI projects after chatbots developed their own language (unintelligible to humans) to communicate.

Beyond this, the common citizen wants to know whether she should be worried about AI taking away her job. This calls for broader thinking, including the evolution of industry protocols, while making sure that the public is ready for these futuristic advancements.

Will AI move my cheese?

The emergence of AI has drawn criticism because of the possibility that it could replace human jobs through automation. However, as we see AI shift from the R&D stage to various real-life business prototypes, it seems evident that the goal of most AI applications is to augment human abilities through hybrid business models.

According to McKinsey, AI would raise global labour productivity by 0.8% to 1.4% a year between now and 2065. I believe that both policymakers and corporates must recognise AI's potential to empower the workforce and invest in creating training programmes/workshops to help the labour force adapt to these newer models.

For instance, Ocado, the UK online supermarket, has embedded robotics at the core of its warehouse management. Robots steer thousands of product-filled bins to human packers just in time to fill shopping bags, which are then sent to delivery vans whose drivers use AI applications to pick the best route based on traffic conditions and weather.

Technology will create more new jobs than it eliminates

We must learn from the history of the industrial and technological revolutions over the last 500 years that jobs eliminated in one sector have been replaced by newer jobs requiring refreshed skill-sets. As a corollary, countries such as Japan, Korea or Germany, which have the highest levels of automation, should have seen large-scale unemployment over the past four to five decades. This is not necessarily the case.

Having said that, in the near future every routine operational task is likely to become digitised, and AI could be running the back office of most businesses. Over the next few decades, many middle-skill jobs are also likely to be eliminated. However, AI is unlikely to replace jobs which require human-to-human interaction. Consequently, fundamental human thinking skills such as entrepreneurship, strategic thinking, social leadership, connected salesmanship, philosophy, and empathy, among others, would be in even greater demand.

Further, until a point of singularity is reached, AI will not be able to service or program itself, leading to new, high-skilled jobs for technicians and computing experts.

Let's be prepared

Globally, policymakers and corporations will need to significantly revamp the education system to address technology gaps.

In India, this represents an enormous opportunity for policymakers to make better-informed decisions, tackle some of the toughest socio-economic challenges, and address the woeful shortage of qualified doctors, teachers, etc.

We need to immediately plan for state- and nation-wide university hubs and MOOCs (massive open online courses) built on the framework of DICE (design, innovation, creativity-led entrepreneurship). Curricula should be focussed on developing basic skills in STEM (science, technology, engineering and mathematics) fields, coupled with a new emphasis on creativity and critical and strategic thinking. Adaptive and individualised learning systems need to be established to help students at different levels work collaboratively amongst themselves as well as with AI in the classroom.

The National Skills Development Corporation will need to evolve into National Future Skills Development, as we as a civil society prepare to bring the future into the present!

Rana Kapoor is MD and CEO, YES Bank; and Chairman, YES Global Institute

The views expressed are personal

Go here to read the rest:

The artificial Intelligence wave is upon us. We better be prepared - Hindustan Times

How Artificial Intelligence is reshaping art and music – The Hindu

In the mid-1990s, Douglas Eck worked as a database programmer in Albuquerque, New Mexico, while moonlighting as a musician. After a day spent writing computer code inside a lab run by the Department of Energy, he would take the stage at a local juke joint, playing what he calls punk-influenced bluegrass: Johnny Rotten crossed with Johnny Cash. But what he really wanted to do was combine his days and nights, and build machines that could make their own songs. "My only goal in life was to mix AI and music," Mr. Eck said.

It was a naive ambition. Enrolling as a graduate student at Indiana University, in Bloomington, not far from where he grew up, he pitched the idea to Douglas Hofstadter, the cognitive scientist who wrote the Pulitzer Prize-winning book on minds and machines, Gödel, Escher, Bach: An Eternal Golden Braid. Mr. Hofstadter turned him down, adamant that even the latest artificial intelligence techniques were much too primitive.

But during the next two decades, working on the fringe of academia, Mr. Eck kept chasing the idea, and eventually, the AI caught up with his ambition.

Last spring, a few years after taking a research job at Google, Mr. Eck pitched the same idea he had pitched to Mr. Hofstadter all those years ago. The result is Project Magenta, a team of Google researchers who are teaching machines to create not only their own music but also many other forms of art, including sketches, videos and jokes.

With its empire of smartphones, apps and internet services, Google is in the business of communication, and Mr. Eck sees Magenta as a natural extension of this work. "It's about creating new ways for people to communicate," he said during a recent interview inside the small two-story building here that serves as headquarters for Google AI research.

Growing effort

The project is part of a growing effort to generate art through a set of AI techniques that have only recently come of age. Called deep neural networks, these complex mathematical systems allow machines to learn specific behaviour by analysing vast amounts of data.

By looking for common patterns in millions of bicycle photos, for instance, a neural network can learn to recognise a bike. This is how Facebook identifies faces in online photos, how Android phones recognise commands spoken into phones, and how Microsoft Skype translates one language into another. But these complex systems can also create art. By analysing a set of songs, for instance, they can learn to build similar sounds.

As Mr. Eck says, these systems are at least approaching the point, still many, many years away, when a machine can instantly build a new Beatles song, or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different.

Tools for artists

But that end game is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.

In the 1990s, at that juke joint in New Mexico, Mr. Eck combined Johnny Rotten and Johnny Cash. Now, he is building software that does much the same thing. Using neural networks, he and his team are cross-breeding sounds from very different instruments, say, a bassoon and a clavichord, creating instruments capable of producing sounds no one has ever heard.

Much as a neural network can learn to identify a cat by analysing hundreds of cat photos, it can learn the musical characteristics of a bassoon by analysing hundreds of notes. It creates a mathematical representation, or vector, that identifies a bassoon. So, Mr. Eck and his team have fed notes from hundreds of instruments into a neural network, building a vector for each one.

Now, simply by moving a button across a screen, they can combine these vectors to create new instruments. One may be 47% bassoon and 53% clavichord. Another might switch the percentages. And so on.
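As an illustrative sketch of that blending step (random placeholder vectors and no decoder, not Magenta's actual NSynth model), combining two instrument embeddings is just a weighted average:

import numpy as np

bassoon_vec = np.random.randn(128)     # stand-in for a learned bassoon embedding
clavichord_vec = np.random.randn(128)  # stand-in for a learned clavichord embedding

def blend(vec_a, vec_b, weight_a):
    """Linear interpolation: weight_a of vec_a plus (1 - weight_a) of vec_b."""
    return weight_a * vec_a + (1.0 - weight_a) * vec_b

hybrid = blend(bassoon_vec, clavichord_vec, 0.47)  # "47% bassoon and 53% clavichord"
# A trained decoder (not shown) would turn `hybrid` back into audio.
print(hybrid.shape)  # (128,)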

For centuries, orchestral conductors have layered sounds from instruments atop one another. But this is different. Rather than layering sounds, Mr. Eck and his team combine them to form something that did not exist before, creating new ways that artists can work. (NYT)

Read this article:

How Artificial Intelligence is reshaping art and music - The Hindu