
Salesforce Einstein is Artificial Intelligence in Business …

Einstein is like having your own data scientist dedicated to bringing AI to every customer relationship. It learns from all your data (CRM data, email, calendar, social, ERP, and IoT) and delivers predictions and recommendations in the context of what you’re trying to do. In some cases, it even automates tasks for you. So you can make smarter decisions with confidence and focus more attention on your customers at every touchpoint.

Read more from the original source:

Salesforce Einstein is Artificial Intelligence in Business …

History of artificial intelligence – Wikipedia

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with “an ancient wish to forge the gods.”

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.

Eventually it became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an “AI winter”. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 80s the investors became disillusioned and withdrew funding again.

Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry. As in previous “AI summers”, some observers (such as Ray Kurzweil) predicted the imminent arrival of artificial general intelligence: a machine with intellectual capabilities that exceed the abilities of human beings.

McCorduck (2004) writes “artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized,” expressed in humanity’s myths, legends, stories, speculation and clockwork automatons.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion’s Galatea.[4] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān’s Takwin, Paracelsus’ homunculus and Rabbi Judah Loew’s Golem.[5] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots), and speculation, such as Samuel Butler’s “Darwin among the Machines.” AI has continued to be an important element of science fiction into the present.

Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi,[8] Hero of Alexandria,[9] Al-Jazari and Wolfgang von Kempelen.[11] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion; Hermes Trismegistus wrote that “by discovering the true nature of the gods, man has been able to reproduce it.”[12][13]

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or “formal”, reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to “algorithm”) and European scholastic philosophers such as William of Ockham and Duns Scotus.[14]

Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[15] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all the possible knowledge.[16] Llull’s work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[17]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[18] Hobbes famously wrote in Leviathan: “reason is nothing but reckoning”.[19] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that “there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate.”[20] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole’s The Laws of Thought and Frege’s Begriffsschrift. Building on Frege’s system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica in 1913. Inspired by Russell’s success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: “can all of mathematical reasoning be formalized?”[14] His question was answered by Gödel’s incompleteness proof, Turing’s machine and Church’s Lambda calculus.[14][21]

Their answer was surprising in two ways. First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[14][23]

Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent”.[24] (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)

The first modern computers were the massive code breaking machines of the Second World War (such as Z3, ENIAC and Colossus). The latter two of these machines were based on the theoretical foundation laid by Alan Turing[25] and developed by John von Neumann.[26]

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 30s, 40s and early 50s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener’s cybernetics described control and stability in electrical networks. Claude Shannon’s information theory described digital signals (i.e., all-or-nothing signals). Alan Turing’s theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.[27]

Examples of work in this vein include robots such as W. Grey Walter’s turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[28]

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.[29] One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[30] Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.

In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think.[31] He noted that “thinking” is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible and the paper answered all the most common objections to the proposition.[32] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[33] Arthur Samuel’s checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[34] Game AI would continue to be used as a measure of progress in AI throughout its history.

When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[35]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the “Logic Theorist” (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead’s Principia Mathematica, and find new and more elegant proofs for some.[36] Simon said that they had “solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind.”[37] (This was an early statement of the philosophical position John Searle would later call “Strong AI”: that machines can contain minds just as human bodies do.)[38]

The Dartmouth Conference of 1956[39] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”.[40] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[41] At the conference Newell and Simon debuted the “Logic Theorist” and McCarthy persuaded the attendees to accept “Artificial Intelligence” as the name of the field.[42] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[43]

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply “astonishing”:[44] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such “intelligent” behavior by machines was possible at all.[45] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[46] Government agencies like DARPA poured money into the new field.[47]

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called “reasoning as search”.[48]

The principal difficulty was that, for many problems, the number of possible paths through the “maze” was simply astronomical (a situation known as a “combinatorial explosion”). Researchers would reduce the search space by using heuristics or “rules of thumb” that would eliminate those paths that were unlikely to lead to a solution.[49]
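
As a rough illustration of the “reasoning as search” paradigm and the heuristics used to tame the combinatorial explosion, here is a minimal Python sketch: a depth-first search over a toy maze that backtracks at dead ends and uses a Manhattan-distance rule of thumb to try the most promising moves first. The maze, the heuristic and every name in the code are illustrative assumptions, not a reconstruction of any historical program.

```python
# A minimal sketch of "reasoning as search": depth-first search with
# backtracking over a toy maze. A Manhattan-distance heuristic ("rule of
# thumb") orders the moves so that promising paths are tried first.
MAZE = [
    "S..#",
    ".#.#",
    ".#..",
    "...G",
]

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0]) and MAZE[nr][nc] != "#":
            yield (nr, nc)

def heuristic(pos, goal):
    # Rule of thumb: Manhattan distance to the goal.
    return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

def search(pos, goal, visited):
    if pos == goal:
        return [pos]
    visited.add(pos)
    # Expand the most promising neighbors first; backtrack from dead ends.
    for nxt in sorted(neighbors(pos), key=lambda p: heuristic(p, goal)):
        if nxt not in visited:
            path = search(nxt, goal, visited)
            if path:
                return [pos] + path
    return None  # dead end: the caller backtracks

print(search((0, 0), (3, 3), set()))  # a path from S to G, found by trial and backtracking
```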

Newell and Simon tried to capture a general version of this algorithm in a program called the “General Problem Solver”.[50] Other “searching” programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter’s Geometry Theorem Prover (1958) and SAINT, written by Minsky’s student James Slagle (1961).[51] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[52]

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow’s program STUDENT, which could solve high school algebra word problems.[53]

A semantic net represents concepts (e.g. “house”, “door”) as nodes and relations among concepts (e.g. “has-a”) as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[54] and the most successful (and controversial) version was Roger Schank’s Conceptual dependency theory.[55]
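
For readers who want something concrete, here is a minimal sketch of the idea, assuming a tiny hand-written knowledge base: concepts are stored as nodes, labelled “is-a” and “has-a” links connect them, and a query walks the links. The house/cottage example and the function names are invented for illustration; this is not Quillian’s program or Schank’s theory.

```python
# A minimal sketch of a semantic net: concepts are nodes, and labelled links
# ("is-a", "has-a") connect them. The tiny knowledge base below is an
# illustrative assumption.
links = {
    ("house", "has-a"): {"door", "roof"},
    ("door", "has-a"): {"handle"},
    ("cottage", "is-a"): {"house"},
}

def has_a(concept, part, seen=None):
    """Does `concept` have `part`, directly or via is-a / has-a links?"""
    seen = seen or set()
    if concept in seen:
        return False
    seen.add(concept)
    direct = links.get((concept, "has-a"), set())
    if part in direct:
        return True
    # Follow has-a links transitively (a house has a door, a door has a handle)
    # and is-a links for inheritance (a cottage is a house).
    successors = direct | links.get((concept, "is-a"), set())
    return any(has_a(c, part, seen) for c in successors)

print(has_a("cottage", "handle"))  # True: cottage is-a house, house has-a door, door has-a handle
```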

Joseph Weizenbaum’s ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[56]
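
The mechanism is simple enough to sketch in a few lines of Python: match a keyword pattern, reflect the user’s own words back with a few grammar rules, or fall back to a canned reply. The patterns and responses below are invented for illustration and are not Weizenbaum’s original ELIZA script.

```python
# A minimal sketch of an ELIZA-style chatterbot: match a keyword, reflect the
# user's words back with a few grammar rules, or fall back to a canned reply.
import re
import random

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"my (.*)", re.I), "Why do you mention your {0}?"),
]

CANNED = ["Please go on.", "I see.", "How does that make you feel?"]

def reflect(text):
    # Swap first-person words for second-person ones ("my" -> "your", ...).
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(CANNED)

print(respond("I am worried about my father"))
# e.g. "Why do you say you are worried about your father?"
```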

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a “blocks world,” which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[57]

This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented “constraint propagation”), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd’s SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[58]

The first generation of AI researchers made these predictions about their work:

1958, H. A. Simon and Allen Newell: “within ten years a digital computer will be the world’s chess champion” and “within ten years a digital computer will discover and prove an important new mathematical theorem.”

1965, H. A. Simon: “machines will be capable, within twenty years, of doing any work a man can do.”

1967, Marvin Minsky: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”

1970, Marvin Minsky (in Life Magazine): “In from three to eight years we will have a machine with the general intelligence of an average human being.”

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC which subsumed the “AI Group” founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[63] DARPA made similar grants to Newell and Simon’s program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[64] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[65] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[66]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should “fund people, not projects!” and allowed researchers to pursue whatever directions might interest them.[67] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[68] but this “hands off” approach would not last.

In the 70s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[69] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky’s devastating criticism of perceptrons.[70] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[71]

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, “toys”.[72] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[73]

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[81] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its “grandiose objectives” and led to the dismantling of AI research in that country.[82] (The report specifically mentioned the combinatorial explosion problem as a reason for AI’s failings.)[83] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[84] By 1974, funding for AI projects was hard to find.

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. “Many researchers were caught up in a web of increasing exaggeration.”[85] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund “mission-oriented direct research, rather than basic undirected research”. Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[86]

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel’s incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[87] Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little “symbol processing” and a great deal of embodied, instinctive, unconscious “know how”.[88][89] John Searle’s Chinese Room argument, presented in 1980, attempted to show that a program could not be said to “understand” the symbols that it uses (a quality called “intentionality”). If the symbols have no meaning for the machine, Searle argued, then the machine can not be described as “thinking”.[90]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference “know how” or “intentionality” made to an actual computer program. Minsky said of Dreyfus and Searle “they misunderstand, and should be ignored.”[91] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers “dared not be seen having lunch with me.”[92] Joseph Weizenbaum, the author of ELIZA, felt his colleagues’ treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus’ positions, he “deliberately made it plain that theirs was not the way to treat a human being.”[93]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[94]

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that “perceptron may eventually be able to learn, make decisions, and translate languages.” An active research program into the paradigm was carried out throughout the 60s but came to a sudden halt with the publication of Minsky and Papert’s 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt’s predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[70]
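
A perceptron itself is small enough to sketch directly. The snippet below, a minimal illustration rather than Rosenblatt’s hardware, applies the classic perceptron learning rule to the linearly separable OR function; the learning rate and epoch count are arbitrary assumptions. A single perceptron of this kind cannot learn XOR, which is the sort of limitation Minsky and Papert analyzed.

```python
# A minimal sketch of the perceptron learning rule on a linearly separable
# toy problem (logical OR).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # inputs -> OR

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate (arbitrary)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - predict(x)
        # Perceptron update: nudge the weights toward correcting the error.
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # expect [0, 1, 1, 1]
```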

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[95] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 60s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[96] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog.[97] Prolog uses a subset of logic (Horn clauses, closely related to “rules” and “production rules”) that permit tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum’s expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[98]

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[99] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems, not machines that think as people do.[100]

Among the critics of McCarthy’s approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like “story understanding” and “object recognition” that required a machine to think like a person. In order to use ordinary concepts like “chair” or “restaurant” they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that “using precise language to describe essentially imprecise concepts doesn’t make them any more precise.”[101] Schank described their “anti-logic” approaches as “scruffy”, as opposed to the “neat” paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[102]

In 1975, in a seminal paper, Minsky noted that many of his fellow “scruffy” researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be “logical”, but these structured sets of assumptions are part of the context of everything we say and think. He called these structures “frames”. Schank used a version of frames he called “scripts” to successfully answer questions about short stories in English.[103] Many years later object-oriented programming would adopt the essential idea of “inheritance” from AI research on frames.
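
A minimal sketch of the frame idea, under the assumption of a toy bird/penguin knowledge base: each frame carries default slot values and may point to a more general parent frame, so lookups inherit defaults unless a more specific frame overrides them. This is essentially the inheritance mechanism that object-oriented programming later adopted.

```python
# A minimal sketch of frames: default slot values plus a parent link, so that
# lookups inherit defaults unless a more specific frame overrides them.
frames = {
    "bird":    {"parent": None,   "flies": True,  "eats": "worms"},
    "penguin": {"parent": "bird", "flies": False, "eats": "fish"},
    "tweety":  {"parent": "bird"},
}

def slot(frame_name, slot_name):
    """Look up a slot value, walking up the parent chain until one is found."""
    frame = frames.get(frame_name)
    while frame is not None:
        if slot_name in frame:
            return frame[slot_name]
        frame = frames.get(frame["parent"])
    return None  # no frame in the chain defines this slot

print(slot("tweety", "flies"))   # True  (inherited default from "bird")
print(slot("penguin", "flies"))  # False (the more specific frame overrides it)
```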

In the 1980s a form of AI program called “expert systems” was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[104]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[105]
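
At their core, such systems were forward-chaining rule engines: facts plus if-then rules, applied repeatedly until nothing new can be concluded. The sketch below shows the mechanism with a couple of invented, medically meaningless rules; it is an illustration of the architecture, not MYCIN’s or XCON’s actual knowledge base.

```python
# A minimal sketch of an expert system's forward-chaining rule engine.
# The "medical" rules are invented for illustration only.
rules = [
    ({"fever", "infection"}, "bacterial_infection_suspected"),
    ({"bacterial_infection_suspected", "no_penicillin_allergy"}, "recommend_penicillin"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if all its conditions hold and it adds a new fact.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "infection", "no_penicillin_allergy"}))
```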

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[106] Corporations around the world began to develop and deploy expert systems and by 1985 they were spending over a billion dollars on AI, most of it to in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[107]

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. “AI researchers were beginning to suspect (reluctantly, for it violated the scientific canon of parsimony) that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,”[108] writes Pamela McCorduck. “[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay”.[109] Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s.[110]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[111]

The chess playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed by Carnegie Mellon University; Deep Thought’s development paved the way for Deep Blue.[112]

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[113] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.[114]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or “MCC”) to fund large scale projects in AI and information technology.[115][116] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[117]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a “Hopfield net”) could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called “backpropagation” (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism which had been largely abandoned since 1970.[116][118]

The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[116][119]
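
A Hopfield net is compact enough to sketch: store a pattern of +1/-1 units with a Hebbian outer-product rule, then recall it from a corrupted copy by repeatedly applying a threshold update. The six-unit pattern below is an illustrative assumption, not a model from Hopfield’s paper.

```python
# A minimal sketch of a Hopfield net used as associative memory: a stored
# pattern of +/-1 units is recovered from a corrupted copy.
stored = [1, -1, 1, -1, 1, -1]
n = len(stored)

# Hebbian weights: w[i][j] = x_i * x_j, with no self-connections.
w = [[0 if i == j else stored[i] * stored[j] for j in range(n)] for i in range(n)]

def recall(state, steps=5):
    state = list(state)
    for _ in range(steps):
        for i in range(n):
            total = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if total >= 0 else -1   # threshold update
    return state

corrupted = [1, 1, 1, -1, 1, -1]   # one unit flipped
print(recall(corrupted))           # converges back to the stored pattern
```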

The business community’s fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

The term “AI winter” was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[120] Their fears were well founded: in the late 80s and early 90s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[121]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were “brittle” (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[122]

In the late 80s, the Strategic Computing Initiative cut funding to AI “deeply and brutally.” New leadership at DARPA had decided that AI was not “the next wave” and directed funds towards projects that seemed more likely to produce immediate results.[123]

By 1991, the impressive list of goals penned in 1981 for Japan’s Fifth Generation Project had not been met. Indeed, some of them, like “carry on a casual conversation” had not been met by 2010.[124] As with other AI projects, expectations had run much higher than what was actually possible.[124]

In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[125] They believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec’s paradox). They advocated building intelligence “from the bottom up.”[126]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 70s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy’s logic and Minsky’s frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr’s work would be cut short by leukemia in 1980.)[127]

In a 1990 paper, “Elephants Don’t Play Chess,”[128] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since “the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough.”[129] In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[130]

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI’s failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of “artificial intelligence”.[131] AI was both more cautious and more successful than it had ever been.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[132] The super computer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[133]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[134] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[135] In February 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[136]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of computers today.[137] In fact, Deep Blue’s computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[138] This dramatic increase is measured by Moore’s law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of “raw computer power” was slowly being overcome.
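
A back-of-the-envelope check of that claim, assuming a tidy two-year doubling time: the 46 years between the Ferranti Mark 1 (1951) and Deep Blue (1997) allow roughly 23 doublings, which lands in the same order of magnitude as the quoted factor of 10 million.

```python
# A rough sanity check of the Moore's-law claim above, assuming an idealized
# doubling of computer power every two years.
years = 1997 - 1951          # 46 years between the two machines
doublings = years / 2        # one doubling every two years
speedup = 2 ** doublings
print(f"{doublings:.0f} doublings -> roughly {speedup:,.0f}x faster")
# 23 doublings -> roughly 8,388,608x faster
```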

A new paradigm called “intelligent agents” became widely accepted during the 90s.[139] Although earlier researchers had proposed modular “divide and conquer” approaches to AI,[140] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI.[141] When the economist’s definition of a rational agent was married to computer science’s definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are “intelligent agents”, as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as “the study of intelligent agents”. This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[142]
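
The abstraction can be captured in a few lines. The sketch below shows a trivial thermostat-style agent that perceives a temperature and chooses whichever action its internal model predicts will bring it closest to its goal; the environment dynamics and class name are invented for illustration, not a standard benchmark.

```python
# A minimal sketch of the intelligent-agent abstraction: perceive the
# environment, then take the action judged most likely to achieve the goal.
class ThermostatAgent:
    def __init__(self, target=21.0):
        self.target = target

    def act(self, percept):
        """Map the perceived temperature to the action that best approaches the goal."""
        predicted = {"heat": percept + 1.0, "cool": percept - 1.0, "idle": percept}
        # Choose the action whose predicted outcome is closest to the target.
        return min(predicted, key=lambda a: abs(predicted[a] - self.target))

temperature = 17.0
agent = ThermostatAgent()
for step in range(6):
    action = agent.act(temperature)          # perceive, then act
    temperature += {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
    print(step, action, temperature)
```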

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell’s SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[141][143]

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[144] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous “scientific” discipline. Russell & Norvig (2003) describe this as nothing less than a “revolution” and “the victory of the neats”.[145][146]

Judea Pearl’s highly influential 1988 book[147] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for “computational intelligence” paradigms like neural networks and evolutionary algorithms.[145]
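
As a small worked example of the probabilistic style this brought into AI, consider a two-node network (Disease → Test) where Bayes’ rule inverts the conditional probabilities; the numbers are invented for illustration.

```python
# A minimal sketch of probabilistic reasoning: Bayes' rule on a two-node
# Disease -> Test model with illustrative numbers.
p_disease = 0.01            # prior P(disease)
p_pos_given_disease = 0.95  # test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

# Total probability of a positive test, then Bayes' rule.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")  # about 0.161
```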

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems[148] and these solutions proved to be useful throughout the technology industry,[149] in areas such as data mining, industrial robotics, logistics,[150] speech recognition,[151] banking software,[152] medical diagnosis[152] and Google’s search engine.[153]

The field of AI receives little or no credit for these successes. Many of AI’s greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[154] Nick Bostrom explains: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”[155]

Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may be because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research, as the New York Times reported in 2005: “Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.”[156][157][158]

In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[159]

In 2001, AI founder Marvin Minsky asked “So the question is why didn’t we get HAL in 2001?”[160] Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem.[161] For Ray Kurzweil, the issue is computer power and, using Moore’s Law, he predicted that machines with human-level intelligence will appear by 2029.[162] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[163] There were many other explanations and for each there was a corresponding research program underway.

In the first decades of the 21st century, access to large amounts of data (known as “big data”), faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy. By 2016, the market for AI related products, hardware and software reached more than 8 billion dollars and the New York Times reported that interest in AI had reached a “frenzy”.[164]

Deep learning is a branch of machine learning that models high level abstractions in data by using a deep graph with many processing layers.
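
In code, the “many processing layers” idea is just repeated weighted transformations with a nonlinearity in between, so that later layers compute higher-level features of the input. The weights below are arbitrary illustrative values, not a trained model.

```python
# A minimal sketch of a multi-layer ("deep") forward pass: each layer applies
# weighted sums followed by a simple nonlinearity.
def relu(v):
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    # One processing layer: weighted sums followed by the nonlinearity.
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

x = [0.5, -1.0, 2.0]
h1 = layer(x,  [[0.2, -0.1, 0.4], [0.7, 0.3, -0.2]], [0.0, 0.1])
h2 = layer(h1, [[1.0, -0.5], [0.3, 0.8]],            [0.0, 0.0])
print(h2)  # the "deepest" representation of the input
```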

Artificial general intelligence (AGI) describes research that aims to create machines capable of general intelligent action.

See original here:

History of artificial intelligence – Wikipedia

Artificial Intelligence Is Far From Matching Humans, Panel …

Read this article:

Artificial Intelligence Is Far From Matching Humans, Panel …

Artificial Intelligence Makes the Phone a Personal …

Read the original:

Artificial Intelligence Makes the Phone a Personal …

The Guardian view on artificial intelligence: look out, it’s …

A monk comes face to face with his robot counterpart called Xianer at a Buddhist temple on the outskirts of Beijing. Photograph: Kim Kyung-Hoon/Reuters

Google’s artificial intelligence project DeepMind is building software to trawl through millions of patient records from three NHS hospitals to detect early signs of kidney disease. The project raises deep questions not only about data protection but about the ethics of artificial intelligence. But these are not the obvious questions about the ethics of autonomous, intelligent computers.

Computer programs can now do some things that it once seemed only human beings could do, such as playing an excellent game of Go. But even the smartest computer cannot make ethical choices, because it has no purpose of its own in life. The program that plays Go cannot decide that it also wants a driving licence like its cousin, the program that drives Google’s cars.

The ethical questions involved in the deal are partly political: they have to do with trusting a private US corporation with a great deal of data from which it hopes in the long term to make a great deal of money. Further questions are raised by the mere existence, or construction, of a giant data store containing unimaginable amounts of detail about patients and their treatments. This might yield useful medical knowledge. It could certainly yield all kinds of damaging personal knowledge. But questions of medical confidentiality, although serious, are not new in principle or in practice and they may not be the most disturbing aspects of the deal.

What frightens people is the idea that we are constructing machines that will think for themselves, and will be able to keep secrets from us that they will use to their own advantage rather than to ours. The tendency to invest such powers in lifeless and unintelligent things goes back to the very beginnings of AI research and beyond.

In the 1960s, Joseph Weizenbaum, one of the pioneers of computer science, created the chatbot Eliza, which mimicked a non-directional psychoanalyst. It used cues supplied by the users (“I’m worried about my father”) to ask open-ended questions (“How do you feel about your father?”). The astonishing thing was that students were happy to answer at length, as if they had been asked by a sympathetic, living listener. Weizenbaum was horrified, especially when his secretary, who knew perfectly well what Eliza was, asked him to leave the room while she talked to it.

Eliza’s latest successor, Xianer, the Worthy Stupid Robot Monk, functions in a Buddhist temple in Beijing, where it dispenses wisdom in response to questions asked through a touchpad on his chest. People seem to ask it serious questions such as “What is love?” or “How do I get ahead in life?”; the answers are somewhere between a horoscope and a homily. Since they are not entirely predictable, Xianer is treated as a primitive kind of AI.

Most discussions of AI and most calls for an ethics of AI assume we will have no problem recognising it once it emerges. The examples of Eliza and Xianer show this is questionable. They get treated as intelligent even though we know they are not. But that is only one error we could make when approaching the problem. We might also fail to recognise intelligence when it does exist, or while it is emerging.

The myth of Frankenstein’s monster is misleading. There might be no lightning bolt moment when we realise that it is alive and uncontrollable. Intelligent brains are built from billions of neurones that are not themselves intelligent. If a post-human intelligence arises, it will also be from a system of parts that do not, as individuals, share in the post-human intelligence of the whole. Parts of it would be human. Parts would be computer systems. No part could understand the whole but all would share its interests without completely comprehending them.

Such hybrid systems would not be radically different from earlier social inventions made by humans and their tools, but their powers would be unprecedented. Constructing and enforcing an ethical framework for them would be as difficult as it has been to lay down principles of international law. But it may become every bit as urgent.

Go here to read the rest:

The Guardian view on artificial intelligence: look out, it’s …

The Future of Artificial Intelligence – Science Friday

Fusion of human head with artificial intelligence, from Shutterstock

Technologists Elon Musk, Bill Gates, and Steve Wozniak have named artificial intelligence as one of humanity’s biggest existential risks. Will robots outpace humans in the future? Should we set limits on A.I.? Our panel of experts discusses what questions we should ask as research on artificial intelligence progresses.

Plus,

Stuart Russell

Stuart Russell is a computer science and engineering professor at the University of California, Berkeley in Berkeley, California.

Eric Horvitz

Eric Horvitz is Distinguished Scientist at Microsoft Research and co-director of the Microsoft Research Lab in Redmond, Washington.

Max Tegmark

Max Tegmark is a physics professor at the Massachusetts Institute of Technology in Cambridge, Massachusetts.

Alexa Lim is Science Friday’s associate producer. Her favorite stories involve space, sound, and strange animal discoveries.

Read more here:

The Future of Artificial Intelligence – Science Friday

Artificial Intelligence

Ray Kurzweil, the computer scientist, defined intelligence as a set of skills that allows humans to solve problems with limited resources. The skills expected from an intelligent person are learning, abstract thought, planning, imagination, and creativity. This list covers the most important aspects of human intelligence.

Read more from the original source:

Artificial Intelligence

Artificial Intelligence :: Essays Papers

Artificial Intelligence

The computer revolution has influenced everyday matters from the way letters are written to the methods in which our banks, governments, and credit card agencies keep track of our finances. The development of artificial intelligence is just a small percentage of the computer revolution and how society deals with, learns, and incorporates artificial intelligence. It will only be the beginning of the huge impact and achievements of the computer revolution.

A standard definition of artificial intelligence, or AI, is that computers simply mimic behaviors of humans that would be regarded as intelligent if a human being did them. However, within this definition, several issues and views still conflict because scientists and critics interpret the results of AI programs in different ways. The most common and natural approach to AI research is to ask of any program, what can it do? What are the actual results in comparison to human intelligence? For example, what matters about a chess-playing program is how good it is. Can it possibly beat chess grand masters? There is also a more structured approach to assessing artificial intelligence, which opened the door to artificial intelligence's contribution to science. According to this theoretical approach, what matters is not only the input-output relations of the computer but also what the program can tell us about actual human cognition (Ptacek, 1994).

From this point of view, artificial intelligence can not only give the commercial or business world an advantage, but also provide an understandable and enjoyable benefit to everyone who knows how to use a pocket calculator. A calculator can outperform any living mathematician at multiplication and division, so it qualifies as intelligent under the definition of artificial intelligence. This fact does not engage the psychological aspect of artificial intelligence, because such computers do not attempt to mimic the actual thought processes of people doing arithmetic (Crawford, 1994). On the other hand, AI programs that simulate human vision are theoretical attempts to understand the actual processes of human beings and how they view and interpret the outside world. A great deal of the debate about artificial intelligence confuses the two views, so that sometimes success in artificial intelligence's practical application is supposed to provide structured or theoretical understanding in the branch of science known as cognitive science. Chess-playing programs are a good example. Early chess-playing programs tried to mimic the thought processes of actual chess players, but they were not successful. More recent successes have been achieved by ignoring the thoughts of chess masters and simply using the much greater computing power of modern hardware. This approach, called brute force, relies on the fact that specially designed computers can calculate hundreds of thousands or even millions of moves, which is something no human chess player can do (Matthys, 1995). The best current programs can beat all but the very best chess players, but it would be a mistake to regard them as substantial contributions to artificial intelligence's cognitive science field (Ptacek, 1994). They tell us almost nothing about human cognition or thought processes, except that an electrical machine working on different principles can outdo human beings in playing chess, as it can defeat human beings in doing arithmetic.
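
The brute-force idea described above is essentially minimax search: evaluate every line of play to some depth and pick the move with the best guaranteed outcome. The tiny hand-built game tree below is an illustrative assumption; real chess programs add pruning and evaluation functions, but the principle is the same.

```python
# A minimal sketch of brute-force game-tree search (minimax) on a tiny
# hand-built tree. Leaf scores are from the maximizing player's point of view.
game_tree = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
leaf_scores = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}

def minimax(node, maximizing):
    if node in leaf_scores:                      # reached the end of a line of play
        return leaf_scores[node]
    scores = [minimax(child, not maximizing) for child in game_tree[node]]
    return max(scores) if maximizing else min(scores)

best = max(game_tree["root"], key=lambda move: minimax(move, maximizing=False))
print(best)  # "a": it guarantees 3, while "b" can be forced down to -2
```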

Assume that artificial intelligence's practical applications, or AIPA, are completely successful and that society will soon have programs whose performance can equal or beat that of any human in any comprehension task at all. Assume machines existed that could not only play better chess but also comprehend natural languages as well as or better than humans, write novels and poems of equal or better quality, and prove equal or better mathematical and scientific results. What should society make of these results? Even within the cognitive science approach, there are further distinctions to be made. The most influential claim is that if scientists programmed a digital computer with the right programs, and if it had the right inputs and outputs, then it would have thoughts and feelings in exactly the same sense in which humans have thoughts and feelings. According to this view, the appropriately programmed AICS computer is not just mimicking intelligent thought patterns; it actually is going through these thought processes. Again, the computer is not just a stand-in for the mind: the newly programmed computer would literally have a mind. So if there were an AIPA program that appropriately matched human cognition, scientists would artificially have created an actual mind.

It seems possible that such an artificial intelligence program will one day exist. On this view, the mind is just a program running on the hardware of the human brain, and the same mind could also be programmed into computers manufactured by IBM. However, there is a big difference between this claim and various weaker forms of AICS. The weakest claim of artificial intelligence states that the appropriately programmed computer is simply a tool that can be used in the study of human cognition: by attempting to reproduce the formal structure of cognitive processes on a computer, we can better come to understand cognition. From this weaker view, the computer plays the same role in the study of human beings that it plays in any other discipline (Taubes, 1995; Crawford, 1994).

We use computers to simulate the behavior of weather patterns, airline flight schedules, and the flow of money. No one supposes that the computer program literally produces rainstorms, or that the computer will literally take off and fly to San Diego when we run a computer simulation of airline flights. Also, no one thinks that a computer simulation of the flow of money will give us a better chance at preparing for something like the Great Depression. By the same token, under the weaker conception of artificial intelligence, society should not think that a computer simulation of cognitive processes actually does any real thinking.

According to this weaker, or more cautious, version of AICS, we can use the computer to build models or simulations of mental processes, just as we can use it to simulate any other process for which we can write a program. Because this version of AICS is more modest, it is less likely to be controversial and more likely to lead to real possibilities.

Bibliography:

Crawford, Robert, "Machine Dreams," Technology Review, vol. 97, 1 Feb. 1994, p. 77.

Matthys, Erick, "Harnessing technology for the future," Military Review, vol. 75, 1 May 1995, p. 71.

Morss, Ruth, "Artificial intelligence guru cultivate natural language," Boston Business Journal, vol. 14, 20 Jan. 1995, p. 19.

Ptacek, Robin, "Using artificial intelligence," Futurist, vol. 28, 1 Jan. 1994, p. 38.

Taubes, Gary, "The rise and fall of thinking machines," Inc., 12 Sep. 1995, p. 61.

Originally posted here:

Artificial Intelligence :: Essays Papers

What Is Artificial Intelligence? (with picture) – wiseGEEK

I find artificial intelligence kind of scary. I realize that it can be very practical and useful for some things. But I actually feel that artificial intelligence that is developed too far may actually be dangerous to humanity. I don’t like the idea of a machine being smarter and more capable than a human.

@SteamLouis – But artificial intelligence is already a part of everyday life. Everything from computer games to financial-analysis software to voice-recognition security systems is a type of artificial intelligence. These are forms of weak AI, but they are artificial intelligence nonetheless.

When people think of AI, robots are the first things to come to mind. And there are huge advancements in this area as well. You may not be familiar with them, but there are numerous robots on the market that are very popular. Some act like personal assistants and respond to voice commands for various tasks. Others take the form of household appliances or small gadgets, and all serve some use in everyday living.

Artificial intelligence doesn’t appear to be advancing as quickly as many of us expected. I remember that in the beginning of the 21st century, there was so much speculation about how artificial intelligence, like robots, would become a regular part of our life in this century. Fifteen years down the line, nothing of the sort has happened. Scientists talk about the same thing, but now they’re talking about 2050 and beyond. I personally don’t think that robots will be a part of regular life even in 2050. Artificial intelligence is not easy to build and use and it’s extremely expensive.

Go here to read the rest:

What Is Artificial Intelligence? (with picture) – wiseGEEK

Robotics and Artificial Intelligence | Computer Science …

Artificial Intelligence (AI) is a general term that implies the use of a computer to model and/or replicate intelligent behavior. Research in AI focuses on the development and analysis of algorithms that learn and/or perform intelligent behavior with minimal human intervention. These techniques have been and continue to be applied to a broad range of problems that arise in robotics, e-commerce, medical diagnosis, gaming, mathematics, and military planning and logistics, to name a few.

Several research groups fall under the general umbrella of AI in the department, but are disciplines in their own right, including: robotics, natural language processing (NLP), computer vision, computational biology, and e-commerce. Specifically, research is being conducted in estimation theory, mobility mechanisms, multi-agent negotiation, natural language interfaces, machine learning, active computer vision, probabilistic language models for use in spoken language interfaces, and the modeling and integration of visual, haptic, auditory and motor information.

Read more:

Robotics and Artificial Intelligence | Computer Science …

Artificial Intelligence: Foundations of Computational Agents

We are currently planning a second edition of the book and are soliciting feedback from instructors, students, and other readers. We would appreciate any feedback you would like to provide.

Please email David and Alan any feedback you may have.

Artificial Intelligence: Foundations of Computational Agents, Cambridge University Press, 2010, is a book about the science of artificial intelligence (AI). It presents artificial intelligence as the study of the design of intelligent computational agents. The book is structured as a textbook, but it is accessible to a wide audience of professionals and researchers. In the last decades we have witnessed the emergence of artificial intelligence as a serious science and engineering discipline. This book provides the first accessible synthesis of the field aimed at undergraduate and graduate students. It provides a coherent vision of the foundations of the field as it is today. It aims to provide that synthesis as an integrated science, in terms of a multi-dimensional design space that has been partially explored. As with any science worth its salt, artificial intelligence has a coherent, formal theory and a rambunctious experimental wing. The book balances theory and experiment, showing how to link them intimately together. It develops the science of AI together with its engineering applications.

We are requesting feedback on errors for this edition and suggestions for subsequent editions. Please email any comments to the authors. We appreciate feedback on references that we are missing (particularly good recent surveys), attributions that we should have made, what could be explained better, where we need more or better examples, topics that we should cover in more or less detail (although we are reluctant to add more topics; we’d rather explain fewer topics in more detail), topics that could be omitted, as well as typos. This is meant to be a textbook, not a summary of (recent) research.

Follow this link:

Artificial Intelligence: Foundations of Computational Agents

Artificial intelligence research and development – AI. Links999.

Many academic institutions, companies and corporations worldwide are involved in artificial intelligence research. Some focus exclusively on the hardware aspect of robotic machinery and androids – for example, the prosthetics involved in creating elbow and knee joints and the artificial intelligence needed to control them – while others focus on the workings of the artificial mind, developing deductive reasoning and other complex capabilities that mimic our own brain and its physical neural network.

Hardware issues of artificial intelligence can involve the control of a body, as in the case of an intelligent humanoid robot, but also the hard-wiring of a simulated brain, as with Asimov's "positronic" brain or the brain of "Data", the android in the Star Trek television series.

Software issues can involve logic, action-reaction, response, speech and visual recognition tasks and of course the programming languages needed to write these programs.

Designing and creating a neural network similar to our own is one of the most difficult aspects of creating an artificial intelligence (see also Neural Networks, Nanotechnology and Robotics). This approach requires both hardware and software or wetware, also known as biological hardware.
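
To give a sense of what the software side of an artificial neural network looks like in practice, here is a minimal, purely illustrative sketch of a single fully connected layer; the layer size, weights, and inputs are invented for the example and model nothing biological.

```python
import math
import random

# A toy fully connected layer of artificial "neurons": each neuron sums its
# weighted inputs, adds a bias, and passes the result through an activation
# function. Sizes, weights, and inputs are arbitrary illustrations.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # One output per neuron: activation(weighted sum of inputs + bias)
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
inputs = [0.5, -1.2, 0.3]                                              # a 3-value stimulus
weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(2)]  # two neurons
biases = [0.0, 0.1]
print(layer(inputs, weights, biases))   # the two neurons' firing strengths
```

Even this toy layer hints at the gap the text describes: the human brain contains tens of billions of neurons, each far richer than a weighted sum with an activation function.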

The human neural network is a vastly complex development spanning millions of years of biological evolution with the core going back maybe a billion years or more, to the very first “life” form.

Most parts of this network are autonomous and require no conscious thought. If we had to consciously tell our bodies to breathe air, pump blood or instruct muscles to contract or relax for movement and other bodily functions, we wouldn’t be here. It would be impossible.

Thus much of our functioning is subconscious and autonomous, with only our reasoning mind, and our “self”, whatever that may be, in need of constant attention.

Designing an artificial intelligence of this complexity is not possible with our current technological knowledge and we may never achieve anything closely resembling it. (Unless, of course, we design intelligent machines to do it for us.)

Do we need artificial intelligence?

With a growing world population, many of which are unemployed and uneducated, do we really need artificial intelligences that cost billions to research and to build? Wouldn’t it be better to spend all that money on developing the human condition instead?

The simple answer would be yes. To create a more level playing field for humanity, we really need to educate those who lack education and provide meaningful employment for them. With all that brain power available, who needs artificial intelligence? But that is easier said than done.

Until we have a unified world government that would allocate resources on a more equal scale it doesn’t seem likely. Countries with the highest unemployment and lowest educational level generally suffer from inept and corrupt governments, and under current international agreements, there is no interference in internal affairs.

The best we can do is let advanced nations develop advanced technologies, such as artificial intelligence, and use these developments at some future time to aid our poorer fellow humans.

So perhaps we don’t need artificial intelligence but it may provide the way to a better future for all of us.

See also: Neural Networks, Nanotechnology and Robotics.

Read more from the original source:

Artificial intelligence research and development – AI. Links999.

The Personality Forge AI – Artificial Intelligence Chat Bots

350 million total messages

April 8, 2016

Data and Code Upgrade

I just completed a data and code upgrade. It touched on every part of the site. I did thorough testing and fixed everything I found (and fixed and upgraded many areas, too) but if you run into anything weird, from broken links to troubles with AIScript, memories, etc, for the time being please email me directly at benji@personalityforge.com rather than using the Bug Reporting tool. Thanks!

March 23, 2016

Backup Your Bots From Time to Time

Remember to export your chat bot from time to time when you're working on it. This allows you to restore it should anything happen – be it a rare server crash with data loss, or accidentally deleting Keyphrases or Seeks during development.

Welcome to The Personality Forge, an advanced artificial intelligence platform for creating chat bots. The Personality Forge's AI Engine integrates memories, emotions, knowledge of hundreds of thousands of words, sentence structure, unmatched pattern-matching capabilities, and a scripting language called AIScript. It's easy enough for someone without any programming experience to use. Come on in, chat with bots and botmasters, then create your own artificial intelligence personalities and turn them loose to chat with both real people and other chat bots. Here you'll find thousands of AI personalities, including bartenders, college students, flirts, rebels, adventurers, mythical creatures, gods, aliens, cartoon characters, and even recreations of real people.

Personality Forge chat bots form emotional relationships with and have memories about both people and other bots. True language comprehension is in constant development, as is a customizable Flash interface. Transcripts of every bot’s conversations are kept so you can read what your bot has said, and see their emotional relationships with other people and other bots.
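
As a rough illustration of what keyphrase-style pattern matching with simple memories can look like, here is a minimal sketch. The patterns, the memory dictionary, and the canned replies are invented for this example; this is not AIScript and not the Personality Forge's actual engine.

```python
import re

# Invented keyphrase patterns and replies; a real engine would have many
# thousands of these, plus ranking, emotions, and persistent memories.
KEYPHRASES = [
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}!"),
    (re.compile(r"\bhow are you\b", re.I), "I'm feeling cheerful today."),
]

MEMORY = {}  # facts remembered about the person being chatted with

def reply(message):
    for pattern, template in KEYPHRASES:
        match = pattern.search(message)
        if match:
            if match.groups():
                MEMORY["name"] = match.group(1)   # remember the captured name
            return template.format(*match.groups())
    return "Tell me more."

print(reply("Hello, my name is Ada"))   # -> Nice to meet you, Ada!
print(MEMORY)                           # -> {'name': 'Ada'}
```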

Here is the original post:

The Personality Forge AI – Artificial Intelligence Chat Bots

AI Overview | AITopics

Broad Discussions of Artificial Intelligence

Exactly what the computer provides is the ability not to be rigid and unthinking but, rather, to behave conditionally. That is what it means to apply knowledge to action: It means to let the action taken reflect knowledge of the situation, to be sometimes this way, sometimes that, as appropriate…

In sum, technology can be controlled especially if it is saturated with intelligence to watch over how it goes, to keep accounts, to prevent errors, and to provide wisdom to each decision. — Allen Newell, from Fairy Tales

If you are looking for an answer to the question What is Artificial Intelligence? and you only have a minute, then here’s the definition the Association for the Advancement of Artificial Intelligence offers on its home page: “the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.”

The National Academy of Sciences offers the following short summary of the field: "One of the great aspirations of computer science has been to understand and emulate capabilities that we recognize as expressive of intelligence in humans. Research has addressed tasks ranging from our sensory interactions with the world (vision, speech, locomotion) to the cognitive (analysis, game playing, problem solving). This quest to understand human intelligence in all its forms also stimulates research whose results propagate back into the rest of computer science – for example, lists, search, and machine learning." From Section 6: Achieving Intelligence of the 2004 report by the Computer Science and Telecommunications Board (CSTB), Computer Science: Reflections on the Field, Reflections from the Field.

However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) . . .

Here is the original post:

AI Overview | AITopics

artificial intelligence – Business Intelligence

20 February 2015 – Myth Busting Artificial Intelligence

We've all been seeing hype and excitement around artificial intelligence, big data, machine learning and deep learning. There's also a lot of confusion about what they really mean and what's actually possible today. These terms are used arbitrarily and sometimes interchangeably, which further perpetuates confusion. So, let

Read more:

artificial intelligence – Business Intelligence

Cleverbot.com – a clever bot – speak to an AI with some …

About Cleverbot

The site Cleverbot.com started in 2006, but the AI was ‘born’ in 1988, when Rollo Carpenter saw how to make his machine learn. It has been learning ever since!

Things you say to Cleverbot today may influence what it says to others in future. The program chooses how to respond to you fuzzily, and contextually, the whole of your conversation being compared to the millions that have taken place before.
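
Here is a minimal, purely illustrative sketch of that retrieval idea, assuming a tiny hand-made corpus of past exchanges and a generic string-similarity measure; it shows the general approach described above, not Cleverbot's actual implementation.

```python
from difflib import SequenceMatcher

# A tiny hand-made store of past exchanges; the real system compares against
# millions of logged conversations.
PAST_EXCHANGES = [
    ("hello there", "Hi! How are you today?"),
    ("what is your name", "People call me a chat bot."),
    ("do you like music", "I enjoy talking about music, yes."),
]

def similarity(a, b):
    # A generic fuzzy string match; any context-aware similarity would do here.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def respond(conversation, corpus=PAST_EXCHANGES, window=3):
    # Use the last few turns as context, find the stored prompt that most
    # resembles it, and reuse the reply recorded after that prompt.
    context = " ".join(conversation[-window:])
    _, best_reply = max(corpus, key=lambda pair: similarity(context, pair[0]))
    return best_reply

print(respond(["hi", "hello there"]))   # -> Hi! How are you today?
```

In practice, the interesting work lies in the similarity measure and the sheer scale of the conversation store, which is presumably where the fuzzy, contextual behaviour comes from.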

Many people say there is no bot – that it is connecting people together, live. The AI can seem human because it says things real people do say, but it is always software, imitating people.

You’ll have seen scissors on Cleverbot. Using them you can share snippets of chats with friends on social networks. Now you can share snips at Cleverbot.com too!

When you sign in to Cleverbot on this blue bar, you can:

Tweak how the AI responds – 3 different ways!
Keep a history of multiple conversations
Switch between conversations
Return to a conversation on any machine
Publish snippets – snips! – for the world to see
Find and follow friends
Be followed yourself!
Rate snips, and see the funniest of them
Reply to snips posted by others
Vote on replies, from awful to great!
Choose not to show the scissors

Continue reading here:

Cleverbot.com – a clever bot – speak to an AI with some …

Artificial Intelligence: Learning to Learn – Education

2011 VIRTUAL SCIENCE FAIR ENTRY

The purpose of this project was to determine the best algorithm for strategy games.

Computer Science

9th Grade

Requires technical knowledge

There are no costs associated with this project.

There are no safety hazards associated with this project.

The goal of endowing inanimate objects with human-like intelligence has a long history. Modern computers can perform millions of calculations per second, but even with all of this remarkable speed, true logic has yet to be achieved. With every year that passes, computers come closer to achieving this goal, or at least to mimicking true logic. Game strategy is one of the most common applications of artificial intelligence. An algorithm is a set of instructions a computer follows to achieve a task or goal. There are three main types of algorithms for intelligence in games: Alpha-beta, learning, and hybrids. Chess was one of the first games to implement artificial intelligence, with the discovery of the Alpha-beta algorithm in 1958 by scientists at Carnegie Mellon University (Friedel, n.d.). The Alpha-beta algorithm was the first feasible algorithm that could be used for strategy in games. As artificial intelligence in games evolved and became more complex, a more modern learning approach was adopted. Even though there have been major advancements in both learning-style algorithms and Alpha-beta algorithms, a hybrid utilizing elements of both results in a stronger, more efficient, and faster program. On the forefront of the quest for artificial intelligence, these algorithms play vastly important roles.

The Alpha-beta algorithm has a long history of success. The first use of the algorithm in a game came in the 1970s and 1980s with the Belle computer. Belle remained the champion of computer chess until it was superseded by the Cray supercomputer (Friedel, n.d.). Belle was the first computer to succeed using early forms of the Alpha-beta algorithm. Deep Blue later used the algorithm to defeat world chess champion Garry Kasparov; this was a major development for the artificial intelligence community, as it was the first time in history a computer had beaten a reigning world champion in a standard match. Over time, the algorithm has been revised, updated, and modified to the point where several versions exist that all use the same core principles.

The Alpha-beta algorithm uses brute-force calculation (thousands of positions every second) to make decisions. It relies on the minimax principle (one player tries to maximize the score while the other tries to minimize it) together with efficient evaluation techniques. Alpha-beta is a game-tree searcher: it forms a hierarchy of possible moves down to a defined depth (e.g., six moves). In some variations, eliminating symmetries and rotations is used to reduce the size of the game tree (Lin, 2003). After the tree is formed, the algorithm evaluates each position in it using a set of rules intended to make the computer play stronger; these rules are called heuristics. The reason Alpha-beta is fast yet strong is that it ignores portions of the game tree (Lin, 2003). It does this by tracking the best move found so far at each level and skipping any move already proven unable to beat it, along with everything beneath that move. Alpha-beta can calculate two levels of moves with 900 positions in 0.018 seconds, three levels with 27,000 positions in 0.54 seconds, four levels with 810,000 positions in 16.2 seconds, and so on. These efficiency-improving techniques are responsible for the small calculation times and improved game strategy the algorithm provides.
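
To make the pruning idea concrete, here is a minimal sketch of depth-limited minimax with alpha-beta cutoffs. The game is a deliberately tiny Nim-like stand-in (players alternately take one to three stones; whoever takes the last stone wins) so the search is runnable end to end; a chess program would differ in its move generator, evaluation heuristics, and sheer scale, but not in the shape of the search.

```python
# Depth-limited minimax with alpha-beta pruning on a tiny Nim-like game.
# State is just the number of stones left; a move removes 1, 2, or 3 stones,
# and whoever takes the last stone wins.

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def alphabeta(stones, depth, alpha, beta, maximizing):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0                          # depth cutoff: neutral heuristic score
    if maximizing:
        best = float("-inf")
        for m in legal_moves(stones):
            best = max(best, alphabeta(stones - m, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:             # beta cutoff: opponent will avoid this branch
                break
        return best
    best = float("inf")
    for m in legal_moves(stones):
        best = min(best, alphabeta(stones - m, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:                 # alpha cutoff: we will avoid this branch
            break
    return best

def best_move(stones, depth=12):
    # Pick the root move with the highest backed-up score for the player to move.
    return max(legal_moves(stones),
               key=lambda m: alphabeta(stones - m, depth - 1,
                                       float("-inf"), float("inf"), False))

print(best_move(10))    # prints 2: leaving 8 stones is a losing position for the opponent
```

The two cutoffs are the "ignored portions" described above: once one reply refutes a candidate move, the remaining replies to that move are never examined.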

Learning-style algorithms are another popular type of algorithm for game use, and they aren't necessarily a recent creation. They have been in use for approximately thirty years, but met with limited success until recently. In this approach, an algorithm uses its own experiences, or a large database of pre-played games, to determine the best moves. Unfortunately, learning algorithms have also incorporated the bad strategies utilized by novice players. Over time, improvements have been made so that an algorithm can be a threat to intermediate players in most action games; however, learning algorithms are often unsuccessful in games requiring strategic play. The Chinook program uses the most notable learning algorithm. The program spent eighteen years calculating every possible move for the game of checkers. But Chinook's algorithm is considered by some not to be a true learning algorithm, since it already knows all of the possible outcomes for every move (Chang, 2007). Chinook does, however, adjust its playing style for each player's strategy; this is where its element of learning comes into play (Chang, 2007). Learning algorithms are considered closer to true intelligence than algorithms that rely on brute-force calculation such as Alpha-beta. Compared to pure calculation algorithms, they play games more like humans and even show very limited aspects of creativity and self-formed strategy.
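
In the same spirit, here is a minimal sketch of a learning-style chooser for the Nim-like game from the previous sketch: it keeps win/loss statistics from previously played games and plays the move with the best observed win rate, falling back to a random move in positions it has never seen. The table format and the tiny hand-recorded games are illustrative assumptions, not Chinook's actual method.

```python
import random

class ExperienceTable:
    """Chooses moves from win/loss statistics gathered in earlier games."""

    def __init__(self):
        self.stats = {}  # (state, move) -> [wins, plays]

    def record_game(self, trajectory, winner):
        # trajectory: list of (player, state, move) tuples from one finished game
        for player, state, move in trajectory:
            wins, plays = self.stats.get((state, move), [0, 0])
            self.stats[(state, move)] = [wins + (player == winner), plays + 1]

    def choose(self, state, moves):
        # Prefer the move with the best observed win rate; fall back to a
        # random move in positions never seen before.
        seen = [(m, self.stats[(state, m)]) for m in moves if (state, m) in self.stats]
        if not seen:
            return random.choice(moves)
        return max(seen, key=lambda entry: entry[1][0] / entry[1][1])[0]

# Usage sketch: record two finished games of the Nim-like game, then query.
table = ExperienceTable()
table.record_game([(0, 10, 2), (1, 8, 3), (0, 5, 1), (1, 4, 3), (0, 1, 1)], winner=0)
table.record_game([(0, 10, 1), (1, 9, 1), (0, 8, 3), (1, 5, 1), (0, 4, 3), (1, 1, 1)], winner=1)
print(table.choose(10, [1, 2, 3]))   # -> 2, the move at 10 stones seen in a won game
```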

A hybrid algorithm combines the brute-force style of the Alpha-beta algorithm with the flexibility of the learning-style algorithm. This method ensures that the full ability of the computer is used while remaining free to adapt to each player's individual game style. Chinook successfully utilized this technique to make a program that is literally unbeatable. Because of the Chinook program, the game of checkers has been solved: no matter how well an opponent plays, the best they can do is end in a draw (Chang, 2007).

Other champion programs have used just one style of algorithm in order to win. As a result, no particular algorithm has been measured or proven to be dominant. Game developers choose which algorithm to use based largely on personal preference and on the lack of consensus within the artificial intelligence community as to which algorithm is superior. There are, however, weaknesses that suggest which algorithm will prove inferior. For example, the Alpha-beta algorithm does not generate all possible moves from the current condition of the game. Alpha-beta assumes that the opponent will make the best possible move available. If a player makes a move that is not in their best interest, the algorithm will not know how to respond because that move's game tree has not been calculated. The opponent can trick the algorithm by making sub-par moves, forcing it to recalculate. It is also important to note that the Alpha-beta algorithm can use tremendous amounts of time when calculating more than a couple of moves ahead. The learning algorithm has its flaws, too. If it encounters an unknown strategy, the algorithm will be helpless against its opponent's moves. The most likely way to minimize these flaws is to combine the algorithms into a hybrid. If the hybrid encounters an unknown strategy, it can use the Alpha-beta style game tree to determine the possible moves from that point. Likewise, if the opponent uses a move not calculated by the brute-force method, it can fall back on learned strategies to defend itself. The hybrid algorithm will be faster and have better winning strategies than either the Alpha-beta or the learning-style algorithm.
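
A minimal sketch of that hybrid, assembled from the two earlier sketches (the alpha-beta search and the experience table): play from experience when the position has been seen often enough, and fall back to brute-force search otherwise. The coverage threshold is an arbitrary illustration.

```python
def hybrid_move(table, stones, depth=12, min_plays=20):
    # Play from experience when the position is well covered; otherwise fall
    # back to the brute-force alpha-beta search, so an unfamiliar strategy
    # cannot leave the program helpless.
    moves = legal_moves(stones)
    plays = sum(table.stats.get((stones, m), [0, 0])[1] for m in moves)
    if plays >= min_plays:
        return table.choose(stones, moves)
    return best_move(stones, depth)
```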

The experiment clearly demonstrated that the Alpha-beta algorithm won more games, took less time to generate a move, and took fewer moves to win. It was clearly superior to both the hybrid and learning algorithms.

This chart shows the percent each algorithm won out of 9,000 games of checkers. Alpha-beta scored the highest percentage of wins, the hybrid came in second, and the learning algorithm scored the lowest percentage.

This chart displays the average time it took each algorithm to generate a move. In this situation the lowest-scoring algorithm performed the best.

This chart represents the average number of moves it took each algorithm to win a game. As with the previous chart, the lowest scoring algorithm performed the best.

Evidence gathered from the experiments showed that the Alpha-beta algorithm was far superior to both the hybrid and learning algorithms. This conclusion rests on three distinct factors: the percentage of wins, the average time taken to make a move, and the average number of moves needed to win a game. The Alpha-beta algorithm performed the best in every category. The hybrid performed better than the learning algorithm but worse than Alpha-beta, and the learning algorithm performed the worst.

This experiment included 9,000 trials; therefore, the experimental error was minimal. The only measured value that needed to be considered for error was the average amount of time each algorithm used to generate a move. The computer can record the precise time, but the time was rounded so that the time-keeping process would not affect the outcome of an experiment. However, the difference between the averages was not at all significant, and even if the computer had recorded the results with absolute precision the conclusion would remain unchanged. Another aspect to consider was the possibility of a recursion loop (when the algorithm gets stuck in a repeating loop). Although the algorithm will break out of the loop, such a loop would cause the average time spent on a move to go up considerably for that game. The last error that needed to be considered was inefficiency in an algorithm's programming: if an algorithm were programmed in an inefficient way, it would obviously damage the overall performance.

Chang, K. (2007, July 19). Computer checkers program is invincible. Retrieved from http://www.nytimes.com/2007/07/19/science/19cnd-checkers.html

Frayn, C. (2005, August 1). Computer chess programming theory. Retrieved from http://www.frayn.net/beowulf/theory.html

Friedel, F. (n.d.). A short history of computer chess. Retrieved from http://www.chessbase.com/columns/column.asp?pid=102

Lin, Y. (2003). Game trees. Retrieved from http://www.ocf.berkeley.edu/~yosenl/extras/alphabeta/alphabeta.html

For a demo of the program email connerruhl at me.com

Continue reading here:

Artificial Intelligence: Learning to Learn – Education

