{"id":173549,"date":"2016-08-30T23:03:18","date_gmt":"2016-08-31T03:03:18","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/history-of-artificial-intelligence-wikipedia-the-free\/"},"modified":"2016-08-30T23:03:18","modified_gmt":"2016-08-31T03:03:18","slug":"history-of-artificial-intelligence-wikipedia-the-free","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/history-of-artificial-intelligence-wikipedia-the-free\/","title":{"rendered":"History of artificial intelligence &#8211; Wikipedia, the free &#8230;"},"content":{"rendered":"<p><p>    The history of artificial intelligence (AI) began    in antiquity, with myths, stories and rumors    of artificial beings endowed with intelligence or consciousness    by master craftsmen; as Pamela McCorduck writes, AI began with    \"an ancient wish to forge the gods.\"  <\/p>\n<p>    The seeds of modern AI were planted by classical philosophers    who attempted to describe the process of human thinking as the    mechanical manipulation of symbols. This work culminated in the    invention of the programmable digital computer in the 1940s, a    machine based on the abstract essence of mathematical    reasoning. This device and the ideas behind it inspired a    handful of scientists to begin seriously discussing the    possibility of building an electronic brain.  <\/p>\n<p>    The Turing    test was proposed by British mathematician Alan Turing in his    1950 paper Computing Machinery and    Intelligence, which opens with the words: \"I propose to    consider the question, 'Can machines think?'\" The term    'Artificial Intelligence'    was created at a conference held at Dartmouth    College in 1956.[2]Allen Newell,    J. C. Shaw,    and Herbert A. Simon pioneered the newly    created artificial intelligence field with the Logic Theory Machine (1956), and the    General Problem Solver in    1957.[3] In 1958, John McCarthy and    Marvin    Minsky started the MIT Artificial Intelligence lab with    $50,000.[4] John McCarthy also created    LISP    in the summer of 1958, a programming language still    important in artificial intelligence research.[5]  <\/p>\n<p>    In 1973, in response to the criticism of James    Lighthill and ongoing pressure from congress, the U.S. and British Governments stopped funding    undirected research into artificial intelligence. Seven years    later, a visionary initiative by the Japanese Government inspired    governments and industry to provide AI with billions of    dollars, but by the late 80s the investors became disillusioned    and withdrew funding again.  <\/p>\n<p>    McCorduck (2004) writes    \"artificial intelligence in one    form or another is an idea that has pervaded Western    intellectual history, a dream in urgent need of being    realized,\" expressed in humanity's myths, legends, stories,    speculation and clockwork automatons.  <\/p>\n<p>    Mechanical men and artificial beings appear in Greek    myths, such as the golden robots of Hephaestus and    Pygmalion's Galatea.[7] In the Middle    Ages, there were rumors of secret mystical or alchemical means of placing    mind into matter, such as Jbir ibn Hayyn's Takwin, Paracelsus' homunculus and    Rabbi Judah Loew's Golem.[8] By the 19th    century, ideas about artificial men and thinking machines were    developed in fiction, as in Mary Shelley's Frankenstein    or Karel    apek's R.U.R.    
(Rossum's Universal Robots), and speculation, such as    Samuel Butler's \"Darwin among the Machines.\" AI    has continued to be an important element of science fiction    into the present.  <\/p>\n<p>    Realistic humanoid automatons were built by craftsmen from every    civilization, including Yan    Shi,[11] Hero of    Alexandria,[12] Al-Jazari and Wolfgang von Kempelen.[14] The oldest known automatons were the    sacred    statues of ancient Egypt and Greece. The    faithful believed that craftsmen had imbued these figures with    very real minds, capable of wisdom and emotion. Hermes    Trismegistus wrote that \"by discovering the true nature of    the gods, man has been able to reproduce it.\"[15][16]  <\/p>\n<p>    Artificial intelligence is based on the assumption that the    process of human thought can be mechanized. The study of    mechanical, or \"formal\", reasoning has a long history. Chinese, Indian and Greek philosophers all developed    structured methods of formal deduction in the first millennium    BCE. Their ideas were developed over the centuries by    philosophers such as Aristotle (who gave a formal analysis of the    syllogism),    Euclid (whose    Elements was a model of formal    reasoning), Muslim mathematician al-Khwārizmī (who    developed algebra    and gave his name to \"algorithm\") and European scholastic    philosophers such as William of Ockham and Duns    Scotus.[17]  <\/p>\n<p>    Majorcan philosopher Ramon Llull (1232–1315) developed several    logical machines devoted to the production of knowledge    by logical means;[18] Llull    described his machines as mechanical entities that could    combine basic and undeniable truths by simple logical    operations, produced by the machine by mechanical means, in    such ways as to produce all the possible knowledge.[19] Llull's work had a great    influence on Gottfried Leibniz,    who redeveloped his ideas.[20]  <\/p>\n<p>    In the 17th century, Leibniz, Thomas Hobbes    and René Descartes explored the possibility    that all rational thought could be made as systematic as    algebra or geometry.[21] Hobbes    famously wrote in Leviathan: \"reason is nothing but    reckoning\".[22] Leibniz    envisioned a universal language of reasoning (his characteristica    universalis) which would reduce argumentation to    calculation, so that \"there would be no more need of    disputation between two philosophers than between two    accountants. For it would suffice to take their pencils in    hand, to sit down to their slates, and to say to each other (with a    friend as witness, if they liked): Let us    calculate.\"[23] These philosophers had begun to    articulate the physical symbol system hypothesis    that would become the guiding faith of AI research.  <\/p>\n<p>    In the 20th century, the study of mathematical logic provided the    essential breakthrough that made artificial intelligence seem    plausible. The foundations had been set by such works as    Boole's    The Laws of Thought and Frege's    Begriffsschrift. Building on Frege's    system, Russell and Whitehead presented a formal    treatment of the foundations of mathematics in their    masterpiece, the Principia Mathematica in    1913.
Inspired by Russell's success, David    Hilbert challenged mathematicians of the 1920s and 30s to    answer this fundamental question: \"can all of mathematical    reasoning be formalized?\"[17] His question was    answered by Gödel's incompleteness proof,    Turing's    machine and Church's Lambda    calculus.[17][24] Their answer was surprising in    two ways.  <\/p>\n<p>    First, they proved that there were, in fact, limits to what    mathematical logic could accomplish. But second (and more    important for AI) their work suggested that, within these    limits, any form of mathematical reasoning could be    mechanized. The Church-Turing    thesis implied that a mechanical device, shuffling symbols    as simple as 0 and 1, could imitate any conceivable process of    mathematical deduction. The key insight was the Turing    machine, a simple theoretical construct that captured the    essence of abstract symbol manipulation. This invention would    inspire a handful of scientists to begin discussing the    possibility of thinking machines.[17][26]  <\/p>\n<p>    Calculating machines were built in antiquity and improved    throughout history by many mathematicians, including (once    again) philosopher Gottfried Leibniz.    In the early 19th century, Charles Babbage designed a    programmable computer (the Analytical Engine), although it    was never built. Ada Lovelace speculated that the machine    \"might compose elaborate and scientific pieces of music of any    degree of complexity or extent\".[27] (She is often    credited as the first programmer because of a set of notes    she wrote that completely detail a method for calculating    Bernoulli numbers with the Engine.)  <\/p>\n<p>    The first modern computers were the massive code breaking    machines of the Second World War    (such as Z3, ENIAC and Colossus). The latter two of these    machines were based on the theoretical foundation laid by    Alan    Turing[28] and developed by John von    Neumann.[29]  <\/p>\n<p>    In the 1940s and 50s, a handful of scientists from a variety of    fields (mathematics, psychology, engineering, economics and    political science) began to discuss the possibility of creating    an artificial brain. The field of artificial intelligence research    was founded as an academic discipline in 1956.  <\/p>\n<p>    The earliest research into thinking machines was inspired by a    confluence of ideas that became prevalent in the late 30s, 40s    and early 50s. Recent research in neurology had shown that the brain was an    electrical network of neurons that fired in all-or-nothing pulses.    Norbert    Wiener's cybernetics described control and    stability in electrical networks. Claude    Shannon's information theory described digital    signals (i.e., all-or-nothing signals). Alan Turing's    theory of computation showed that    any form of computation could be described digitally. The close    relationship between these ideas suggested that it might be    possible to construct an electronic    brain.[30]  <\/p>\n<p>    Examples of work in this vein include robots such as W. Grey Walter's turtles and    the Johns Hopkins Beast. These machines    did not use computers, digital electronics or symbolic    reasoning; they were controlled entirely by analog    circuitry.[31]  <\/p>\n<p>    Walter    Pitts and Warren McCulloch analyzed    networks of idealized artificial neurons and showed how they    might perform simple logical functions. They were the first to    describe what later researchers would call a neural network.[32] One of    the students inspired by Pitts and McCulloch was a young Marvin Minsky,    then a 24-year-old graduate student. In 1951 (with Dean    Edmonds) he built the first neural net machine, the SNARC.[33] Minsky was to become one of the most    important leaders and innovators in AI for the next 50 years.  <\/p>\n
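<p>    The idea behind those idealized neurons is easy to convey in modern code. The sketch below is an illustrative present-day reconstruction, not McCulloch and Pitts' original notation: each unit sums weighted all-or-nothing inputs and fires when the sum reaches a threshold, which is enough to realize simple logical functions such as AND, OR and NOT.  <\/p>\n<pre><code># Minimal sketch of a McCulloch-Pitts style threshold unit (illustrative only).\n# Inputs and outputs are all-or-nothing (0 or 1), as in the idealized model.\n\ndef threshold_unit(inputs, weights, threshold):\n    # Fire (return 1) when the weighted sum of the inputs reaches the threshold.\n    total = sum(w * x for w, x in zip(weights, inputs))\n    return 1 if total &gt;= threshold else 0\n\ndef AND(x1, x2):\n    return threshold_unit([x1, x2], [1, 1], threshold=2)\n\ndef OR(x1, x2):\n    return threshold_unit([x1, x2], [1, 1], threshold=1)\n\ndef NOT(x1):\n    return threshold_unit([x1], [-1], threshold=0)\n\n# Print truth tables for the three logical functions.\nfor a in (0, 1):\n    for b in (0, 1):\n        print(a, b, AND(a, b), OR(a, b), NOT(a))\n<\/code><\/pre>\n<p>    Because such units can be wired together, networks of them can compute any Boolean function, which is part of what made the model interesting to later researchers.  <\/p>\n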
<p>    In 1950 Alan    Turing published a landmark paper in    which he speculated about the possibility of creating machines    that think.[34] He noted that \"thinking\" is    difficult to define and devised his famous Turing Test. If a machine could carry on a    conversation (over a teleprinter) that was indistinguishable from    a conversation with a human being, then it was reasonable to    say that the machine was \"thinking\". This simplified version of    the problem allowed Turing to argue convincingly that a    \"thinking machine\" was at least plausible and the paper    answered all the most common objections to the    proposition.[35] The Turing Test was the    first serious proposal in the philosophy of    artificial intelligence.  <\/p>\n<p>    In 1951, using the Ferranti Mark 1 machine of the University of Manchester,    Christopher Strachey wrote a    checkers program and Dietrich Prinz wrote one for    chess.[36] Arthur Samuel's checkers program,    developed in the middle 50s and early 60s, eventually achieved    sufficient skill to challenge a respectable amateur.[37] Game AI would continue to be    used as a measure of progress in AI throughout its history.  <\/p>\n<p>    When access to digital computers    became possible in the middle fifties, a few scientists    instinctively recognized that a machine that could manipulate    numbers could also manipulate symbols and that the manipulation    of symbols could well be the essence of human thought. This was    a new approach to creating thinking machines.[38]  <\/p>\n<p>    In 1955, Allen Newell and (future Nobel Laureate)    Herbert A. Simon created the \"Logic    Theorist\" (with help from J. C. Shaw). The program would eventually    prove 38 of the first 52 theorems in Russell    and Whitehead's Principia Mathematica, and find    new and more elegant proofs for some.[39] Simon    said that they had \"solved the venerable mind\/body problem, explaining how a    system composed of matter can have the properties of    mind.\"[40] (This was an early statement of    the philosophical position John Searle would later call \"Strong AI\": that machines can contain    minds just as human bodies do.)[41]  <\/p>\n<p>    The Dartmouth Conference of    1956[42] was organized by Marvin Minsky,    John McCarthy and two    senior scientists: Claude Shannon and Nathan Rochester    of IBM. The proposal for    the conference included this assertion: \"every aspect of    learning or any other feature of intelligence can be so    precisely described that a machine can be made to simulate    it\".[43] The participants included    Ray    Solomonoff, Oliver Selfridge, Trenchard    More, Arthur Samuel, Allen Newell and Herbert A.
Simon, all of whom would create important programs during    the first decades of AI research.[44] At the    conference Newell and Simon debuted the \"Logic    Theorist\" and McCarthy persuaded the attendees to accept    \"Artificial Intelligence\" as the name of the field.[45] The 1956 Dartmouth conference    was the moment that AI gained its name, its mission, its first    success and its major players, and is widely considered the    birth of AI.[46]  <\/p>\n<p>    The years after the Dartmouth conference were an era of    discovery, of sprinting across new ground. The programs that    were developed during this time were, to most people, simply    \"astonishing\":[47] computers were solving algebra    word problems, proving theorems in geometry and learning to    speak English. Few at the time would have believed that such    \"intelligent\" behavior by machines was possible at all.[48] Researchers expressed an intense    optimism in private and in print, predicting that a fully    intelligent machine would be built in less than 20    years.[49] Government agencies like    ARPA poured money into    the new field.[50]  <\/p>\n<p>    There were many successful programs and new directions in the    late 50s and 1960s. Among the most influential were these:  <\/p>\n<p>    Many early AI programs used the same basic algorithm. To achieve    some goal (like winning a game or proving a theorem), they    proceeded step by step towards it (by making a move or a    deduction) as if searching through a maze, backtracking    whenever they reached a dead end. This paradigm was called    \"reasoning as search\".[51]  <\/p>\n<p>    The principal difficulty was that, for many problems, the    number of possible paths through the \"maze\" was simply    astronomical (a situation known as a \"combinatorial explosion\").    Researchers would reduce the search space by using heuristics or \"rules of thumb\" that would    eliminate those paths that were unlikely to lead to a    solution.[52]  <\/p>\n<p>    Newell    and Simon tried to capture a general version    of this algorithm in a program called the \"General Problem Solver\".[53] Other \"searching\" programs were    able to accomplish impressive tasks like solving problems in    geometry and algebra, such as Herbert Gelernter's Geometry    Theorem Prover (1958) and SAINT,    written by Minsky's student James    Slagle (1961).[54] Other    programs searched through goals and subgoals to plan actions,    like the STRIPS    system developed at Stanford to control the    behavior of their robot Shakey.[55]  <\/p>\n<p>    An important goal of AI research is to allow computers to    communicate in natural languages like    English. An early success was Daniel Bobrow's    program STUDENT, which could solve    high school algebra word problems.[56]  <\/p>\n<p>    A semantic net represents concepts (e.g.    \"house\",\"door\") as nodes and relations among concepts (e.g.    \"has-a\") as links between the nodes. The first AI program to    use a semantic net was written by Ross    Quillian[57] and the most successful (and    controversial) version was Roger Schank's Conceptual dependency    theory.[58]  <\/p>\n<p>    Joseph Weizenbaum's ELIZA could carry out    conversations that were so realistic that users occasionally    were fooled into thinking they were communicating with a human    being and not a program. But in fact, ELIZA had no idea what    she was talking about. 
She simply gave a canned    response or repeated back what was said to her, rephrasing    her response with a few grammar rules. ELIZA was the first    chatterbot.[59]  <\/p>\n<p>    In the late 60s, Marvin Minsky and Seymour    Papert of the MIT AI Laboratory proposed that AI research    should focus on artificially simple situations known as    micro-worlds. They pointed out that in successful sciences like    physics, basic principles were often best understood using    simplified models like frictionless planes or perfectly rigid    bodies. Much of the research focused on a \"blocks world,\"    which consists of colored blocks of various shapes and sizes    arrayed on a flat surface.[60]  <\/p>\n<p>    This paradigm led to innovative work in machine    vision by Gerald Sussman (who    led the team), Adolfo Guzman, David Waltz (who    invented \"constraint    propagation\"), and especially Patrick    Winston. At the same time, Minsky and Papert built    a robot arm that could stack blocks, bringing the blocks world    to life. The crowning achievement of the micro-world program    was Terry    Winograd's SHRDLU. It could communicate in ordinary English    sentences, plan operations and execute them.[61]  <\/p>\n<p>    The first generation of AI researchers made these predictions    about their work:  <\/p>\n<p>    In June 1963, MIT received a $2.2 million grant from the newly    created Advanced Research Projects Agency (later known as    DARPA). The money was    used to fund project MAC which subsumed the \"AI    Group\" founded by Minsky and McCarthy five years    earlier. DARPA    continued to provide three million dollars a year until the    70s.[66]DARPA made similar grants to Newell and    Simon's program at CMU and to the Stanford AI    Project (founded by John McCarthy in    1963).[67] Another important AI laboratory    was established at Edinburgh    University by Donald Michie in 1965.[68] These    four institutions would continue to be the main centers of AI    research (and funding) in academia for many years.[69]  <\/p>\n<p>    The money was proffered with few strings attached: J. C. R.    Licklider, then the director of ARPA, believed that his organization should    \"fund people, not projects!\" and allowed researchers to pursue    whatever directions might interest them.[70] This    created a freewheeling atmosphere at MIT that gave birth to the    hacker    culture,[71] but this \"hands off\" approach    would not last.  <\/p>\n<p>    In the 70s, AI was subject to critiques and financial setbacks.    AI researchers had failed to appreciate the difficulty of the    problems they faced. Their tremendous optimism had raised    expectations impossibly high, and when the promised results    failed to materialize, funding for AI disappeared.[72] At the same time, the field of    connectionism (or neural nets) was shut down almost completely    for 10 years by Marvin Minsky's devastating criticism of perceptrons.[73] Despite the    difficulties with public perception of AI in the late 70s, new    ideas were explored in logic programming, commonsense reasoning and many    other areas.[74]  <\/p>\n<p>    In the early seventies, the capabilities of AI programs were    limited. Even the most impressive could only handle trivial    versions of the problems they were supposed to solve; all the    programs were, in some sense, \"toys\".[75] AI    researchers had begun to run into several fundamental limits    that could not be overcome in the 1970s. 
Although some of these    limits would be conquered in later decades, others still stymie    the field to this day.[76]  <\/p>\n<p>    The agencies which funded AI research (such as the British government, DARPA and NRC) became    frustrated with the lack of progress and eventually cut off    almost all funding for undirected research into AI. The pattern    began as early as 1966 when the ALPAC report appeared criticizing machine    translation efforts. After spending 20 million dollars, the    NRC ended all    support.[84] In 1973, the Lighthill    report on the state of AI research in England criticized    the utter failure of AI to achieve its \"grandiose objectives\"    and led to the dismantling of AI research in that    country.[85] (The report specifically    mentioned the combinatorial explosion problem    as a reason for AI's failings.)[86] DARPA was deeply disappointed    with researchers working on the Speech    Understanding Research program at CMU and canceled an annual    grant of three million dollars.[87] By 1974,    funding for AI projects was hard to find.  <\/p>\n<p>    Hans    Moravec blamed the crisis on the unrealistic predictions of    his colleagues. \"Many researchers were caught up in a web of    increasing exaggeration.\"[88] However,    there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing    pressure to fund \"mission-oriented direct research, rather than    basic undirected research\". Funding for the creative,    freewheeling exploration that had gone on in the 60s would not    come from DARPA.    Instead, the money was directed at specific projects with clear    objectives, such as autonomous tanks and battle management    systems.[89]  <\/p>\n<p>    Several philosophers had strong objections to the claims being    made by AI researchers. One of the earliest was John Lucas, who argued that    Gödel's    incompleteness theorem    showed that a formal system (such as a computer program)    could never see the truth of certain statements, while a human    being could.[90] Hubert Dreyfus ridiculed the broken    promises of the 60s and critiqued the assumptions of AI,    arguing that human reasoning actually involved very little    \"symbol processing\" and a great deal of embodied, instinctive, unconscious \"know how\".[91][92] John Searle's Chinese Room argument, presented in 1980,    attempted to show that a program could not be said to    \"understand\" the symbols that it uses (a quality called    \"intentionality\"). If the symbols have no    meaning for the machine, Searle argued, then the machine can    not be described as \"thinking\".[93]  <\/p>\n<p>    These critiques were not taken seriously by AI researchers,    often because they seemed so far off the point. Problems like    intractability and    commonsense knowledge seemed much    more immediate and serious. It was unclear what difference    \"know how\" or \"intentionality\" made to an actual computer    program. Minsky said of Dreyfus and Searle \"they    misunderstand, and should be ignored.\"[94]    Dreyfus, who taught at MIT, was given a cold shoulder: he later said    that AI researchers \"dared not be seen having lunch with    me.\"[95] Joseph Weizenbaum, the author of    ELIZA, felt his    colleagues' treatment of Dreyfus was unprofessional and childish.
Although he was an outspoken critic of Dreyfus' positions, he    \"deliberately made it plain that theirs was not the way to    treat a human being.\"[96]  <\/p>\n<p>    Weizenbaum began to have serious ethical doubts about AI when    Kenneth    Colby wrote DOCTOR,    a chatterbot    therapist. Weizenbaum was disturbed that Colby saw his mindless    program as a serious therapeutic tool. A feud began, and the    situation was not helped when Colby did not credit Weizenbaum    for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human    Reason which argued that the misuse of artificial    intelligence has the potential to devalue human life.[97]  <\/p>\n<p>    A perceptron    was a form of neural network    introduced in 1958 by Frank Rosenblatt, who had been a    schoolmate of Marvin Minsky at the Bronx High School of    Science. Like most AI researchers, he was optimistic about    their power, predicting that \"perceptron may eventually be able    to learn, make decisions, and translate languages.\" An active    research program into the paradigm was carried out throughout    the 60s but came to a sudden halt with the publication of    Minsky    and Papert's 1969 book Perceptrons. It suggested that    there were severe limitations to what perceptrons could do and    that Frank Rosenblatt's predictions had been    grossly exaggerated. The effect of the book was devastating:    virtually no research at all was done in connectionism    for 10 years. Eventually, a new generation of researchers would    revive the field and thereafter it would become a vital and    useful part of artificial intelligence. Rosenblatt would not live to see this,    as he died in a boating accident shortly after the book was    published.[73]  <\/p>\n<p>    Logic was introduced into AI research as early as 1958, by    John McCarthy in his    Advice Taker proposal.[98] In    1963, J. Alan Robinson had discovered a    simple method to implement deduction on computers, the resolution and unification algorithm. However,    straightforward implementations, like those attempted by    McCarthy and his students in the late 60s, were especially    intractable: the programs required astronomical numbers of    steps to prove simple theorems.[99] A more    fruitful approach to logic was developed in the 1970s by    Robert    Kowalski at the University of Edinburgh,    and soon this led to the collaboration with French researchers    Alain    Colmerauer and Philippe Roussel    who created the successful logic programming language Prolog.[100]    Prolog uses a subset of logic (Horn clauses, closely related to    \"rules\" and \"production rules\")    that permits tractable computation. Rules would continue to be    influential, providing a foundation for Edward    Feigenbaum's expert systems and the    continuing work by Allen Newell and Herbert A.    Simon that would lead to Soar and their unified theories of    cognition.[101]  <\/p>\n<p>    Critics of the logical approach noted, as Dreyfus had,    that human beings rarely used logic when they solved problems.    Experiments by psychologists like Peter    Wason, Eleanor Rosch, Amos Tversky, Daniel    Kahneman and others provided proof.[102]    McCarthy responded that what people do is irrelevant. He argued    that what is really needed are machines that can solve    problems, not machines that think as people do.[103]  <\/p>\n<p>    Among the critics of McCarthy's approach    were his colleagues across the country at MIT.
Marvin Minsky,    Seymour    Papert and Roger Schank were trying to solve problems    like \"story understanding\" and \"object recognition\" that    required a machine to think like a person. In order to    use ordinary concepts like \"chair\" or \"restaurant\" they had to    make all the same illogical assumptions that people normally    made. Unfortunately, imprecise concepts like these are hard to    represent in logic. Gerald Sussman    observed that \"using precise language to describe essentially    imprecise concepts doesn't make them any more precise.\"[104]Schank described their \"anti-logic\"    approaches as \"scruffy\", as opposed to the \"neat\" paradigms used by McCarthy, Kowalski,    Feigenbaum, Newell and    Simon.[105]  <\/p>\n<p>    In 1975, in a seminal paper, Minsky noted that many of his fellow    \"scruffy\" researchers were using the same kind of tool: a    framework that captures all our common sense assumptions about    something. For example, if we use the concept of a bird, there    is a constellation of facts that immediately come to mind: we    might assume that it flies, eats worms and so on. We know these    facts are not always true and that deductions using these facts    will not be \"logical\", but these structured sets of assumptions    are part of the context of everything we say and think.    He called these structures \"frames\". Schank used a    version of frames he called \"scripts\" to    successfully answer questions about short stories in    English.[106] Many years later object-oriented programming    would adopt the essential idea of \"inheritance\" from AI    research on frames.  <\/p>\n<p>    In the 1980s a form of AI program called \"expert systems\"    was adopted by corporations around the world and knowledge became the focus of    mainstream AI research. In those same years, the Japanese    government aggressively funded AI with its fifth generation computer    project. Another encouraging event in the early 1980s was the    revival of connectionism in the work of John Hopfield    and David Rumelhart. Once again, AI had    achieved success.  <\/p>\n<p>    An expert    system is a program that answers questions or solves    problems about a specific domain of knowledge, using logical    rules that are    derived from the knowledge of experts. The earliest examples    were developed by Edward Feigenbaum and his students.    Dendral, begun in    1965, identified compounds from spectrometer readings. MYCIN,    developed in 1972, diagnosed infectious blood diseases. They    demonstrated the feasibility of the approach.[107]  <\/p>\n<p>    Expert systems restricted themselves to a small domain of    specific knowledge (thus avoiding the commonsense knowledge problem) and    their simple design made it relatively easy for programs to be    built and then modified once they were in place. All in all,    the programs proved to be useful: something that AI had    not been able to achieve up to this point.[108]  <\/p>\n<p>    In 1980, an expert system called XCON was completed at CMU for the Digital Equipment    Corporation. It was an enormous success: it was saving the    company 40 million dollars annually by 1986.[109] Corporations around the world    began to develop and deploy expert systems and by 1985 they    were spending over a billion dollars on AI, most of it to    in-house AI departments. 
An industry grew up to support them,    including hardware companies like Symbolics and Lisp Machines    and software companies such as IntelliCorp and Aion.[110]  <\/p>\n<p>    The power of expert systems came from the expert knowledge they    contained. They were part of a new direction in AI research    that had been gaining ground throughout the 70s. \"AI    researchers were beginning to suspect, reluctantly, for it    violated the scientific canon of parsimony, that intelligence    might very well be based on the ability to use large amounts of    diverse knowledge in different ways,\"[111]    writes Pamela McCorduck. \"[T]he great lesson    from the 1970s was that intelligent behavior depended very much    on dealing with knowledge, sometimes quite detailed knowledge,    of a domain where a given task lay\".[112] Knowledge based systems and    knowledge engineering became a    major focus of AI research in the 1980s.[113]  <\/p>\n<p>    The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem    directly, by creating a massive database that would contain all    the mundane facts that the average person knows. Douglas Lenat,    who started and led the project, argued that there is no    shortcut: the only way for machines to know the meaning of    human concepts is to teach them, one concept at a time, by    hand. The project was not expected to be completed for many    decades.[114]  <\/p>\n<p>    Chess playing programs HiTech and Deep Thought defeated chess    masters in 1989. Both were developed by Carnegie Mellon    University; Deep Thought development paved the way for    Deep Blue.[115]  <\/p>\n<p>    In 1981, the Japanese    Ministry of International Trade and Industry set aside $850    million for the Fifth generation computer    project. Their objectives were to write programs and build    machines that could carry on conversations, translate    languages, interpret pictures, and reason like human    beings.[116] Much to the chagrin of    scruffies, they chose Prolog as the primary    computer language for the project.[117]  <\/p>\n<p>    Other countries responded with new programs of their own. The    UK began the £350 million Alvey project. A consortium of American companies    formed the Microelectronics    and Computer Technology Corporation (or \"MCC\") to fund    large scale projects in AI and information technology.[118][119] DARPA responded as well, founding the    Strategic Computing    Initiative and tripling its investment in AI between 1984    and 1988.[120]  <\/p>\n<p>    In 1982, physicist John Hopfield was able to prove that a form    of neural network (now called a \"Hopfield net\")    could learn and process information in a completely new way.    Around the same time, David Rumelhart popularized a new method    for training neural networks called \"backpropagation\" (discovered years    earlier by Paul    Werbos). These two discoveries revived the field of    connectionism which had been largely    abandoned since 1970.[119][121]  <\/p>\n<p>    The new field was unified and inspired by the appearance of    Parallel Distributed Processing in 1986, a two-volume    collection of papers edited by Rumelhart and psychologist James McClelland. Neural    networks would become commercially successful in the 1990s,    when they began to be used as the engines driving programs like    optical character    recognition and speech recognition.[119][122]  <\/p>\n
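<p>    The associative-memory idea behind the Hopfield net can be conveyed in a small sketch. The code below is a toy illustration under simplifying assumptions, not Hopfield's formulation: patterns of +1 and -1 values are stored with a simple Hebbian rule, and a corrupted pattern is pulled back toward the nearest stored pattern by repeated threshold updates. (Hopfield's analysis used asynchronous updates and an energy function; this compressed synchronous version only shows the flavor of the idea.)  <\/p>\n<pre><code># Toy sketch of a Hopfield-style associative memory (illustrative only).\n# Patterns are vectors of +1 and -1; weights follow a simple Hebbian rule.\n\ndef train(patterns):\n    n = len(patterns[0])\n    w = [[0] * n for _ in range(n)]\n    for p in patterns:\n        for i in range(n):\n            for j in range(n):\n                if i != j:\n                    w[i][j] += p[i] * p[j]\n    return w\n\ndef recall(w, state, steps=5):\n    # Repeatedly push every unit toward agreement with the stored patterns.\n    n = len(state)\n    for _ in range(steps):\n        state = [1 if sum(w[i][j] * state[j] for j in range(n)) &gt;= 0 else -1\n                 for i in range(n)]\n    return state\n\nstored = [[1, 1, 1, -1, -1, -1], [-1, -1, -1, 1, 1, 1]]\nweights = train(stored)\nnoisy = [1, -1, 1, -1, -1, -1]   # corrupted copy of the first pattern\nprint(recall(weights, noisy))    # recovers [1, 1, 1, -1, -1, -1]\n<\/code><\/pre>\n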
<p>    The business community's fascination with AI rose and fell in    the 80s in the classic pattern of an economic    bubble. The collapse was in the perception of AI by    government agencies and investors; the field continued to make    advances despite the criticism. Rodney Brooks and Hans Moravec,    researchers from the related field of robotics, argued for an entirely new    approach to artificial intelligence.  <\/p>\n<p>    The term \"AI    winter\" was coined by researchers who had survived the    funding cuts of 1974 when they became concerned that enthusiasm    for expert systems had spiraled out of control and that    disappointment would certainly follow.[123]    Their fears were well founded: in the late 80s and early 90s,    AI suffered a series of financial setbacks.  <\/p>\n<p>    The first indication of a change in weather was the sudden    collapse of the market for specialized AI hardware in 1987.    Desktop computers from Apple and IBM had been steadily gaining speed    and power and in 1987 they became more powerful than the more    expensive Lisp machines made by Symbolics and others.    There was no longer a good reason to buy them. An entire    industry worth half a billion dollars was demolished    overnight.[124]  <\/p>\n<p>    Eventually the earliest successful expert systems, such as    XCON,    proved too expensive to maintain. They were difficult to    update, they could not learn, they were \"brittle\" (i.e., they could make    grotesque mistakes when given unusual inputs), and they fell    prey to problems (such as the qualification problem) that had    been identified years earlier. Expert systems proved useful,    but only in a few special contexts.[125]  <\/p>\n<p>    In the late 80s, the Strategic Computing    Initiative cut funding to AI \"deeply and brutally.\" New    leadership at DARPA had    decided that AI was not \"the next wave\" and directed funds    towards projects that seemed more likely to produce immediate    results.[126]  <\/p>\n<p>    By 1991, the impressive list of goals penned in 1981 for    Japan's Fifth Generation Project had    not been met. Indeed, some of them, like \"carry on a casual    conversation\" had not been met by 2010.[127] As with other AI    projects, expectations had run much higher than what was    actually possible.[127]  <\/p>\n<p>    In the late 80s, several researchers advocated a completely new    approach to artificial intelligence, based on robotics.[128] They believed that, to show    real intelligence, a machine needs to have a body; it    needs to perceive, move, survive and deal with the world. They    argued that these sensorimotor skills are essential to higher    level skills like commonsense reasoning and that    abstract reasoning was actually the least interesting or    important human skill (see Moravec's paradox). They    advocated building intelligence \"from the bottom up.\"[129]  <\/p>\n<p>    The approach revived ideas from cybernetics and    control    theory that had been unpopular since the sixties. Another    precursor was David Marr, who had    come to MIT in the late 70s from a successful background in    theoretical neuroscience to lead the group studying vision.
He    rejected all symbolic approaches (both McCarthy's logic and    Minsky's frames), arguing that AI needed to    understand the physical machinery of vision from the bottom up    before any symbolic processing took place. (Marr's work would    be cut short by leukemia in 1980.)[130]  <\/p>\n<p>    In a 1990 paper, \"Elephants Don't Play Chess,\"[131] robotics researcher Rodney Brooks    took direct aim at the physical symbol system    hypothesis, arguing that symbols are not always necessary    since \"the world is its own best model. It is always exactly up    to date. It always has every detail there is to be known. The    trick is to sense it appropriately and often enough.\"[132] In the 80s and 90s, many    cognitive scientists also rejected the    symbol processing model of the mind and argued that the body    was essential for reasoning, a theory called the embodied mind thesis.[133]  <\/p>\n<p>    The field of AI, now more than half a century old, finally    achieved some of its oldest goals. It began to be used    successfully throughout the technology industry, although    somewhat behind the scenes. Some of the success was due to    increasing computer power and some was achieved by focusing on    specific isolated problems and pursuing them with the highest    standards of scientific accountability. Still, the reputation    of AI, in the business world at least, was less than pristine.    Inside the field there was little agreement on the reasons for    AI's failure to fulfill the dream of human level intelligence    that had captured the imagination of the world in the 1960s.    Together, all these factors helped to fragment AI into    competing subfields focused on particular problems or    approaches, sometimes even under new names that disguised the    tarnished pedigree of \"artificial intelligence\".[134] AI was both more cautious and    more successful than it had ever been.  <\/p>\n<p>    On 11 May 1997, Deep Blue became the    first computer chess-playing system to beat a reigning world    chess champion, Garry Kasparov.[135] The    supercomputer was a specialized version of a framework    produced by IBM, and was capable of processing twice as many    moves per second as it had during the first match (which Deep    Blue had lost), reportedly 200,000,000 moves per second. The    event was broadcast live over the internet and received over 74    million hits.[136]  <\/p>\n<p>    In 2005, a Stanford robot won the DARPA Grand Challenge by driving    autonomously for 131 miles along an unrehearsed desert    trail.[137] Two years later, a team from    CMU won the DARPA Urban    Challenge by autonomously navigating 55 miles in an urban    environment while responding to traffic hazards and adhering to all traffic    laws.[138] In February 2011, in a    Jeopardy!    quiz show exhibition match, IBM's question    answering system, Watson,    defeated the two greatest Jeopardy! champions, Brad Rutter and    Ken    Jennings, by a significant margin.[139]  <\/p>\n<p>    These successes were not due to some revolutionary new    paradigm, but mostly to the tedious application of engineering    skill and to the tremendous power of computers today.[140] In fact, Deep Blue's computer was 10 million times    faster than the Ferranti Mark 1 that Christopher Strachey taught to play    chess in 1951.[141] This dramatic increase is    measured by Moore's law, which predicts that the speed    and memory capacity of computers doubles every two years. The    fundamental problem of \"raw computer power\" was slowly being    overcome.  <\/p>\n
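<p>    The arithmetic behind that comparison is easy to check. Treating Moore's law as one doubling every two years, the 46 years between Strachey's 1951 program and Deep Blue in 1997 allow about 23 doublings, or roughly an eight-million-fold increase, the same order of magnitude as the quoted factor of 10 million.  <\/p>\n<pre><code># Back-of-the-envelope check of the \"10 million times faster\" comparison,\n# assuming one doubling of computer power every two years (Moore's law).\n\nyears = 1997 - 1951               # from the Ferranti Mark 1 era to Deep Blue\ndoublings = years * 0.5           # one doubling per two years\nspeedup = 2 ** doublings\nprint(years, doublings, speedup)  # 46 years, 23 doublings, about 8.4 million\n<\/code><\/pre>\n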
<p>    A new paradigm called \"intelligent agents\" became widely    accepted during the 90s.[142] Although    earlier researchers had proposed modular \"divide and conquer\"    approaches to AI,[143] the    intelligent agent did not reach its    modern form until Judea Pearl, Allen Newell and others brought    concepts from decision theory and economics into the    study of AI.[144] When    the economist's    definition of a rational agent was married to computer    science's definition of an object or module, the intelligent    agent paradigm was complete.  <\/p>\n<p>    An intelligent agent is a system that    perceives its environment and takes actions which maximize its    chances of success. By this definition, simple programs that    solve specific problems are \"intelligent agents\", as are human    beings and organizations of human beings, such as firms. The    intelligent agent paradigm defines AI    research as \"the study of intelligent agents\". This is a    generalization of some earlier definitions of AI: it goes    beyond studying human intelligence; it studies all kinds of    intelligence.[145]  <\/p>\n<p>    The paradigm gave researchers license to study isolated    problems and find solutions that were both verifiable and    useful. It provided a common language to describe problems and    share their solutions with each other, and with other fields    that also used concepts of abstract agents, like economics and    control    theory. It was hoped that a complete agent    architecture (like Newell's SOAR) would one day allow    researchers to build more versatile and intelligent systems out    of interacting intelligent    agents.[144][146]  <\/p>\n<p>    AI researchers began to develop and use sophisticated    mathematical tools more than they ever had in the past.[147] There was a widespread    realization that many of the problems that AI needed to solve    were already being worked on by researchers in fields like    mathematics, economics or operations research. The shared    mathematical language allowed both a higher level of    collaboration with more established and successful fields and    the achievement of results which were measurable and provable;    AI had become a more rigorous \"scientific\" discipline. Russell & Norvig (2003)    describe this as nothing less than a \"revolution\" and \"the    victory of the neats\".[148][149]  <\/p>\n<p>    Judea    Pearl's highly influential 1988 book[150]    brought probability and decision    theory into AI. Among the many new tools in use were    Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical    optimization. Precise    mathematical descriptions were also developed for \"computational intelligence\"    paradigms like neural networks and    evolutionary algorithms.[148]  <\/p>\n<p>    Algorithms originally developed by AI researchers began to    appear as parts of larger systems. AI had solved a lot of very    difficult problems[151] and their    solutions proved to be useful throughout the technology    industry,[152] such as data mining,    industrial robotics, logistics,[153] speech    recognition,[154] banking    software,[155]    medical diagnosis[155]    and Google's search    engine.[156]  <\/p>\n<p>    The field of AI receives little or no credit for these    successes.
Many of AI's greatest innovations have been reduced    to the status of just another item in the tool chest of    computer science.[157] Nick Bostrom    explains \"A lot of cutting edge AI has filtered into general    applications, often without being called AI because once    something becomes useful enough and common enough it's not    labeled AI anymore.\"[158]  <\/p>\n<p>    Many researchers in AI in the 1990s deliberately called their work    by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In    part, this may be because they considered their field to be    fundamentally different from AI, but also because the new names helped to    procure funding. In the commercial world at least, the failed    promises of the AI Winter continued to haunt AI research,    as the New York Times reported in 2005: \"Computer scientists    and software engineers avoided the term artificial intelligence    for fear of being viewed as wild-eyed dreamers.\"[159][160][161]  <\/p>\n<p>    In 1968, Arthur C. Clarke and Stanley    Kubrick had imagined that by the year 2001, a machine would exist with an    intelligence that matched or exceeded the capability of human    beings. The character they created, HAL 9000, was based on a belief shared by    many leading AI researchers that such a machine would exist by    the year 2001.[162]  <\/p>\n<p>    Marvin    Minsky asks \"So the question is why didn't we get HAL in    2001?\"[163] Minsky believes that the    answer is that the central problems, like commonsense reasoning, were being    neglected, while most researchers pursued things like    commercial applications of neural nets or genetic algorithms. John McCarthy, on the    other hand, still blames the qualification problem.[164] For Ray Kurzweil, the    issue is computer power and, using Moore's Law, he predicts that machines with    human-level intelligence will appear by 2029.[165] Jeff Hawkins argues that neural net    research ignores the essential properties of the human cortex,    preferring simple models that have been successful at solving    simple problems.[166] There are    many other explanations and for each there is a corresponding    research program underway.  <\/p>\n<p>Go here to read the rest:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/en.wikipedia.org\/wiki\/History_of_artificial_intelligence\" title=\"History of artificial intelligence - Wikipedia, the free ...\">History of artificial intelligence - Wikipedia, the free ...<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with \"an ancient wish to forge the gods.\" The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning.
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/history-of-artificial-intelligence-wikipedia-the-free\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-173549","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/173549"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=173549"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/173549\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=173549"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=173549"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=173549"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}