{"id":208076,"date":"2017-07-26T16:18:28","date_gmt":"2017-07-26T20:18:28","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai-winter-wikipedia\/"},"modified":"2017-07-26T16:18:28","modified_gmt":"2017-07-26T20:18:28","slug":"ai-winter-wikipedia","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/ai-winter-wikipedia\/","title":{"rendered":"AI winter &#8211; Wikipedia"},"content":{"rendered":"<p><p>    In the history of    artificial intelligence, an AI winter is a period of    reduced funding and interest in artificial intelligence    research.[1] The term was coined by analogy    to the idea of a nuclear winter. The field has experienced    several hype    cycles, followed by disappointment and criticism, followed    by funding cuts, followed by renewed interest years or decades    later.  <\/p>\n<p>    The term first appeared in 1984 as the    topic of a public debate at the annual meeting of AAAI    (then called the \"American Association of Artificial    Intelligence\"). It is a chain reaction that begins with    pessimism in the AI community, followed by pessimism in the    press, followed by a severe cutback in funding, followed by the    end of serious research. At the meeting, Roger Schank and    Marvin    Minskytwo leading AI researchers who had survived the    \"winter\" of the 1970swarned the business community that    enthusiasm for AI had spiraled out of control in the '80s and    that disappointment would certainly follow. Three years later,    the billion-dollar AI industry began to collapse.  <\/p>\n<p>    Hypes are common in many emerging technologies, such as the    railway mania or the dot-com    bubble. The AI winter is primarily a collapse in the    perception of AI by government bureaucrats and venture    capitalists. Despite the rise and fall of AI's reputation, it    has continued to develop new and successful technologies. AI    researcher Rodney Brooks would complain in 2002 that    \"there's this stupid myth out there that AI has failed, but AI    is around you every second of the day.\" In 2005, Ray Kurzweil    agreed: \"Many observers still think that the AI winter was the    end of the story and that nothing since has come of the AI    field. Yet today many thousands of AI applications are deeply    embedded in the infrastructure of every industry.\"  <\/p>\n<p>    Enthusiasm and optimism about AI has gradually increased since    its low point in 1990, and by the 2010s artificial intelligence    (and especially the sub-field of machine    learning) became widely used, well-funded and many in the    technology predict that it will soon succeed in creating    machines with artificial general    intelligence. As Ray Kurzweil writes: \"the AI winter is    long since over.\"  <\/p>\n<p>    There were two major winters in 197480 and 198793[6] and several smaller episodes,    including:  <\/p>\n<p>    During the Cold    War, the US government was particularly interested in the    automatic, instant translation of Russian documents and    scientific reports. The government aggressively supported    efforts at machine translation starting in 1954. At the outset,    the researchers were optimistic. Noam Chomsky's new work in grammar was streamlining    the translation process and there were \"many predictions of    imminent 'breakthroughs'\".[7]  <\/p>\n<p>    However, researchers had underestimated the profound difficulty    of word-sense disambiguation. In    order to translate a sentence, a machine needed to have some    idea what the sentence was about, otherwise it made mistakes.    An anecdotal example was \"the spirit is willing but the flesh    is weak.\" Translated back and forth with Russian, it became    \"the vodka is good but the meat is rotten.\" Similarly, \"out of    sight, out of mind\" became \"blind idiot\". Later researchers    would call this the commonsense    knowledge problem.  <\/p>\n<p>    By 1964, the National Research    Council had become concerned about the lack of progress and    formed the Automatic    Language Processing Advisory Committee (ALPAC) to look into the    problem. They concluded, in a famous 1966 report, that machine    translation was more expensive, less accurate and slower than    human translation. After spending some 20 million dollars, the    NRC ended all support. Careers were destroyed and research    ended.[7]  <\/p>\n<p>    Machine translation is still an open research problem in the 21st    century, which has been met with some success (Google    Translate, Yahoo Babel Fish).  <\/p>\n<p>    Some of the earliest work in AI used networks or circuits of    connected units to simulate intelligent behavior. Examples of    this kind of work, called \"connectionism\", include Walter Pitts and    Warren McCullough's first description    of a neural network for logic and Marvin Minsky's    work on the SNARC    system. In the late '50s, most of these approaches were    abandoned when researchers began to explore symbolic reasoning as the    essence of intelligence, following the success of programs like    the Logic    Theorist and the General Problem    Solver.[9]  <\/p>\n<p>    However, one type of connectionist work continued: the study of    perceptrons, invented by Frank Rosenblatt,    who kept the field alive with his salesmanship and the sheer    force of his personality.[10] He    optimistically predicted that the perceptron \"may eventually be    able to learn, make decisions, and translate    languages\".[11]    Mainstream research into perceptrons came to an abrupt end in    1969, when Marvin Minsky and Seymour    Papert published the book Perceptrons, which was    perceived as outlining the limits of what perceptrons could do.  <\/p>\n<p>    Connectionist approaches were abandoned for the next decade or    so. While important work, such as Paul Werbos' discovery of backpropagation, continued in a limited    way, major funding for connectionist projects was difficult to    find in the 1970s and early '80s.[12] The \"winter\"    of connectionist research came to an end in the middle '80s,    when the work of John Hopfield, David    Rumelhart and others revived large scale interest in neural    networks.[13] Rosenblatt did not live to see    this, however, as he died in a boating accident shortly after    Perceptrons was published.[11]  <\/p>\n<p>    In 1973, professor Sir James Lighthill was asked by the UK    Parliament to evaluate    the state of AI research in the United Kingdom. His report, now    called the Lighthill report, criticized the utter failure of AI    to achieve its \"grandiose objectives.\" He concluded that    nothing being done in AI couldn't be done in other sciences. He    specifically mentioned the problem of \"combinatorial explosion\" or    \"intractability\", which    implied that many of AI's most successful algorithms would    grind to a halt on real world problems and were only suitable    for solving \"toy\" versions.[14]  <\/p>\n<p>    The report was contested in a debate broadcast in the BBC    \"Controversy\" series in 1973. The debate \"The general purpose    robot is a mirage\" from the Royal Institution was Lighthill    versus the team of Donald Michie, John McCarthy and    Richard    Gregory.[15] McCarthy later    wrote that \"the combinatorial explosion problem has been    recognized in AI from the beginning.\"[16]  <\/p>\n<p>    The report led to the complete dismantling of AI research in    England.[14] AI    research continued in only a few top universities (Edinburgh,    Essex and Sussex). This \"created a bow-wave effect that led to    funding cuts across Europe,\" writes James    Hendler.[17]    Research would not revive on a large scale until 1983, when    Alvey (a research    project of the British Government) began to fund AI again from    a war chest of 350 million in response to the Japanese Fifth    Generation Project (see below). Alvey had a number of UK-only    requirements which did not sit well internationally, especially    with US partners, and lost Phase 2 funding.  <\/p>\n<p>    During the 1960s, the Defense Advanced    Research Projects Agency (then known as \"ARPA\", now known    as \"DARPA\") provided millions of dollars for AI research with    almost no strings attached. DARPA's director in those years,    J. C. R. Licklider believed in    \"funding people, not projects\"[18] and allowed    AI's leaders (such as Marvin Minsky, John McCarthy, Herbert A.    Simon or Allen Newell) to spend it almost any way    they liked.  <\/p>\n<p>    This attitude changed after the passage of Mansfield Amendment in 1969, which    required DARPA to fund \"mission-oriented direct research,    rather than basic undirected research.\"[19] Pure undirected research    of the kind that had gone on in the '60s would no longer be    funded by DARPA. Researchers now had to show that their work    would soon produce some useful military technology. AI research    proposals were held to a very high standard. The situation was    not helped when the Lighthill report and DARPA's own study (the    American Study    Group) suggested that most AI research was unlikely to    produce anything truly useful in the foreseeable future.    DARPA's money was directed at specific projects with    identifiable goals, such as autonomous tanks and battle    management systems. By 1974, funding for AI projects was hard    to find.[19]  <\/p>\n<p>    AI researcher Hans Moravec blamed the crisis on the    unrealistic predictions of his colleagues: \"Many researchers    were caught up in a web of increasing exaggeration. Their    initial promises to DARPA had been much too optimistic. Of    course, what they delivered stopped considerably short of that.    But they felt they couldn't in their next proposal promise less    than in the first one, so they promised more.\"[20] The result, Moravec claims, is    that some of the staff at DARPA had lost patience with AI    research. \"It was literally phrased at DARPA that 'some of    these people were going to be taught a lesson [by] having their    two-million-dollar-a-year contracts cut to almost nothing!'\"    Moravec told Daniel Crevier.[21]  <\/p>\n<p>    While the autonomous tank project was a failure, the battle    management system (the Dynamic Analysis and    Replanning Tool) proved to be enormously successful, saving    billions in the first Gulf War, repaying all of DARPAs investment in    AI[22] and justifying DARPA's pragmatic    policy.[23]  <\/p>\n<p>    DARPA was deeply disappointed with researchers working on the    Speech Understanding Research program at Carnegie Mellon    University. DARPA had hoped for, and felt it had been promised,    a system that could respond to voice commands from a pilot. The    SUR team had developed a system which could recognize spoken    English, but only if the words were spoken in a particular    order. DARPA felt it had been duped and, in 1974, they    cancelled a three million dollar a year grant.[24]  <\/p>\n<p>    Many years later, successful commercial speech recognition    systems would use the technology developed by the Carnegie    Mellon team (such as hidden Markov    models) and the market for speech recognition systems would    reach $4 billion by 2001.[25]  <\/p>\n<p>    In the 1980s, a form of AI program called an \"expert system\"    was adopted by corporations around the world. The first    commercial expert system was XCON, developed at Carnegie Mellon for Digital Equipment    Corporation, and it was an enormous success: it was    estimated to have saved the company 40 million dollars over    just six years of operation. Corporations around the world    began to develop and deploy expert systems and by 1985 they    were spending over a billion dollars on AI, most of it to    in-house AI departments. An industry grew up to support them,    including software companies like Teknowledge and    Intellicorp (KEE), and hardware    companies like Symbolics and Lisp Machines Inc. who built    specialized computers, called Lisp machines,    that were optimized to process the programming language    Lisp, the preferred    language for AI.[26]  <\/p>\n<p>    In 1987, three years after Minsky and Schank's prediction, the market for specialized AI    hardware collapsed. Workstations by companies like Sun    Microsystems offered a powerful alternative to LISP machines    and companies like Lucid offered a LISP environment for this    new class of workstations. The performance of these general    workstations became an increasingly difficult challenge for    LISP Machines. Companies like Lucid and Franz Lisp offered increasingly more powerful    versions of LISP. For example, benchmarks were published    showing workstations maintaining a performance advantage over    LISP machines.[27] Later desktop    computers built by Apple and IBM would also offer a simpler and    more popular architecture to run LISP applications on. By 1987    they had become more powerful than the more expensive Lisp    machines. The desktop computers had rule-based engines such as    CLIPS    available.[28] These    alternatives left consumers with no reason to buy an expensive    machine specialized for running LISP. An entire industry worth    half a billion dollars was replaced in a single year.[29]  <\/p>\n<p>    Commercially, many Lisp companies failed,    like Symbolics, Lisp Machines Inc., Lucid Inc., etc. Other    companies, like Texas Instruments and Xerox abandoned the field.    However, a number of customer companies (that is, companies    using systems written in Lisp and developed on Lisp machine    platforms) continued to maintain systems. In some cases, this    maintenance involved the assumption of the resulting support    work.  <\/p>\n<p>    By the early 90s, the earliest successful expert systems, such    as XCON, proved too expensive to maintain. They were difficult    to update, they could not learn, they were \"brittle\" (i.e.,    they could make grotesque mistakes when given unusual inputs),    and they fell prey to problems (such as the qualification problem) that had    been identified years earlier in research in nonmonotonic logic. Expert systems    proved useful, but only in a few special contexts.[1][30] Another    problem dealt with the computational hardness of truth maintenance efforts for    general knowledge. KEE used an assumption-based approach (see        NASA, TEXSYS) supporting multiple-world scenarios that was    difficult to understand and apply.  <\/p>\n<p>    The few remaining expert system shell companies were eventually    forced to downsize and search for new markets and software    paradigms, like case based    reasoning or universal database access. The maturation of Common Lisp    saved many systems such as ICAD which found application in    knowledge-based engineering.    Other systems, such as Intellicorp's KEE, moved from Lisp to a    C++ (variant) on the PC and helped establish object-oriented technology    (including providing major support for the development of    UML).  <\/p>\n<p>    In 1981, the Japanese    Ministry of International Trade and Industry set aside $850    million for the Fifth generation computer    project. Their objectives were to write programs and build    machines that could carry on conversations, translate    languages, interpret pictures, and reason like human beings. By    1991, the impressive list of goals penned in 1981 had not been    met. Indeed, some of them had not been met in 2001, or 2011. As    with other AI projects, expectations had run much higher than    what was actually possible.[31]  <\/p>\n<p>    In 1983, in response to the fifth generation project, DARPA    again began to fund AI research through the Strategic Computing    Initiative. As originally proposed the project would begin with    practical, achievable goals, which even included artificial    general intelligence as long term objective. The program was    under the direction of the Information    Processing Technology Office (IPTO) and was also directed    at supercomputing and microelectronics. By 1985 it had spent    $100 million and 92 projects were underway at 60 institutions,    half in industry, half in universities and government labs. AI    research was generously funded by the SCI.[32]  <\/p>\n<p>    Jack Schwarz, who ascended to the leadership of IPTO in 1987,    dismissed expert systems as \"clever programming\" and cut    funding to AI \"deeply and brutally,\" \"eviscerating\" SCI.    Schwarz felt that DARPA should focus its funding only on those    technologies which showed the most promise, in his words, DARPA    should \"surf\", rather than \"dog paddle\", and he felt strongly    AI was not \"the next wave\". Insiders in the program    cited problems in communication, organization and integration.    A few projects survived the funding cuts, including pilot's    assistant and an autonomous land vehicle (which were never    delivered) and the DART battle management system, which (as    noted above) was successful.[33]  <\/p>\n<p>    A survey of reports from the mid-2000s suggests that AI's    reputation was still less than stellar:  <\/p>\n<p>    Many researchers in AI in the mid 2000s deliberately    called their work by other names, such as informatics, machine    learning, analytics, knowledge-based systems,    business    rules management, cognitive systems,    intelligent systems, intelligent agents    or computational intelligence, to    indicate that their work emphasizes particular tools or is    directed at a particular sub-problem. Although this may be    partly because they consider their field to be fundamentally    different from AI, it is also true that the new names help to    procure funding by avoiding the stigma of false promises    attached to the name \"artificial intelligence.\"[36]  <\/p>\n<p>    \"Many observers still think that the AI winter was the end of    the story and that nothing since come of the AI field,\" wrote    Ray Kurzweil in 2005, \"yet today many thousands of AI    applications are deeply embedded in the infrastructure of every    industry.\" In the late '90s and early 21st century, AI    technology became widely used as elements of larger    systems,[37] but the field is rarely credited    for these successes. In 2006, Nick Bostrom explained that \"a lot of    cutting edge AI has filtered into general applications, often    without being called AI because once something becomes useful    enough and common enough it's not labeled AI anymore.\"[38] Rodney Brooks stated around the    same time that \"there's this stupid myth out there that AI has    failed, but AI is around you every second of the day.\"  <\/p>\n<p>    Technologies developed by AI researchers have achieved    commercial success in a number of domains, such as machine    translation, data mining, industrial robotics, logistics,[39] speech recognition,[40] banking software,[41] medical    diagnosis[41]    and Google's search    engine.[42]  <\/p>\n<p>    Fuzzy logic    controllers have been developed for automatic gearboxes in    automobiles (the 2006 Audi TT, VW Touareg[43] and VW    Caravell feature the DSP transmission which utilizes fuzzy    logic, a number of koda variants (koda Fabia)    also currently include a fuzzy logic-based controller). Camera    sensors widely utilize fuzzy logic to enable focus.  <\/p>\n<p>    Heuristic search and data analytics are both technologies that    have developed from the evolutionary    computing and machine learning subdivision of the AI    research community. Again, these techniques have been applied    to a wide range of real world problems with considerable    commercial success.  <\/p>\n<p>    In the case of Heuristic Search, ILOG has developed a large number of applications    including deriving job shop schedules for many manufacturing    installations.[44] Many telecommunications    companies also make use of this technology in the management of    their workforces, for example BT Group has deployed heuristic search[45] in a scheduling    application that provides the work schedules of 20,000    engineers.  <\/p>\n<p>    Data analytics technology utilizing algorithms for the    automated formation of classifiers that were developed in the    supervised machine learning community in the 1990s (for    example, TDIDT, Support Vector Machines, Neural Nets, IBL) are    now[when?]    used pervasively by companies for marketing survey targeting    and discovery of trends and features in data sets.  <\/p>\n<p>    Primarily the way researchers and economists judge the status    of an AI winter is by reviewing which AI projects are being    funded, how much and by whom. Trends in funding are often set    by major funding agencies in the developed world. Currently,    DARPA and a civilian funding program called EU-FP7 provide much of the    funding for AI research in the US and European    Union.  <\/p>\n<p>    As of 2007, DARPA was soliciting AI research proposals under a    number of programs including The    Grand Challenge Program, Cognitive    Technology Threat Warning System (CT2WS), \"Human    Assisted Neural Devices (SN07-43)\", \"Autonomous    Real-Time Ground Ubiquitous Surveillance-Imaging System    (ARGUS-IS)\" and \"Urban Reasoning and Geospatial    Exploitation Technology (URGENT)\"  <\/p>\n<p>    Perhaps best known is DARPA's Grand Challenge Program[46] which has developed fully    automated road vehicles that can successfully navigate real    world terrain[47] in a fully autonomous fashion.  <\/p>\n<p>    DARPA has also supported programs on the Semantic Web with    a great deal of emphasis on intelligent management of content    and automated understanding. However James Hendler,    the manager of the DARPA program at the time, expressed some    disappointment with the government's ability to create rapid    change, and moved to working with the World Wide Web Consortium to    transition the technologies to the private sector.  <\/p>\n<p>    The EU-FP7 funding program provides financial support to    researchers within the European Union. In 2007\/2008, it was    funding AI research under the Cognitive Systems: Interaction    and Robotics Programme (193m), the Digital Libraries and    Content Programme (203m) and the FET programme    (185m).[48]  <\/p>\n<p>    Concerns are sometimes raised that a new AI winter could be    triggered by any overly ambitious or unrealistic promise by    prominent AI scientists. For example, some researchers feared    that the widely publicized promises in the early 1990s that    Cog would show the intelligence of a    human two-year-old might lead to an AI winter.  <\/p>\n<p>    James Hendler in 2008, observed that AI funding both in the EU    and the US were being channeled more into applications and    cross-breeding with traditional sciences, such as    bioinformatics.[28] This shift away    from basic research is happening at the same time as there's a    drive towards applications of e.g. the semantic web. Invoking the pipeline    argument, (see underlying causes) Hendler saw a parallel    with the '80s winter and warned of a coming AI winter in the    '10s.  <\/p>\n<p>    There are also constant reports that another AI spring    is imminent or has already occurred:  <\/p>\n<p>    Several explanations have been put forth for the cause of AI    winters in general. As AI progressed from government funded    applications to commercial ones, new dynamics came into play.    While hype is the most commonly cited cause, the    explanations are not necessarily mutually exclusive.  <\/p>\n<p>    The AI winters can[citation    needed] be partly understood as a sequence    of over-inflated expectations and subsequent crash seen in    stock-markets and exemplified[citation    needed] by the railway mania and dotcom    bubble. In a common pattern in development of new technology    (known as hype cycle), an event, typically a technological    breakthrough, creates publicity which feeds on itself to create    a \"peak of inflated expectations\" followed by a \"trough of    disillusionment\". Since scientific and technological progress    can't keep pace with the publicity-fueled increase in    expectations among investors and other stakeholders, a crash    must follow. AI technology seems to be no exception to this    rule.[citation    needed]  <\/p>\n<p>    Another factor is AI's place in the organisation of    universities. Research on AI often takes the form of interdisciplinary research.    One example is the Master of Artificial    Intelligence[53] program at    K.U. Leuven which involve lecturers from    Philosophy to Mechanical    Engineering. AI is therefore prone to the same problems    other types of interdisciplinary research face. Funding is    channeled through the established departments and during budget    cuts, there will be a tendency to shield the \"core contents\" of    each department, at the expense of interdisciplinary and less    traditional research projects.  <\/p>\n<p>    Downturns in a country's national economy cause budget cuts in    universities. The \"core contents\" tendency worsen the effect on    AI research and investors in the market are likely to put their    money into less risky ventures during a crisis. Together this    may amplify an economic downturn into an AI winter. It is worth    noting that the Lighthill report came at a time of economic    crisis in the UK,[54] when    universities had to make cuts and the question was only which    programs should go.  <\/p>\n<p>    Early in the computing history the potential for neural    networks was understood but it has never been realized. Fairly    simple networks require significant computing capacity even by    today's standards.  <\/p>\n<p>    It is common to see the relationship between basic research and    technology as a pipeline. Advances in basic research give birth    to advances in applied research, which in turn leads to new    commercial applications. From this it is often argued that a    lack of basic research will lead to a drop in marketable    technology some years down the line. This view was advanced by    James Hendler in 2008,[28] claiming that    the fall of expert systems in the late '80s were not due to an    inherent and unavoidable brittleness of expert systems, but to    funding cuts in basic research in the '70s. These expert    systems advanced in the '80s through applied research and    product development, but by the end of the decade, the pipeline    had run dry and expert systems were unable to produce    improvements that could have overcome the brittleness and    secured further funding.  <\/p>\n<p>    The fall of the Lisp machine market and the failure of the    fifth generation computers were cases of expensive advanced    products being overtaken by simpler and cheaper alternatives.    This fits the definition of a low-end disruptive technology, with the    Lisp machine makers being marginalized. Expert systems were    carried over to the new desktop computers by for instance    CLIPS, so the fall of    the Lisp machine market and the fall of expert systems are    strictly speaking two separate events. Still, the failure to    adapt to such a change in the outside computing milieu is cited    as one reason for the 1980s AI winter.[28]  <\/p>\n<p>    Several philosophers, cognitive scientists and computer    scientists have speculated on where AI might have failed and    what lies in its future. Hubert Dreyfus highlighted flawed    assumptions of AI research in the past and, as early as    1966, correctly predicted that the first wave of AI research    would fail to fulfill the very public promises it was making.    Others critics like Noam Chomsky have argued that AI is headed    in the wrong direction, in part because of its heavy reliance    on statistical techniques.[55] Chomsky's    comments fit into a larger debate with Peter Norvig,    centered around the role of statistical methods in AI. The    exchange between the two started with comments made by Chomsky    at a symposium at MIT[56] to which    Norvig wrote a response.[57]  <\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Visit link:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/en.wikipedia.org\/wiki\/AI_winter\" title=\"AI winter - Wikipedia\">AI winter - Wikipedia<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The term was coined by analogy to the idea of a nuclear winter. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later. The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the \"American Association of Artificial Intelligence\") <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/ai-winter-wikipedia\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-208076","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/208076"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=208076"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/208076\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=208076"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=208076"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=208076"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}