{"id":174737,"date":"2016-12-15T19:02:03","date_gmt":"2016-12-16T00:02:03","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/technological-singularity-wikipedia\/"},"modified":"2016-12-15T19:02:03","modified_gmt":"2016-12-16T00:02:03","slug":"technological-singularity-wikipedia","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/singularity\/technological-singularity-wikipedia\/","title":{"rendered":"Technological singularity &#8211; Wikipedia"},"content":{"rendered":"<p><p>      The technological singularity (also, simply, the      singularity)[1] is the      hypothesis that the invention of artificial superintelligence will      abruptly trigger runaway technological growth, resulting in      unfathomable changes to human civilization.[2] According      to this hypothesis, an upgradable intelligent agent (such as      a computer running software-based artificial general      intelligence) would enter a 'runaway reaction' of      self-improvement cycles, with each new and more intelligent      generation appearing more and more rapidly, causing an      intelligence explosion and      resulting in a powerful superintelligence that would,      qualitatively, far surpass all human intelligence.    <\/p>\n<p>      John von Neumann first uses the term \"singularity\" in the      context of technological progress causing accelerating      change: The accelerating progress of technology and changes      in the mode of human life, give the appearance of approaching      some essential singularity in the history of the race beyond      which human affairs, as we know them, can not continue.[6]      Subsequent authors have echoed this view point.[2][3]I. J. Good's      \"intelligence explosion\", predicted that a future      superintelligence would trigger a singularity.[4]Science      fiction author Vernor Vinge said in his essay The      Coming Technological Singularity that this would signal      the end of the human era, as the new superintelligence would      continue to upgrade itself and would advance technologically      at an incomprehensible rate.[4]    <\/p>\n<p>      At the 2012 Singularity Summit, Stuart Armstrong      did a study of artificial general      intelligence (AGI) predictions by experts and found a      wide range of predicted dates, with a median value of      2040.[5]    <\/p>\n<p>      I. J. Good      speculated in 1965 that artificial general      intelligence might bring about an intelligence explosion.      Good's scenario runs as follows: as computers increase in      power, it becomes possible for people to build a machine that      is more intelligent than humanity; this superhuman      intelligence possesses greater problem-solving and inventive      skills than current humans are capable of. This      superintelligent machine then designs an even more capable      machine, or re-writes its own software to become even more      intelligent; this (ever more capable) machine then goes on to      design a machine of yet greater capability, and so on. These      iterations of recursive self-improvement accelerate, allowing      enormous qualitative change before any upper limits imposed      by the laws of physics or theoretical computation set      in.[6]    <\/p>\n<p>      John von Neumann, Vernor Vinge and Ray Kurzweil define the      concept in terms of the technological creation of      superintelligence. 
John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.[4][7]

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[8][9][10] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[4] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore's Law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[9][11]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose Moore's Law is often cited in support of the concept.[12][13][14]

The exponential growth in computing technology suggested by Moore's Law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's Law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[15] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[16]) increases exponentially, generalizing Moore's Law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[17] Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months.[18]
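These doubling times can be compared directly by converting each to an implied annual growth factor; the conversions below are simple back-of-the-envelope arithmetic on the figures quoted above, not additional data from the cited study.

```latex
% Annual growth factor implied by a doubling time of T months: g = 2^{12/T}
\[
  g = 2^{12/T}, \qquad
  \begin{aligned}
    T = 14 \text{ months} &\;\Rightarrow\; g \approx 1.81 \ (\sim 81\% \text{ per year}),\\
    T = 18 \text{ months} &\;\Rightarrow\; g \approx 1.59 \ (\sim 59\% \text{ per year}),\\
    T = 34 \text{ months} &\;\Rightarrow\; g \approx 1.28 \ (\sim 28\% \text{ per year}),\\
    T = 40 \text{ months} &\;\Rightarrow\; g \approx 1.23 \ (\sim 23\% \text{ per year}).
  \end{aligned}
\]
```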
Kurzweil reserves the term "singularity" for a rapid increase in intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine".[19] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."[20]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Ulam tells of a conversation with the late John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[21]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "Law of Accelerating Returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[22] Kurzweil believes that the singularity will occur by approximately 2045.[23] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us".[3][24]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[25]

Steven Pinker stated in 2008:

(...) There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. (...)[12]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally [...], no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. [...] [T]he machinery has no beliefs, desires, [or] motivations.[26]

Martin Ford, in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future,[27] postulates a "technology paradox": before the singularity could occur, most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity.
Job displacement is increasingly no longer limited to work traditionally considered to be "routine".[28]

Jared Diamond, in Collapse: How Societies Choose to Fail or Succeed, argues that cultures self-limit when they exceed the sustainable carrying capacity of their environment, and that the consumption of strategic resources (frequently timber, soils or water) creates a deleterious positive feedback loop that leads eventually to social collapse and technological retrogression.[improper synthesis?]

Theodore Modis[29][30] and Jonathan Huebner[31] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[32] While Kurzweil drew on Modis's work, which centered on accelerating change, Modis has distanced himself from Kurzweil's thesis of a "technological singularity", claiming that it lacks scientific rigor.[30]
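The heat argument can be made slightly more concrete with the standard first-order expression for dynamic power in CMOS logic (a textbook approximation added here for context, not a formula given by the cited authors): switching power grows linearly with clock frequency and quadratically with supply voltage, so raising frequency without lowering voltage quickly exceeds what the chip package can dissipate.

```latex
% First-order dynamic (switching) power of a CMOS chip, where alpha is the
% activity factor, C_eff the effective switched capacitance, V the supply
% voltage and f the clock frequency:
\[
  P_{\text{dynamic}} \approx \alpha \, C_{\text{eff}} \, V^{2} f .
\]
```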
He states: \"I do not think the technology is      creating itself. It's not an autonomous process.\"[37] He goes on to assert:      \"The reason to believe in human agency over technological      determinism is that you can then have an economy where people      earn their own way and invent their own lives. If you      structure a society on not emphasizing individual      human agency, it's the same thing operationally as denying      people clout, dignity, and self-determination ... to embrace      [the idea of the Singularity] would be a celebration of bad      data and bad politics.\"[37]    <\/p>\n<p>      Economist      Robert J. Gordon, in The Rise and      Fall of American Growth: The U.S. Standard of Living Since      the Civil War (2016), points out that measured economic      growth has slowed around 1970 and slowed even further since      the financial      crisis of 2008, and argues that the economic data show no      trace of a coming Singularity as imagined by mathematician      I.J. Good.[38]    <\/p>\n<p>      In addition to general criticisms of the singularity concept,      several critics have raised issues with Kurzweil's iconic      chart. One line of criticism is that a log-log chart of this nature is inherently      biased toward a straight-line result. Others identify      selection bias in the points that Kurzweil chooses to use.      For example, biologist PZ Myers points out that many of the early      evolutionary \"events\" were picked arbitrarily.[39] Kurzweil has rebutted      this by charting evolutionary events from 15 neutral sources,      and showing that they fit a straight line on a log-log chart.      The      Economist mocked the concept with a graph      extrapolating that the number of blades on a razor, which has      increased over the years from one to as many as five, will      increase ever-faster to infinity.[40]    <\/p>\n<p>      The term \"technological singularity\" reflects the idea that      such change may happen suddenly, and that it is difficult to      predict how the resulting new world would operate.[41][42] It is      unclear whether an intelligence explosion of this kind would      be beneficial or harmful, or even an existential threat,[43][44] as the issue has      not been dealt with by most artificial general      intelligence researchers, although the topic of friendly artificial      intelligence is investigated by the Future of Humanity      Institute and the Machine      Intelligence Research Institute.[41]    <\/p>\n<p>      While the technological singularity is usually seen as a      sudden event, some scholars argue the current speed of change      already fits this description. In addition, some argue that      we are already in the midst of a major evolutionary      transition that merges technology, biology, and society.      Digital technology has infiltrated the fabric of human      society to a degree of indisputable and often life-sustaining      dependence. A 2016 article in Trends in Ecology      & Evolution argues that \"humans already embrace      fusions of biology and technology. We spend most of our      waking time communicating through digitally mediated      channels... we trust artificial intelligence      with our lives through antilock braking in cars and      autopilots in      planes... With one in three marriages in America beginning      online, digital algorithms are also taking a role in human      pair bonding and reproduction\". 
The article argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, "the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5x10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1x10^19 bytes. The digital realm stored 500 times more information than this in 2014 (see figure)... The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3x10^37 base pairs, equivalent to 1.325x10^37 bytes of information. If growth in digital storage continues at its current rate of 30-38% compound annual growth per year,[18] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years".[45]
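As a sanity check on the arithmetic in the quoted passage, the following short script reproduces the quoted orders of magnitude from the stated assumptions (7.2 billion genomes of 6.2 billion nucleotides, four nucleotide pairs per byte, 5 zettabytes of digital storage in 2014, 5.3x10^37 base pairs of DNA on Earth, and 30-38% compound annual growth). It is a back-of-the-envelope reproduction of the cited figures, not an independent estimate.

```python
import math

# Figures quoted in the Trends in Ecology & Evolution passage above.
DIGITAL_2014 = 5e21    # bytes of digital information stored in 2014 (5 zettabytes)
HUMANS = 7.2e9         # people on the planet
GENOME_NT = 6.2e9      # nucleotides per human genome
NT_PER_BYTE = 4        # one byte encodes four nucleotide pairs, per the quoted passage
EARTH_DNA_BP = 5.3e37  # estimated base pairs of DNA in all cells on Earth
CAGR = 0.38            # upper end of the quoted 30-38% annual growth in digital storage

human_genomes_bytes = HUMANS * GENOME_NT / NT_PER_BYTE
print(f"all human genomes:        {human_genomes_bytes:.2e} bytes")            # ~1e19 bytes
print(f"digital vs genomes, 2014: {DIGITAL_2014 / human_genomes_bytes:.0f}x")  # ~450x, i.e. roughly the quoted 500x

earth_dna_bytes = EARTH_DNA_BP / NT_PER_BYTE
print(f"all DNA on Earth:         {earth_dna_bytes:.3e} bytes")                # ~1.325e37 bytes

years = math.log(earth_dna_bytes / DIGITAL_2014) / math.log(1 + CAGR)
print(f"years to catch up at {CAGR:.0%} growth: {years:.0f}")                  # ~110 years
```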
In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[46]

Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist.[46]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[47][improper synthesis?]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[48] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after synthetic viruses with specific genetic information have been engineered, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[49]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious".[50] Singularitarianism has also been likened to a religion by John Horgan.[51]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."[21]

In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence. In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time.[3][52]
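The arithmetic behind Solomonoff's "infinity point" is a convergent geometric series: the successive doubling times shrink geometrically, so although infinitely many doublings occur, they all fit inside a finite span (eight years, using the four-year starting figure quoted above).

```latex
% Total time for infinitely many speed doublings, with each doubling taking
% half as long as the previous one (4 years, 2 years, 1 year, ...):
\[
  \sum_{k=0}^{\infty} \frac{4}{2^{k}}
  \;=\; 4 + 2 + 1 + \tfrac{1}{2} + \cdots
  \;=\; \frac{4}{1 - \tfrac{1}{2}}
  \;=\; 8 \ \text{years},
\]
% so in this model speed (and capability) grows without bound before the
% eight-year mark, the "infinity point".
```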
In 1983, Vinge greatly popularized Good's intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" in a way that was specifically tied to the creation of intelligent machines,[53][54] writing:

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.

Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era"[4] spread widely on the internet and helped to popularize the idea.[55] This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[4]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[24]

In 2005, Kurzweil published The Singularity is Near. Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart.[56]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.[9][57] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.[9]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."[58] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[59][60][61]

President of the United States Barack Obama spoke about the singularity in his 2016 interview with Wired:[62]

One thing that we haven't talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren't spending a lot of time right now worrying about singularity; they are worrying about "Well, is my job going to be replaced by a machine?"

The singularity is referenced in innumerable science-fiction works.
In Greg Bear's sci-fi novel Blood Music (1983), a singularity occurs in a matter of hours.[4] David Brin's Lungfish (1987) proposes that AIs be given humanoid bodies and raised as our children, taught the same way we were.[63] In William Gibson's 1984 novel Neuromancer, artificial intelligences capable of improving their own programs are strictly regulated by special "Turing police" to ensure they never exceed a certain level of intelligence, and the plot centers on the efforts of one such AI to circumvent their control.[63][64] In Greg Benford's 1998 Me/Days, it is legally required that an AI's memory be erased after every job.[63]

The entire plot of Wally Pfister's Transcendence centers on an unfolding singularity scenario.[65] The 2013 science fiction film Her follows a man's romantic relationship with a highly intelligent AI, who eventually learns how to improve herself and creates an intelligence explosion. The 1982 film Blade Runner and the 2015 film Ex Machina are two mildly dystopian visions of the impact of artificial general intelligence. Unlike Blade Runner, Her and Ex Machina both attempt to present "plausible" near-future scenarios that are intended to strike the audience as "not just possible, but highly probable".[66]

Source: Technological singularity - Wikipedia, https://en.m.wikipedia.org/wiki/Technological_singularity