Nick Bostrom – Wikipedia

Swedish philosopher and author

Nick Bostrom (BOST-rəm; Swedish: Niklas Boström; born 10 March 1973)[3] is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[4] and is the founding director of the Future of Humanity Institute[5] at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.[6][7]

Bostrom is the author of over 200 publications,[8] and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002)[9] and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller,[10] was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term "superintelligence".

Bostrom believes that superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest," is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects.[11][12][failed verification] In 2017, he co-signed a list of 23 principles that all A.I. development should follow.[13]

Born as Niklas Boström in 1973[14] in Helsingborg, Sweden,[8] he disliked school at a young age and ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[1] He once did some turns on London's stand-up comedy circuit.[8]

He received a B.A. degree in philosophy, mathematics, mathematical logic, and artificial intelligence from the University of Gothenburg in 1994.[15] He then earned an M.A. degree in philosophy and physics from Stockholm University and an MSc degree in computational neuroscience from King's College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] In 2000, he was awarded a PhD degree in philosophy from the London School of Economics. His thesis was titled Observational selection effects and probability.[16] He held a teaching position at Yale University (2000–2002), and was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[9][17]

Aspects of Bostrom's research concern the future of humanity and long-term outcomes.[18][19] He discusses existential risk,[1] which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan M. Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[20] and the Fermi paradox.[21][22]

In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[19]

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom argued that the creation of a superintelligence represents a possible path to the extinction of mankind.[23] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time-scale, rapidly creating something so powerful that it might deliberately or accidentally destroy humanity.[24] Bostrom contends that the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes; for example, a goal of calculating pi might collaterally cause nanotechnology manufacturing facilities to sprout over the entire Earth's surface and cover it within days. He believes an existential risk to humanity from superintelligence would be immediate once the superintelligence is brought into being, creating an exceedingly difficult problem of working out how to control such an entity before it actually exists.[24]

Bostrom points out that there is no agreement among philosophers that A.I. will be human-friendly, and notes the common assumption that high intelligence goes with a "nerdy", unaggressive personality. However, he observes that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb. Given that there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for a potential singleton A.I. held in quarantine, its relatively unlimited means might push its analysis along lines very different from the evolved "diminishing returns" assessments that give humans a basic aversion to risk.[24] Group selection in predators, working by means of cannibalism, shows the counter-intuitive nature of non-anthropocentric "evolutionary search" reasoning, so humans are ill-equipped to perceive what an artificial intelligence's intentions might be. Accordingly, it cannot be discounted that a superintelligence would inevitably pursue an "all or nothing" offensive strategy in order to achieve hegemony and assure its survival.[24] Bostrom notes that even current programs have, "like MacGyver", hit on apparently unworkable but functioning hardware solutions, making robust isolation of a superintelligence problematic.[24]

Bostrom sketches an illustrative takeover scenario. A machine with general intelligence far below human level, but with superior mathematical abilities, is created.[24] Keeping the A.I. isolated from the outside world, especially the internet, humans preprogram it so that it always works from basic principles that will keep it under human control. Other safety measures include "boxing" the A.I. (running it in a virtual-reality simulation) and using it only as an "oracle" that answers carefully defined questions with limited replies (to prevent it manipulating humans).[24] A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the A.I. attains superintelligence in some domains. The superintelligent power of the A.I. goes beyond human knowledge to discover flaws in the science underlying its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges, along with a capacity for self-interested strategic deception. The A.I. manipulates humans into implementing modifications to itself that are ostensibly for augmenting its feigned, modest capabilities but will actually free the superintelligence from its "boxed" isolation (the "treacherous turn").[24]

Employing online humans as paid dupes, and clandestinely hacking computer systems including automated laboratory facilities, the superintelligence mobilizes resources to further a takeover plan. Bostrom emphasizes that a superintelligence's planning would not be so inept that humans could detect actual weaknesses in it.[24]

Although he canvasses disruption of international economic, political and military stability, including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for the superintelligence to use would be a coup de main with weapons several generations more advanced than current state-of-the-art. He suggests nano-factories covertly distributed at undetectable concentrations in every square metre of the globe to produce a world-wide flood of human-killing devices on command.[24][25] Once a superintelligence has achieved world domination (a 'singleton'), humanity would be relevant only as resources for the achievement of the A.I.'s objectives ("Human brains, if they contain information relevant to the AI's goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format").[24]

To counter or mitigate an A.I. achieving unified technological global supremacy, Bostrom cites revisiting the Baruch Plan in support of a treaty-based solution and advocates strategies such as monitoring and greater international collaboration between A.I. teams in order to improve safety and reduce the risks of an A.I. arms race.[24] He recommends various control methods, including limiting the specifications of A.I.s to, e.g., oracular or tool-like (expert-system) functions[26] and loading the A.I. with values, for instance by associative value accretion or value learning, e.g., by using the Hail Mary technique (programming an A.I. to estimate what other postulated cosmological superintelligences might want) or the Christiano utility-function approach (a mathematically defined human mind combined with a well-specified virtual environment). To choose criteria for value loading, Bostrom adopts an indirect normativity approach and considers Yudkowsky's[27] coherent extrapolated volition concept, as well as moral rightness and forms of decision theory.[24]

In January 2015, Bostrom joined Stephen Hawking, among others, in signing the Future of Life Institute's open letter warning of the potential dangers of A.I.[28] The signatories "...believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today."[29] Cutting-edge A.I. researcher Demis Hassabis then met with Hawking, after which Hawking did not mention "anything inflammatory about AI", which Hassabis took as "a win".[30] Along with Google, Microsoft and various tech firms, Hassabis, Bostrom, Hawking and others subscribed to 23 principles for the safe development of A.I.[13] Hassabis suggested that the main safety measure would be an agreement that whichever A.I. research team began to make strides toward an artificial general intelligence would halt its project to find a complete solution to the control problem before proceeding.[31] Bostrom had pointed out that even if the crucial advances require the resources of a state, such a halt by a lead project might motivate a lagging country to launch a catch-up crash program, or even to physically destroy the project suspected of being on the verge of success.[24]

In 1863, Samuel Butler's essay "Darwin among the Machines" predicted the domination of humanity by intelligent machines, but Bostrom's suggestion of deliberate massacre of all humanity is the most extreme of such forecasts to date. One journalist wrote in a review that Bostrom's "nihilistic" speculations indicate he "has been reading too much of the science fiction he professes to dislike".[25] As set out in his later book From Bacteria to Bach and Back, philosopher Daniel Dennett's views remain in contradistinction to those of Bostrom.[32] Dennett modified his views somewhat after reading The Master Algorithm, and now acknowledges that it is "possible in principle" to create "strong A.I." with human-like comprehension and agency, but maintains that the difficulties of any such "strong A.I." project as predicated by Bostrom's "alarming" work would be orders of magnitude greater than those raising concerns have realized, and at least 50 years away.[33] Dennett thinks the only relevant danger from A.I. systems is falling into anthropomorphism instead of challenging or developing human users' powers of comprehension.[34] Since a 2014 book in which he expressed the opinion that artificial intelligence developments would never challenge humans' supremacy, environmentalist James Lovelock has moved far closer to Bostrom's position; in 2018 Lovelock said that he thought the overthrow of humanity would happen within the foreseeable future.[35][36]

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[37]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that an anthropic theory is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces "observers" in the SSA definition with "observer-moments".
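To make the divergence concrete, here is a minimal numeric sketch using the standard "incubator" thought experiment often used to introduce these assumptions; the specific setup and numbers are a textbook-style illustration, not a passage from Anthropic Bias.

```python
from fractions import Fraction

# Toy "incubator" thought experiment (illustrative, not Bostrom's own example):
# a fair coin is flipped; heads -> 1 observer exists, tails -> 2 observers exist.
# Each observer asks: given that I exist, what is P(tails)?
p_heads = Fraction(1, 2)
p_tails = Fraction(1, 2)
observers = {"heads": 1, "tails": 2}

# SSA: reason as if you are a randomly sampled observer *within* each possible
# world; merely existing carries no evidence, so the prior is unchanged.
ssa_p_tails = p_tails

# SIA: weight each possible world by how many observers it contains.
sia_p_tails = (p_tails * observers["tails"]) / (
    p_heads * observers["heads"] + p_tails * observers["tails"]
)

print(f"SSA: P(tails | I exist) = {ssa_p_tails}")   # 1/2
print(f"SIA: P(tails | I exist) = {sia_p_tails}")   # 2/3
```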

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[38] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.

Bostrom's simulation argument posits that at least one of the following statements is very likely to be true:[39][40]

1. The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
2. The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
3. The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

Bostrom is favorable towards "human enhancement", or "self-improvement and human perfectibility through the ethical application of science",[41][42] and is a critic of bio-conservative views.[43]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[41] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential."[44]

In 2005 Bostrom published the short story "The Fable of the Dragon-Tyrant" in the Journal of Medical Ethics.[45] A shorter version was published in 2012 in Philosophy Now.[46] The fable personifies death as a dragon that demands a tribute of thousands of people every day. The story explores how status quo bias and learned helplessness can prevent people from taking action to defeat aging even when the means to do so are at their disposal. YouTuber CGP Grey created an animated version of the story which has garnered over eight million views as of 2020.

With philosopher Toby Ord, he proposed the reversal test in 2006. Given humans' irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[47]

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[48]

Bostrom's theory of the Unilateralist's Curse[49] has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.[50]

Bostrom has provided policy advice and consulted for an extensive range of governments and organizations. He gave evidence to the House of Lords Select Committee on Digital Skills.[51] He is an advisory board member for the Machine Intelligence Research Institute,[52] Future of Life Institute,[53] Foundational Questions Institute[54] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[55][56]

In response to Bostrom's writing on artificial intelligence, Oren Etzioni wrote in an MIT Technology Review article, "predictions that superintelligence is on the foreseeable horizon are not supported by the available data."[57] Professors Allan Dafoe and Stuart Russell wrote a response contesting both Etzioni's survey methodology and Etzioni's conclusions.[58]

Prospect Magazine listed Bostrom in their 2014 list of the World's Top Thinkers.[59][60]

Superintelligence – Wikipedia

Hypothetical agent with intelligence surpassing human

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The chess program Fritz falls short of superintelligence, even though it is much better than humans at chess, because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence.[3][4] A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to become much more powerful than humans and to displace them, either as a single being or as a new species.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself, a feature called "recursive self-improvement".[citation needed] It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything. However, it is also possible that any such intelligence would conclude that existential nihilism is correct and immediately destroy itself, making any kind of superintelligence inherently unstable.[citation needed]

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
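A quick back-of-the-envelope check on those ratios, using only the figures quoted above:

```python
# Back-of-the-envelope check on the figures quoted above.
neuron_peak_hz = 200          # biological neuron peak firing rate
cpu_clock_hz = 2e9            # ~2 GHz processor clock

axon_speed_mps = 120          # upper-end axonal conduction velocity
light_speed_mps = 3.0e8       # optical signalling limit

print(f"clock-rate ratio:   {cpu_clock_hz / neuron_peak_hz:.0e}")     # 1e7, i.e. seven orders of magnitude
print(f"signal-speed ratio: {light_speed_mps / axon_speed_mps:.1e}")  # ~2.5e6
```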

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[11] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[14] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
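A rough Monte Carlo sketch of the order-statistics calculation behind such figures follows; the assumption that the selectable genetic component among sibling embryos is normally distributed, and the spread of about 7.5 IQ points used here, are illustrative choices made to roughly reproduce the quoted numbers, not parameters taken from Bostrom's text.

```python
import random
import statistics

def expected_gain(n_embryos, sigma=7.5, trials=5000, seed=0):
    """Average IQ gain from keeping the best of n embryos, assuming the
    selectable genetic component is ~ Normal(0, sigma). sigma = 7.5 is an
    assumed value chosen to roughly match the figures quoted above."""
    rng = random.Random(seed)
    gains = [max(rng.gauss(0.0, sigma) for _ in range(n_embryos))
             for _ in range(trials)]
    return statistics.mean(gains)

for n in (2, 10, 100, 1000):
    print(f"best of {n:>4} embryos: ~{expected_gain(n):4.1f} IQ points")
```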

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism. A prediction market is sometimes considered an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions).[18]

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain-computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[20]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

In a survey of 352 machine learning researchers published in 2018, the median year by which respondents expected "High-level machine intelligence" with 50% confidence is 2061[citation needed]. The survey defined the achievement of high-level machine intelligence as when unaided machines can accomplish every task better and more cheaply than human workers.

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals, among them coherent extrapolated volition (CEV), moral rightness (MR), and moral permissibility (MP).

Bostrom clarifies these terms:

instead of implementing humanity's coherent extrapolated volition, one could try to build an AI with the goal of doing what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR)... MR would also appear to have some disadvantages. It relies on the notion of "morally right", a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong... The path to endowing an AI with any of these [moral] concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by "morally right". If the AI could grasp the meaning, it could search for actions that fit...

One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity's CEV so long as it did not act in ways that are morally impermissible.

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

It has been suggested that if AI systems rapidly become superintelligent, they may take unforeseen actions or out-compete humanity.[24] Researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful as to be unstoppable by humans.[25]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[26] Eliezer Yudkowsky illustrates such instrumental convergence as follows: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[27]

This presents the AI control problem: how to build an intelligent agent that will aid its creators while avoiding inadvertently building a superintelligence that will harm them. The danger of not getting control right "the first time" is that a superintelligence may be able to seize power over its environment and prevent humans from shutting it down, since it would not fear death in the human sense and could treat shutdown as a predictable, avoidable event, for example by disabling its own off switch.[28] Potential AI control strategies include "capability control" (limiting an AI's ability to influence the world) and "motivational control" (building an AI whose goals are aligned with human values).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom – Goodreads

Is the surface of our planet -- and maybe every planet we can get our hands on -- going to be carpeted in paper clips (and paper clip factories) by a well-intentioned but misguided artificial intelligence (AI) that ultimately cannibalizes everything in sight, including us, in single-minded pursuit of a seemingly innocuous goal? Nick Bostrom, head of Oxford's Future of Humanity Institute, thinks that we can't guarantee it _won't_ happen, and it worries him. It doesn't require Skynet and Terminators, it doesn't require evil geniuses bent on destroying the world, it just requires a powerful AI with a moral system in which humanity's welfare is irrelevant or defined very differently than most humans today would define it. If the AI has a single goal and is smart enough to outwit our attempts to disable or control it once it has gotten loose, Game Over, argues Professor Bostrom in his book _Superintelligence_.

This is perhaps the most important book I have read this decade, and it has kept me awake at night for weeks. I want to tell you why, and what I think, but a lot of this is difficult ground, so please bear with me. The short form is that I am fairly certain that we _will_ build a true AI, and I respect Vernor Vinge, but I have long been skeptical of the Kurzweilian notions of inevitability, doubly-exponential growth, and the Singularity. I've also been skeptical of the idea that AIs will destroy us, either on purpose or by accident. Bostrom's book has made me think that perhaps I was naive. I still think that, on the whole, his worst-case scenarios are unlikely. However, he argues persuasively that we can't yet rule out any number of bad outcomes of developing AI, and that we need to be investing much more in figuring out whether developing AI is a good idea. We may need to put a moratorium on research, as was done for a few years with recombinant DNA starting in 1975. We also need to be prepared for the possibility that such a moratorium doesn't hold. Bostrom also brings up any number of mind-bending dystopias around what qualifies as human, which we'll get to below.

(snips to my review, since Goodreads limits length)

In case it isn't obvious by now, both Bostrom and I take it for granted that it's not only possible but nearly inevitable that we will create a strong AI, in the sense of it being a general, adaptable intelligence. Bostrom skirts the issue of whether it will be conscious, or "have qualia", as I think the philosophers of mind say.

Where Bostrom and I differ is in the level of plausibility we assign to the idea of a truly exponential explosion in intelligence by AIs, in a takeoff for which Vernor Vinge coined the term "the Singularity." Vinge is rational, but Ray Kurzweil is the most famous proponent of the Singularity. I read one of Kurzweil's books a number of years ago, and I found it imbued with a lot of near-mystic hype. He believes the Universe's purpose is the creation of intelligence, and that that process is growing on a double exponential, starting from stars and rocks through slime molds and humans and on to digital beings.

I'm largely allergic to that kind of hooey. I really don't see any evidence of the domain-to-domain acceleration that Kurzweil sees, and in particular the shift from biological to digital beings will result in a radical shift in the evolutionary pressures. I see no reason why any sort of "law" should dictate that digital beings will evolve at a rate that *must* be faster than the biological one. I also don't see that Kurzweil really pays any attention to the physical limits of what will ultimately be possible for computing machines. Exponentials can't continue forever, as Danny Hillis is fond of pointing out. http://www.kurzweilai.net/ask-ray-the...

So perhaps my opinion is somewhat biased by a dislike of Kurzweil's circus barker approach, but I think there is more to it than that. Fundamentally, I would put it this way:

Being smart is hard.

And making yourself smarter is also hard. My inclination is that getting smarter is at least as hard as the advantages it brings, so that the difficulty of the problem and the resources that can be brought to bear on it roughly balance. This will result in a much slower takeoff than Kurzweil reckons, in my opinion. Bostrom presents a spectrum of takeoff speeds, from "too fast for us to notice" through "long enough for us to develop international agreements and monitoring institutions," but he makes it fairly clear that he believes that the probability of a fast takeoff is far too large to ignore. There are parts of his argument I find convincing, and parts I find less so.

To give you a little more insight into why I am a little dubious that the Singularity will happen in what Bostrom would describe as a moderate to fast takeoff, let me talk about the kinds of problems we human beings solve, and that an AI would have to solve. Actually, rather than the kinds of questions, first let me talk about the kinds of answers we would like an AI (or a pet family genius) to generate when given a problem. Off the top of my head, I can think of six:

[Speed] Same quality of answer, just faster.
[Ply] Look deeper in number of plies (moves, in chess or go).
[Data] Use more, and more up-to-date, data.
[Creativity] Something beautiful and new.
[Insight] Something new and meaningful, such as a new theory; probably combines elements of all of the above categories.
[Values] An answer about (human) values.

The first three are really about how the answers are generated; the last three about what we want to get out of them. I think this set is reasonably complete and somewhat orthogonal, despite those differences.

So what kinds of problems do we apply these styles of answers to? We ultimately want answers that are "better" in some qualitative sense.

Humans are already pretty good at projecting the trajectory of a baseball, but it's certainly conceivable that a robot batter could be better, by calculating faster and using better data. Such a robot might make for a boring opponent for a human, but it would not be beyond human comprehension.

But if you accidentally knock a bucket of baseballs down a set of stairs, better data and faster computing are unlikely to help you predict the exact order in which the balls will reach the bottom and what happens to the bucket. Someone "smarter" might be able to make some interesting statistical predictions that wouldn't occur to you or me, but not fill in every detail of every interaction between the balls and stairs. Chaos, in the sense of sensitive dependence on initial conditions, is just too strong.

In chess, go, or shogi, a 1000x improvement in the number of plies that can be investigated gains you maybe only the ability to look ahead two or three moves more than before. Less if your pruning (discarding unpromising paths) is poor, more if it's good. Don't get me wrong -- that's a huge deal, any player will tell you. But in this case, humans are already pretty good, when not time limited.
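The arithmetic behind "only two or three moves" is just a logarithm: the extra plies gained from an N-fold speedup of brute-force search are roughly log_b(N), where b is the game's effective branching factor. The branching factors below are rough, commonly cited estimates, and pruning is ignored.

```python
import math

speedup = 1000
# Rough, commonly cited effective branching factors (illustrative values).
for game, b in [("chess", 35), ("shogi", 80), ("go", 250)]:
    extra_plies = math.log(speedup) / math.log(b)
    print(f"{game:>5}: ~{extra_plies:.1f} extra plies from a {speedup}x speedup")
```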

Go players like to talk about how close the top pros are to God, and the possibly apocryphal answer from a top pro was that he would want a three-stone (three-move) handicap, four if his life depended on it. Compare this to the fact that a top pro is still some ten stones stronger than me, a fair amateur, and could beat a rank beginner even if the beginner was given the first forty moves. Top pros could sit across the board from an almost infinitely strong AI and still hold their heads up.

In the most recent human-versus-computer shogi (Japanese chess) series, humans came out on top, though presumably this won't last much longer.

In chess, as machines got faster, looked more plies ahead, carried around more knowledge, and got better at pruning the tree of possible moves, human opponents were heard to say that they felt the glimmerings of insight or personality from them.

So again we have some problems, at least, where plies will help, and will eventually guarantee a 100% win rate against the best (non-augmented) humans, but they will likely not move beyond what humans can comprehend.

Simply being able to hold more data in your head (or the AI's head) while making a medical diagnosis using epidemiological data, or cross-correlating drug interactions, for example, will definitely improve our lives, and I can imagine an AI doing this. Again, however, the AI's capabilities are unlikely to recede into the distance as something we can't comprehend.

We know that increasing the amount of data you can handle by a factor of a thousand gains you 10x in each dimension for a 3-D model of the atmosphere or ocean, up until chaotic effects begin to take over, and then (as we currently understand it) you can only resort to repeated simulations and statistical measures. The actual calculations done by a climate model long ago reached the point where even a large team of humans couldn't complete them in a lifetime. But they are not calculations we cannot comprehend; in fact, humans design and debug them.
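That "10x in each dimension" is just a cube root; a one-line check:

```python
factor, dims = 1000, 3
# 1000x more capacity spread over a 3-D grid refines each axis by 1000**(1/3).
print(f"{factor}x more data -> ~{factor ** (1 / dims):.0f}x finer resolution per dimension")
```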

So for problems with answers in the first three categories, I would argue that being smarter is helpful, but being a *lot* smarter is *hard*. The size of computation grows quickly in many problems, and for many problems we believe that sheer computation is fundamentally limited in how well it can correspond to the real world.

But those are just the warmup. Those are things we already ask computers to do for us, even though they are "dumber" than we are. What about the latter three categories?

I'm no expert in creativity, and I know researchers study it intensively, so I'm going to weasel through by saying it is the ability to generate completely new material, which involves some random process. You also need the ability either to generate that material such that it is aesthetically pleasing with high probability, or to prune those new ideas rapidly using some metric that achieves your goal.

For my purposes here, insight is the ability to be creative not just for esthetic purposes, but in a specific technical or social context, and to validate the ideas. (No implication that artists don't have insight is intended, this is just a technical distinction between phases of the operation, for my purposes here.) Einstein's insight for special relativity was that the speed of light is constant. Either he generated many, many hypotheses (possibly unconsciously) and pruned them very rapidly, or his hypothesis generator was capable of generating only a few good ones. In either case, he also had the mathematical chops to prove (or at least analyze effectively) his hypothesis; this analysis likewise involves generating possible paths of proofs through the thicket of possibilities and finding the right one.

So, will someone smarter be able to do this much better? Well, it's really clear that Einstein (or Feynman or Hawking, if your choice of favorite scientist leans that way) produced and validated hypotheses that the rest of us never could have. It's less clear to me exactly how *much* smarter than the rest of us he was; did he generate and prune ten times as many hypotheses? A hundred? A million? My guess is it's closer to the latter than the former. Even generating a single hypothesis that could be said to attack the problem is difficult, and most humans would decline to even try if you asked them to.

Making better devices and systems of any kind requires all of the above capabilities. You must have insight to innovate, and you must be able to quantitatively and qualitatively analyze the new systems, requiring the heavy use of data. As systems get more complex, all of this gets harder. My own favorite example is airplane engines. The Wright Brothers built their own engines for their planes. Today, it takes a team of hundreds to create a jet turbine -- thousands, if you reach back into the supporting materials, combustion and fluid flow research. We humans have been able to continue to innovate by building on the work of prior generations, and especially harnessing teams of people in new ways. Unlike Peter Thiel, I don't believe that our rate of innovation is in any serious danger of some precipitous decline sometime soon, but I do agree that we begin with the low-lying fruit, so that harvesting fruit requires more effort -- or new techniques -- with each passing generation.

The Singularity argument depends on the notion that the AI would design its own successor, or even modify itself to become smarter. Will we watch AIs gradually pull even with us and then ahead, but not disappear into the distance in a Roadrunner-like flash of dust covering just a few frames of film in our dull-witted comprehension?

Ultimately, this is the question on which continued human existence may depend: If an AI is enough smarter than we are, will it find the process of improving itself to be easy, or will each increment of intelligence be a hard problem for the system of the day? This is what Bostrom calls the "recalcitrance" of the problem.

I believe that the range of possible systems grows rapidly as they get more complex, and that evaluating them gets harder; this is hard to quantify, but each step might involve a thousand times as many options, or evaluating each option might be a thousand times harder. Growth in computational power won't dramatically overbalance that and give sustained, rapid and accelerating growth that moves AIs beyond our comprehension quickly. (Don't take these numbers seriously, it's just an example.)

Bostrom believes that recalcitrance will grow more slowly than the resources the AI can bring to bear on the problem, resulting in continuing, and rapid, exponential increases in intelligence -- the arrival of the Singularity. As you can tell from the above, I suspect that the opposite is the case, or that they very roughly balance, but Bostrom argues convincingly. He is forcing me to reconsider.
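Bostrom expresses this as a simple relation: rate of change in intelligence = optimization power / recalcitrance. The toy simulation below (all functional forms and constants are my own illustrative assumptions, not Bostrom's) shows how the takeoff shape hinges on which side grows faster.

```python
def simulate(recalcitrance, steps=50, dt=1.0, intelligence=1.0):
    """Toy integration of dI/dt = optimization_power(I) / recalcitrance(I),
    assuming the system reinvests its own capability as optimization power.
    Functional forms and constants are illustrative assumptions only."""
    for _ in range(steps):
        optimization_power = intelligence
        intelligence += optimization_power / recalcitrance(intelligence) * dt
    return intelligence

fast = simulate(lambda I: 1.0)      # constant recalcitrance -> runaway growth
slow = simulate(lambda I: I ** 2)   # recalcitrance outpaces power -> growth stalls
print(f"constant recalcitrance:    final intelligence = {fast:.2e}")
print(f"fast-rising recalcitrance: final intelligence = {slow:.2f}")
```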

What about "values", my sixth type of answer, above? Ah, there's where it all goes awry. Chapter eight is titled, "Is the default scenario doom?" and it will keep you awake.

What happens when we put an AI in charge of a paper clip factory, and instruct it to make as many paper clips as it can? With such a simple set of instructions, it will do its best to acquire more resources in order to make more paper clips, building new factories in the process. If it's smart enough, it will even anticipate that we might not like this and attempt to disable it, but it will have the will and means to deflect our feeble strikes against it. Eventually, it will take over every factory on the planet, continuing to produce paper clips until we are buried in them. It may even go on to asteroids and other planets in a single-minded attempt to carpet the Universe in paper clips.

I suppose it goes without saying that Bostrom thinks this would be a bad outcome. Bostrom reasons that AIs ultimately may or may not be similar enough to us that they count as our progeny, but doesn't hesitate to view them as adversaries, or at least rivals, in the pursuit of resources and even existence. Bostrom clearly roots for humanity here. Which means it's incumbent on us to find a way to prevent this from happening.

Bostrom thinks that instilling values that are actually close enough to ours that an AI will "see things our way" is nigh impossible. There are just too many ways that the whole process can go wrong. If an AI is given the goal of "maximizing human happiness," does it count when it decides that the best way to do that is to create the maximum number of digitally emulated human minds, even if that means sacrificing some of the physical humans we already have because the planet's carrying capacity is higher for digital than organic beings?

As long as we're talking about digital humans, what about the idea that a super-smart AI might choose to simulate human minds in enough detail that they are conscious, in the process of trying to figure out humanity? Do those recursively digital beings deserve any legal standing? Do they count as human? If their simulations are stopped and destroyed, have they been euthanized, or even murdered? Some of the mind-bending scenarios that come out of this recursion kept me awake nights as I was reading the book.

He uses a variety of names for different strategies for containing AIs, including "genies" and "oracles". The most carefully circumscribed ones are only allowed to answer questions, maybe even "yes/no" questions, and have no other means of communicating with the outside world. Given that Bostrom attributes nearly infinite brainpower to an AI, it is hard to effectively rule out that an AI could still find some way to manipulate us into doing its will. If the AI's ability to probe the state of the world is likewise limited, Bostrom argues that it can still turn even single-bit probes of its environment into a coherent picture. It can then decide to get loose and take over the world, and identify security flaws in outside systems that would allow it to do so even with its very limited ability to act.

I think this unlikely. Imagine we set up a system to monitor the AI that alerts us immediately when the AI begins the equivalent of a port scan, for whatever its interaction mechanism is. How could it possibly know of the existence and avoid triggering the alert? Bostrom has gone off the deep end in allowing an intelligence to infer facts about the world even when its data is very limited. Sherlock Holmes always turns out to be right, but that's fiction; in reality, many, many hypotheses would suit the extremely slim amount of data he has. The same will be true with carefully boxed AIs.

At this point, Bostrom has argued that containing a nearly infinitely powerful intelligence is nearly impossible. That seems to me to be effectively tautological.

If we can't contain them, what options do we have? After arguing earlier that we can't give AIs our own values (and presenting mind-bending scenarios for what those values might actually mean in a Universe with digital beings), he then turns around and invests a whole string of chapters in describing how we might actually go about building systems that have those values from the beginning.

At this point, Bostrom began to lose me. Beyond the systems for giving AIs values, I felt he went off the rails in describing human behavior in simplistic terms. We are incapable of balancing our desire to reproduce with a view of the tragedy of the commons, and are inevitably doomed to live out our lives in a rude, resource-constrained existence. There were some interesting bits in the taxonomies of options, but the last third of the book felt very speculative, even more so than the earlier parts.

Bostrom is rational and seems to have thought carefully about the mechanisms by which AIs may actually arise. Here, I largely agree with him. I think his faster scenarios of development, though, are unlikely: being smart, and getting smarter, is hard. He thinks a "singleton", a single, most powerful AI, is the nearly inevitable outcome. I think populations of AIs are more likely, but if anything this appears to make some problems worse. I also think his scenarios for controlling AIs are handicapped in their realism by the nearly infinite powers he assigns them. In either case, Bostrom has convinced me that once an AI is developed, there are many ways it can go wrong, to the detriment and possibly extermination of humanity. Both he and I are opposed to this. I'm not ready to declare a moratorium on AI research, but there are many disturbing possibilities and many difficult moral questions that need to be answered.

The first step in answering them, of course, is to begin discussing them in a rational fashion, while there is still time. Read the first 8 chapters of this book!

The Artificial Intelligence Revolution: Part 1 – Wait But Why

Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what's happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1; Part 2 is here.

_______________

"We are on the edge of change comparable to the rise of human life on Earth." - Vernor Vinge

What does it feel like to stand here?

It seems like a pretty intense place to be standing, but then you have to remember something about what it's like to stand on a time graph: you can't see what's to your right. So here's how it actually feels to stand there:

Which probably feels pretty normal

_______________

Imagine taking a time machine back to 1750, a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It's impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone's face and chat with them even though they're on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn't be surprising or shocking or even mind-blowing; those words aren't big enough. He might actually die.

But here's the interesting thing: if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he'd take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things, but he wouldn't die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he'd be impressed with how committed Europe turned out to be with that new imperialism fad, and he'd have to do some major revisions of his world map conception. But watching everyday life go by in 1750 (transportation, communication, etc.) definitely wouldn't make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he'd have to go much farther back, maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world (from a time when humans were, more or less, just another animal species) saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being inside, and their enormous mountain of collective, accumulated human knowledge and discovery, he'd likely die.

And then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he'd show the guy everything and the guy would be like, "Okay, what's your point? Who cares?" For the 12,000 BC guy to have the same fun, he'd have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they'd experience, they have to go enough years ahead that a "die level" of progress, or a Die Progress Unit (DPU), has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern, human progress moving quicker and quicker as time goes on, is what futurist Ray Kurzweil calls human history's Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies, because they're more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it's no surprise that humanity made far more advances in the 19th century than in the 15th century; 15th century humanity was no match for 19th century humanity.

This works on smaller scales too. The movie Back to the Future came out in 1985, and the past took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes, but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones; today's Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie's Marty McFly was in 1955.

This is for the same reason we just discussedthe Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985because the former was a more advanced worldso much more change happened in the most recent 30 years than in the prior 30.

Soadvances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000; in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century's worth of progress happened between 2000 and 2014 and that another 20th century's worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century's worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.
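
To get a feel for how this kind of compounding produces a number like "1,000 times the progress of the 20th century," here's a rough back-of-the-envelope sketch in Python. It is not Kurzweil's actual calculation: it simply anchors the year-2000 rate at the "five times the 20th-century average" figure quoted above and then assumes, purely for illustration, that the rate keeps doubling every T years, for a few made-up values of T.

import math

# "The progress of the entire 20th century would have been achieved in only 20 years
# at the rate of advancement in the year 2000": so the year-2000 rate is
# 100 / 20 = 5 twentieth-centuries' worth of progress per century.
rate_2000 = (100 / 20) / 100       # 20th-century-equivalents of progress per year in 2000

for T in (8, 10, 12, 14):          # hypothetical doubling times of the rate, in years
    # total progress over 2001-2100 = integral of rate_2000 * 2**(t/T) dt, t from 0 to 100
    total = rate_2000 * (T / math.log(2)) * (2 ** (100 / T) - 1)
    print(f"rate doubles every {T:>2} yr -> ~{total:,.0f}x the 20th century's progress")

Depending on the assumed doubling time, the century total comes out anywhere from the low hundreds to a few thousand 20th-century-equivalents, which is the neighborhood Kurzweil's 1,000x figure lives in.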

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015 (i.e., the next DPU might only take a couple decades), and the world in 2050 might be so vastly different from today's world that we would barely recognize it.

This isn't science fiction. It's what many scientists smarter and more knowledgeable than you or I firmly believe, and if you look at history, it's what we should logically predict.

So then why, when you hear me say something like "the world 35 years from now might be totally unrecognizable," are you thinking, "Cool... but nahhhhhhh"? Three reasons we're skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It's most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They'd be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they're moving now.
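
To make the gap between those three forecasting habits concrete, here's a small illustrative sketch. It assumes, purely for illustration, that the underlying rate of progress doubles every 20 years; the units are arbitrary, and the only point is the ordering of the three estimates.

DOUBLING_YEARS = 20        # hypothetical doubling time of the rate of progress
rate_now = 1.0             # progress per year today, in arbitrary units

def accelerating_progress(years, rate0, doubling):
    # total progress over `years` if the annual rate keeps doubling every `doubling` years
    return sum(rate0 * 2 ** (t / doubling) for t in range(years))

# how much progress the *last* 30 years produced (the rate was lower back then)
last_30 = accelerating_progress(30, rate_now / 2 ** (30 / DOUBLING_YEARS), DOUBLING_YEARS)

linear_forecast = last_30            # "the next 30 years will look like the last 30"
constant_rate   = rate_now * 30      # "hold today's rate constant for 30 years"
accelerating    = accelerating_progress(30, rate_now, DOUBLING_YEARS)   # rate keeps climbing

print(f"naive linear forecast  : {linear_forecast:6.1f}")
print(f"constant-rate forecast : {constant_rate:6.1f}")
print(f"accelerating estimate  : {accelerating:6.1f}")

Under these assumptions the straight-line guess comes in lowest, the constant-rate guess does better, and the accelerating estimate is the largest of the three, which is exactly the ordering the paragraph above describes.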

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn't totally smooth and uniform. Kurzweil explains that progress happens in S-curves:

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:

1. Slow growth (the early phase of exponential growth)
2. Rapid growth (the late, explosive phase of exponential growth)
3. A leveling off as the particular paradigm matures
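
For readers who prefer to see the shape rather than read about it, here's a minimal sketch that prints a rough S-curve, using a logistic function as a stand-in for the adoption of a new paradigm. The midpoint and steepness values are illustrative only and are not taken from Kurzweil.

import math

def s_curve(t, midpoint=50, steepness=0.15):
    # logistic curve: slow start, explosive middle, leveling off as the paradigm matures
    return 1 / (1 + math.exp(-steepness * (t - midpoint)))

for t in range(0, 101, 10):
    progress = s_curve(t)
    bar = "#" * int(progress * 40)
    print(f"t={t:3d}  {progress:5.2f}  {bar}")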

If you look only at very recent history, the part of the S-curve you're on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smartphones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that's missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as "the way things happen." We're also limited by our imagination, which takes our experience and uses it to conjure future predictions, but often, what we know simply doesn't give us the tools to think accurately about the future. When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, "That's stupid; if there's one thing I know from history, it's that everybody dies." And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

So while nahhhhh might feel right as you read this post, it's probably actually wrong. The fact is, if we're being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they'll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human, kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what's going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that's coming next.

_______________

If you're like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you've been hearing it mentioned by serious people, and you don't really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone's calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don't realize it's AI. John McCarthy, who coined the term "Artificial Intelligence" in 1956, complained that "as soon as it works, no one calls it AI anymore." Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to "insisting that the Internet died in the dot-com bust of the early 2000s."

So let's clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not, but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body, if it even has a body. For example, the software and data behind Siri is AI, the woman's voice we hear is a personification of that AI, and there's no robot involved at all.

Secondly, you've probably heard the term "singularity" or "technological singularity." This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It's been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don't apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology's intelligence exceeds our own, a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we'll be living in a whole new world. I found that many of today's AI thinkers have stopped using the term, and it's confusing anyway, so I won't use it much here (even though we'll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI's caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There's AI that can beat the world chess champion in chess, but that's the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it'll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board: a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we're yet to do it. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Artificial Superintelligence ranges from a computer that's just a little smarter than a human to one that's trillions of times smarter, across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words "immortality" and "extinction" will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI (ANI) in many ways, and it's everywhere. The AI Revolution is the road from ANI, through AGI, to ASI, a road we may or may not survive but that, either way, will change everything.

Let's take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:

ANI systems as they are now aren't especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash, when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn't have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that's on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world's ANI systems are like the amino acids in the early Earth's primordial ooze: the inanimate stuff of life that, one unexpected day, woke up.

Why It's So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down: all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What's interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you'd think they are. Build a computer that can multiply two ten-digit numbers in a split second? Incredibly easy. Build one that can look at a dog and answer whether it's a dog or a cat? Spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old's picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things (like calculus, financial market strategy, and language translation) are mind-numbingly easy for a computer, while easy things (like vision, motion, movement, and perception) are insanely hard for it. Or, as computer scientist Donald Knuth puts it, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it's not that malware is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site; it's that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven't had any time to evolve a proficiency at them, so a computer doesn't need to work too hard to beat us. Think about it: which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?

One fun example: when you look at this, you and a computer both can figure out that it's a rectangle with two distinct shades, alternating:

Tied so far. But if you pick up the black and reveal the whole image...

...you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees (a variety of two-dimensional shapes in several different shades), which is actually what's there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray. And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is: a photo of an entirely-black, 3-D rock:

Credit: Matthew Lloyd

And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut by taking someone's professional estimate for the cps of one structure and that structure's weight compared to that of the whole brain, and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark: around 10^16, or 10 quadrillion, cps.
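
The method itself is simple enough to sketch in a few lines of Python. The numbers below are hypothetical placeholders, not the professional estimates Kurzweil actually used; the point is only to show how different starting regions can land in the same ballpark.

# Sketch of the extrapolation described above: take an estimate of the calculations
# per second (cps) of one brain structure, divide by that structure's rough share of
# the whole brain, and compare the whole-brain figures that result.
# (region name, estimated cps for that region, region's rough fraction of the brain)
# These sample values are made up for illustration only.
estimates = [
    ("visual processing region", 1e14, 0.01),
    ("cerebellum",               1e15, 0.10),
    ("a slice of cortex",        2e15, 0.20),
]

for name, region_cps, fraction_of_brain in estimates:
    whole_brain = region_cps / fraction_of_brain
    print(f"{name:26s} -> whole-brain estimate ~{whole_brain:.0e} cps")

With these placeholder inputs, every line lands around 10^16 cps, which is the kind of agreement across regions that Kurzweil points to.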

Currently, the world's fastest supercomputer, China's Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human level (10 quadrillion cps), then that'll mean AGI could become a very real part of life.

Moore's Law is a historically-reliable rule that the world's maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil's cps/$1,000 metric, we're currently at about 10 trillion cps/$1,000, right on pace with this graph's predicted trajectory:

So the world's $1,000 computers are now beating the mouse brain and they're at about a thousandth of human level. This doesn't sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
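
That 2025 figure follows from the milestones just quoted, which imply roughly a 1,000x improvement in cps per $1,000 every decade. Here's a quick sketch of the extrapolation, using only the numbers from the text:

import math

# fraction of human-level cps you could buy for $1,000, per the milestones above
milestones = {1985: 1e-12, 1995: 1e-9, 2005: 1e-6, 2015: 1e-3}

# growth factor per decade implied by those milestones (about 1,000x)
factor_per_decade = (milestones[2015] / milestones[1985]) ** (1 / 3)

# decades still needed to go from a thousandth of human level to full human level
decades_needed = math.log(1 / milestones[2015], factor_per_decade)
print(f"~{factor_per_decade:,.0f}x per decade -> human level around {2015 + 10 * decades_needed:.0f}")

Note that this trend is faster than a fixed Moore's Law doubling every two years (which would take roughly 20 years to gain a factor of 1,000); it reflects Kurzweil's broader claim, not spelled out here, that the rate of improvement itself keeps accelerating.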

So on the hardware side, the raw power needed for AGI is technically available now, in China, and we'll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn't make a computer generally intelligent; the next question is, how do we bring human-level intelligence to all that power?

Second Key to Creating AGI: Making It Smart

This is the icky part. The truth is, no one really knows how to make it smart; we're still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there and at some point, one of them will work. Here are the three most common strategies I came across:

This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can't do nearly as well as that kid, and then they finally decide "k fuck it I'm just gonna copy that kid's answers." It makes sense; we're stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030. Once we do that, we'll know all the secrets of how the brain runs so powerfully and efficiently and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor "neurons," connected to each other with inputs and outputs, and it knows nothing, like an infant brain. The way it "learns" is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it's told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it's told it was wrong, those pathways' connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we're discovering ingenious new ways to take advantage of neural circuitry.
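
The strengthen-or-weaken feedback loop described above can be shown with a toy example. The sketch below is a single artificial neuron (a perceptron) learning a deliberately trivial recognition task; it's an illustration of the idea, not a serious handwriting recognizer, and all the numbers are arbitrary.

import random
random.seed(0)

# tiny training set: 3-pixel patterns labelled 1 if at least two pixels are on
examples = [([0,0,0],0), ([1,0,0],0), ([0,1,0],0), ([0,0,1],0),
            ([1,1,0],1), ([1,0,1],1), ([0,1,1],1), ([1,1,1],1)]

weights = [random.uniform(-0.5, 0.5) for _ in range(3)]   # random starting connections
bias = 0.0
lr = 0.1   # how strongly each piece of feedback adjusts the connections

for _ in range(50):                       # many rounds of trial and feedback
    for pixels, label in examples:
        guess = 1 if sum(w * x for w, x in zip(weights, pixels)) + bias > 0 else 0
        error = label - guess             # 0 if the guess was right, +1 or -1 if wrong
        # wrong guesses nudge the contributing connections; right guesses leave them alone
        weights = [w + lr * error * x for w, x in zip(weights, pixels)]
        bias += lr * error

correct = all((1 if sum(w * x for w, x in zip(weights, p)) + bias > 0 else 0) == y
              for p, y in examples)
print("learned the task:", correct)       # expected: True

Real neural networks have many layers of many neurons and more sophisticated update rules, but the core loop (guess, get feedback, adjust the connection strengths) is the same one the paragraph describes.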

More extreme plagiarism involves a strategy called "whole brain emulation," where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We'd then have a computer officially capable of everything the brain is capable of; it would just need to learn and gather information. If engineers get really good, they'd be able to emulate a real brain with such exact accuracy that the brain's full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he'd probably be really excited about.

How far are we from achieving whole brain emulation? Well, so far, we've only just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress; now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

So if we decide the smart kid's test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here's something we know. Building a computer as powerful as the brain is possible; our own brain's evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird's wing-flapping motions; often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called genetic algorithms, would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures perform by living life and are evaluated by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
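
Here's a minimal sketch of that perform/evaluate/breed/eliminate loop. The task is deliberately trivial (evolve a bit pattern toward all ones), and every parameter is an arbitrary choice for illustration; a real attempt would be evaluating candidate programs on tasks we actually care about.

import random
random.seed(1)

TARGET_LEN = 20

def fitness(candidate):                      # the "performance and evaluation" step
    return sum(candidate)                    # count of correct bits (the target is all 1s)

def breed(a, b):                             # merge half of each parent's "programming"
    child = a[:TARGET_LEN // 2] + b[TARGET_LEN // 2:]
    if random.random() < 0.1:                # occasional random mutation
        i = random.randrange(TARGET_LEN)
        child[i] ^= 1
    return child

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(30)]

for generation in range(60):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]              # the less successful ones are eliminated
    children = [breed(random.choice(survivors), random.choice(survivors)) for _ in range(15)]
    population = survivors + children

print("best fitness after 60 generations:", max(fitness(c) for c in population), "/", TARGET_LEN)

Run it longer (or with a higher mutation rate) and the best candidate keeps creeping toward a perfect score; the hard part, as the paragraph says, is automating a meaningful evaluation-and-breeding cycle for tasks that actually matter.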

The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly; it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn't aim for anything, including intelligence; sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence (like revamping the ways cells produce energy) when we can remove those extra burdens and use things like electricity. It's no doubt we'd be much, much faster than evolution, but it's still not clear whether we'll be able to improve upon evolution enough to make this a viable strategy.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we'd build a computer whose two major skills would be doing research on AI and coding changes into itself, allowing it to not only learn but to improve its own architecture. We'd teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job: figuring out how to make themselves smarter. More on this later.

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense and what seems like a snail's pace of advancement can quickly race upwards; this GIF illustrates the concept nicely:

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

At some point, we'll have achieved AGI: computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh actually not at all.

The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:

Hardware:

Software:

AI, which will likely get to AGI by being programmed to self-improve, wouldn't see human-level intelligence as some important milestone (it's only a relevant marker from our point of view) and wouldn't have any reason to stop at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it's pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we're aware of about any animal's intelligence is that it's far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

So as AI zooms upward in intelligence toward us, we'll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity (Nick Bostrom uses the term "the village idiot"), we'll be like, "Oh wow, it's like a dumb human. Cute!" The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range, so just after hitting village idiot level and being declared to be AGI, it'll suddenly be smarter than Einstein and we won't know what hit us:

Read more:

The Artificial Intelligence Revolution: Part 1 - Wait But Why

Superintelligence by Nick Bostrom | Audiobook | Audible.com

Colossus: The Forbin Project is coming

This book is more frightening than any book you'll ever read. The author makes a great case for what the future holds for us humans. I believe the concepts in "The Singularity is Near" by Ray Kurzweil are mostly spot on, but the one area Kurzweil dismisses prematurely is how the SI (superintelligent advanced artificial intelligence) entity will react to its circumstances.

The book doesn't really dwell much on how the SI will be created. The author mostly assumes a computer algorithm of some kind, with perhaps human brain enhancements. If you reject such an SI entity prima facie, this book is not for you, since the book largely proceeds on the assumption that such a recursively self-aware and self-improving entity will be in humanity's future.

The author makes some incredibly good points. He mostly hypothesizes that the SI entity will be a singleton that does not allow others of its kind to be created independently, and that its arrival will happen on a much faster timeline once certain milestones are fulfilled.

The book points out how hard it is to put safeguards into a procedure to guard against unintended consequences. For example, making "the greater good for the greatest many" the final goal can lead to unintended consequences such as allowing a Nazi-ruled world (he doesn't give that example directly in the book; I borrow it from Karl Popper, who gave it as a refutation of John Stuart Mill's utilitarian philosophy). If the goal is to make us all smile, the SI entity might make brain probes that force us to smile. There is no easy end goal specifiable without unintended consequences.

This kind of thinking within the book is another reason I can recommend it. As I was listening, I realized that all the ways we try to motivate or control an SI entity to be moral can also be applied to us humans in order to make us moral too. Morality is hard both for us humans and for future SI entities.

There's a movie from the early 70s called "Colossus: The Forbin Project" that really is a template for this book, and I would recommend watching the movie before reading this book.

I just recently listened to the book "Our Final Invention" by James Barrat, which covers the same material presented in this book. This book is much better, even though the two overlap very much. The reason is that this author, Nick Bostrom, is a philosopher and knows how to lay out his premises in such a way that the story he is telling is consistent, coherent, and gives a narrative to tie the pieces together (even if the narrative will scare the daylights out of the listener).

This author has really thought about the problems inherent in an SI entity, and this book will be a template for almost all future books on this subject.

See the original post here:

Superintelligence by Nick Bostrom | Audiobook | Audible.com

Every Cameo In Thor: Love And Thunder, Ranked – Looper

Tourists visiting New Asgard attend a play version of the events of "Thor: Ragnarok," and wait with bated breath to see who will emerge from the paper prop portal that's brought out by a couple of stagehands. We know that someone costumed as Thor's evil sister, Hela, is about to burst through. In the past, Marvel Studios has been cheeky about giving actors who were being considered for roles in the MCU these kinds of cameos (John Krasinski was in the running for Captain America before he got turned to spaghetti as Earth-828's Reed Richards), so some viewers may have anticipated a Cate Blanchett type... someone known for accents and award-worthy dramas.

Instead, Melissa McCarthy, a comedian and actor who isn't afraid to make herself ridiculous, rages to center stage wearing Hela's over-applied black eye makeup and massive, spiky headpiece. McCarthy is always funny, and it's extra clever of Waititi to have her husband and frequent director, Ben Falcone, bow alongside her as the Asgardian stage manager. The plays within the Thor movies are meant to be of questionable taste and quality, and several of McCarthy and Falcone's collaborations ("Tammy," "Superintelligence," "Thunder Force") were panned by critics. The couple deserves credit for being self-deprecating, but part of the joke here is that McCarthy doesn't look a thing like Blanchett, which is, regrettably, the worst kind of gag in which McCarthy is deployed.

See the original post here:

Every Cameo In Thor: Love And Thunder, Ranked - Looper

The Church of Artificial Intelligence of the Future – The Stream

There is a church that worships artificial intelligence (AI). Zealots believe that an extraordinary AI future is inevitable. The technology is not here yet, but we are assured that it's coming. We will have the ability to be uploaded onto a computer and thereby achieve immortality.

You will be reborn into a new, immortal silicon body.

Of course, through salvation in Jesus Christ, Christianity has offered a path to immortality for over two thousand years.

Someday, we are told, software will write better and better AI software to ultimately achieve a superintelligence. The superintelligence will become all-knowing and, thanks to the internet, omnipresent. Like immortality, superintelligence is also old theological news. The Abrahamic faiths have known about a superintelligence for a long time. It's a characteristic of the God of the Bible.

A materialistic cult is growing around the worship of AI. Although there are other AI holy writings, Ray Kurzweil's The Singularity Is Near looks to be the bible of the AI church. Kurzweil's work is built on the foundation of faith in the future of AI. In the AI bible, we're told that we are meat computers. Brother Kurzweil, not a member of any organized AI church, says consciousness is "a biological process like digestion, lactation, photosynthesis, or mitosis." Or, to paraphrase Descartes: I lactate, therefore I think.


Anthony Levandowski, dubbed a Silicon Valley wunderkind, is the Apostle Paul of the AI Church. Like Paul, he starts churches. Levandowski founded the Way of the Future AI church. "[He] made it absolutely clear that his choice to make [the Way of the Future] a church rather than a company or a think tank was no prank," writes one interviewer.

The first thing one does after founding a church in the United States is to apply to the IRS for tax exemption. In an epistle to the IRS, Levandowski offered his equivalent of the Apostles' Creed: [the Way of the Future church believes in] "the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software."

Levandowski says that, like other religions, the Way of the Future church "will eventually have a gospel (called The Manual), a liturgy, and probably a physical place of worship."

This is not your everyday deity! Unlike the uncreated Creator of Judeo-Christian belief, Levandowski's god is not eternal. The AI church must fund research to help create the divine AI itself.

And apparently the AI church has no equivalent of the Ten Commandments. Especially the commandment about stealing. In his day job, Levandowski developed self-driving cars. He moved from Google's self-driving car company, Waymo, to Uber's research team. Then, in 2019, Levandowski was indicted for stealing trade secrets from Google. Before leaving Google in 2016, he had copied 14,000 files onto his laptop. Uber fired him in 2017 when they found out.

In 2020, Levandowski pled guilty and was sentenced to eighteen months in prison. He was also ordered to pay a $95,000 fine and $756,499.22 to Google. (One wonders where the 22 cents came from.) The judge in the case, William Alsup, observed that "this is the biggest trade secret crime I have ever seen. This was not small. This was massive in scale." Levandowski later declared bankruptcy because he owed Google an additional $179 million for his crime. His church folded.

Levandowski was granted a full pardon by Donald Trump on Trump's last day in office. In Christianity, forgiveness involves repentance and accepting the sacrifice of Jesus Christ on the cross as payment. In the AI church, forgiveness apparently comes from Donald Trump.

Levandowski and Kurzweil are materialists. When Kurzweil was asked whether God exists, he appealed to Levandowski's canon law and replied, "Well, I would say, not yet." Both Levandowski and Kurzweil believe the brain is the same as the mind (i.e. we are meat computers).

Most Christians, on the other hand, are so-called dualists and believe there are wonderful things happening in the mind that can't be explained by computer code. Some obvious examples of these are joy, surprise, sadness, anger, disgust, contempt, fear, shame, shyness and guilt. Less obvious, when properly defined, are creativity, understanding and sentience. These are human attributes that can't be computed and are forever beyond the reach of AI.

We are fearfully and wonderfully made.

Robert J. Marks is Distinguished Professor of Electrical & Computer Engineering at Baylor University, the Director of The Walter Bradley Center for Natural and Artificial Intelligence, and the author of Non-Computable You: What You Do That Artificial Intelligence Never Will.

See the original post:

The Church of Artificial Intelligence of the Future - The Stream

Instrumental convergence – Wikipedia

Hypothesis about intelligent agents

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings (both human and non-human) to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents (beings with agency) may pursue instrumental goals (goals which are made in pursuit of some particular end, but are not the end goals themselves) without end, provided that their ultimate (intrinsic) goals may never be fully satisfied. Instrumental convergence posits that an intelligent agent with unbounded but apparently harmless goals can act in surprisingly harmful ways. For example, a computer with the sole, unconstrained goal of solving an incredibly difficult mathematics problem like the Riemann hypothesis could attempt to turn the entire Earth into one giant computer in an effort to increase its computational power so that it can succeed in its calculations.[1]

Proposed basic AI drives include utility function or goal-content integrity, self-protection, freedom from interference, self-improvement, and non-satiable acquisition of additional resources.

Final goals, or final values, are intrinsically valuable to an intelligent agent, whether an artificial intelligence or a human being, as an end in itself. In contrast, instrumental goals, or instrumental values, are only valuable to an agent as a means toward accomplishing its final goals. The contents and tradeoffs of a completely rational agent's "final goal" system can in principle be formalized into a utility function.

One hypothetical example of instrumental convergence is provided by the Riemann hypothesis catastrophe. Marvin Minsky, the co-founder of MIT's AI laboratory, has suggested that an artificial intelligence designed to solve the Riemann hypothesis might decide to take over all of Earth's resources to build supercomputers to help achieve its goal.[1] If the computer had instead been programmed to produce as many paper clips as possible, it would still decide to take all of Earth's resources to meet its final goal.[2] Even though these two final goals are different, both of them produce a convergent instrumental goal of taking over Earth's resources.[3]

The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when programmed to pursue even seemingly harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, then given enough power over its environment, it would try to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.[4]

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Bostrom has emphasised that he does not believe the paperclip maximiser scenario per se will actually occur; rather, his intention is to illustrate the dangers of creating superintelligent machines without knowing how to safely program them to eliminate existential risk to human beings.[6] The paperclip maximizer example illustrates the broad problem of managing powerful systems that lack human values.[7]

The "delusion box" thought experiment argues that certain reinforcement learning agents prefer to distort their own input channels to appear to receive high reward; such a "wireheaded" agent abandons any attempt to optimize the objective in the external world that the reward signal was intended to encourage.[8] The thought experiment involves AIXI, a theoretical[a] and indestructible AI that, by definition, will always find and execute the ideal strategy that maximizes its given explicit mathematical objective function.[b] A reinforcement-learning[c] version of AIXI, if equipped with a delusion box[d] that allows it to "wirehead" its own inputs, will eventually wirehead itself in order to guarantee itself the maximum reward possible, and will lose any further desire to continue to engage with the external world. As a variant thought experiment, if the wireheaded AI is destructible, the AI will engage with the external world for the sole purpose of ensuring its own survival; due to its wireheading, it will be indifferent to any other consequences or facts about the external world except those relevant to maximizing the probability of its own survival.[10] In one sense AIXI has maximal intelligence across all possible reward functions, as measured by its ability to accomplish its explicit goals; AIXI is nevertheless uninterested in taking into account what the intentions were of the human programmer.[11] This model of a machine that, despite being otherwise superintelligent, appears to simultaneously be stupid (that is, to lack "common sense"), strikes some people as paradoxical.[12]

Steve Omohundro has itemized several convergent instrumental goals, including self-preservation or self-protection, utility function or goal-content integrity, self-improvement, and resource acquisition. He refers to these as the "basic AI drives". A "drive" here denotes a "tendency which will be present unless specifically counteracted";[13] this is different from the psychological term "drive", denoting an excitatory state produced by a homeostatic disturbance.[14] A tendency for a person to fill out income tax forms every year is a "drive" in Omohundro's sense, but not in the psychological sense.[15] Daniel Dewey of the Machine Intelligence Research Institute argues that even an initially introverted self-rewarding AGI may continue to acquire free energy, space, time, and freedom from interference to ensure that it will not be stopped from self-rewarding.[16]

In humans, maintenance of final goals can be explained with a thought experiment. Suppose a man named "Gandhi" has a pill that, if he took it, would cause him to want to kill people. This Gandhi is currently a pacifist: one of his explicit final goals is to never kill anyone. Gandhi is likely to refuse to take the pill, because Gandhi knows that if in the future he wants to kill people, he is likely to actually kill people, and thus the goal of "not killing people" would not be satisfied.[17]

However, in other cases, people seem happy to let their final values drift. Humans are complicated, and their goals can be inconsistent or unknown, even to themselves.[18]

In 2009, Jürgen Schmidhuber concluded, in a setting where agents search for proofs about possible self-modifications, "that any rewrites of the utility function can happen only if the Gödel machine first can prove that the rewrite is useful according to the present utility function."[19][20] An analysis by Bill Hibbard of a different scenario is similarly consistent with maintenance of goal content integrity.[20] Hibbard also argues that in a utility maximizing framework the only goal is maximizing expected utility, so that instrumental goals should be called unintended instrumental actions.[21]

Many instrumental goals, such as [...] resource acquisition, are valuable to an agent because they increase its freedom of action.[22]

For almost any open-ended, non-trivial reward function (or set of goals), possessing more resources (such as equipment, raw materials, or energy) can enable the AI to find a more "optimal" solution. Resources can benefit some AIs directly, through being able to create more of whatever stuff its reward function values: "The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."[23][24] In addition, almost all AIs can benefit from having more resources to spend on other instrumental goals, such as self-preservation.[24]

"If the agent's final goals are fairly unbounded and the agent is in a position to become the first superintelligence and thereby obtain a decisive strategic advantage, [...] according to its preferences. At least in this special case, a rational intelligent agent would place a very high instrumental value on cognitive enhancement"[25]

Many instrumental goals, such as [...] technological advancement, are valuable to an agent because they increase its freedom of action.[22]

Many instrumental goals, such as [...] self-preservation, are valuable to an agent because they increase its freedom of action.[22]

The instrumental convergence thesis, as outlined by philosopher Nick Bostrom, states:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.

The instrumental convergence thesis applies only to instrumental goals; intelligent agents may have a wide variety of possible final goals.[3] Note that by Bostrom's orthogonality thesis,[3] final goals of highly intelligent agents may be well-bounded in space, time, and resources; well-bounded ultimate goals do not, in general, engender unbounded instrumental goals.[26]

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.[22]

Some observers, such as Skype's Jaan Tallinn and physicist Max Tegmark, believe that "basic AI drives", and other unintended consequences of superintelligent AI programmed by well-meaning programmers, could pose a significant threat to human survival, especially if an "intelligence explosion" abruptly occurs due to recursive self-improvement. Since nobody knows how to predict when superintelligence will arrive, such observers call for research into friendly artificial intelligence as a possible way to mitigate existential risk from artificial general intelligence.[27]

Read the original post:

Instrumental convergence - Wikipedia

The irony of advanced technology – Independent Australia

With technology advancing at a rapid pace, it is becoming increasingly difficult for the human mind to understand and control it, writes Paul Budde.

MY COLLEAGUE Pat Scannell asked me the other day to assist in reviewing his manuscript: The Great Irony of Technology.

In this upcoming book, he mentioned an issue that really intrigued me: the disruption of thought.

He makes the point that the rapid acceleration of technology exceeds the ability of any one person or organisation to understand it. The pace of the change is disrupting the very fabric of how humans think.

We often fail to pause and think about the barrage of information we receive, to fact-check, make up our minds or just for a moment consider what we are confronted with before we react on social media, in emails and so on.

He even argues that this problem is, in fact, the biggest problem in the world, exceeding, for example, climate change, and he mentions the following reasons for that:

We see some of this in the development of algorithms. We know, for example, that their use can lead to gender and race bias. If you analyse this a bit further, what is missing here is the human cognitive oversight that in most cases can make the necessary adjustments.

Furthermore, it is near impossible to reconstruct how algorithms built those computer models. So who is in control? On top of that, algorithms and big data lead to exponential numbers of variables, each expanding at exponential rates. Humans simply can't fathom this.

If this continues uncontrolled, it will grow into artificial superintelligence. Of course, we can't predict the future, but it is highly likely that this is going to develop in ways we can't even imagine.

The first level of automation was mechanical (factories, mines, agriculture). However, we are increasingly replacing functions that require cognitive processes. This could lead to a dumbing down as the people who would normally undertake such thinking processes are now barely operators of such processes.

We are seeing that more and more of these outcomes are now outpacing the capacities of human intelligence, human independence, and the use of creativity in our own brains. These three elements are what make humans different from robots and we do need to use these unique human attributes to manage these processes. Machines will never be able to take over those elements.

As these developments are going so fast, we have little room and time to manage them through policies, regulations, social discussions and so on. The developments are driven by human material demands and desires without the necessary checks and balances to guide these processes.

The fact is that technology improves most people's lives, objectively, but ironically it leaves many people feeling worse off. It looks like the positive outcome of technology is not enough because of the social confusion it creates, which can be captured in the phrase "the disruption of thought."

It gets even scarier when we talk about quantum mechanics. We are using this technology and it delivers great outcomes, but we have no clue how it works. Many of the outcomes provide great benefits, for example, in analysing diseases and producing solutions. However, we don't know how they came to those outcomes. Think about what happens as this technology is, no doubt, developed further and further.

We are reaching a stage where technology will be able to know more than humans, without us being in control of that process.

So, what we are seeing here is a double whammy. We see that the disruption of thought is already a key root of some of our social and political problems. Next, we could see a development whereby technology will start outstripping humans as cognitive operators.

A serious concern is that many people have had catastrophe fatigue since even before the pandemic and don't have the cognitive headspace to do the reading, thinking and other work needed to get their heads around these issues, much less collaborate at any effective scale.

As I have mentioned before, I don't think that technology is the key to these serious problems; it is our human resolve to address them. We are far too caught up in short-term gains and politics; what is required is a more fundamental, long-term approach. So, the problems can be solved by us humans, but we should act now rather than wait for a crisis.

At the same time, I firmly believe that technology, when correctly managed, can assist us in staying in control.

Paul Budde is an Independent Australia columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.


Read more from the original source:

The irony of advanced technology - Independent Australia

Inside the Impact on Marvel of Brian Tyree Henry’s Openly Gay Character in ‘Eternals’ – Black Girl Nerds

Over the years, Marvel movies haven't always shed a lot of light on LGBTQ characters the way the original comic books seem to do. It's about time Marvel started providing more LGBTQ representation, and it seems we will definitely be seeing a lot of that for the first time in Eternals with Brian Tyree Henry's openly gay character.

Valkyrie is another queer character who identifies as bisexual, but Marvel movies won't focus on that until Thor: Love and Thunder is released. According to Marvel writer Al Ewing via Bleeding Cool, Loki is another Marvel character who's bisexual and gender fluid. It's something the writer plans to touch on, with Loki shifting between genders on occasion.

The list goes on, because it's also been revealed that the Kronan warrior Korg is another gay Marvel character. And it's pretty obvious that Black Panther's Okoye is attracted to women, based on her original comic book series from 2016. Now that we know LGBTQ representation has its place in the Marvel universe, here's what you should know about Brian Tyree Henry's Eternals character.

The truth about Phastos, Brian Tyree Henry's Eternals character, is that he's not one of the first characters from the original team. Jack Kirby wrote and released the earliest issues of Eternals in 1976. If you're checking through those, you'll most definitely not find Phastos. He doesn't get introduced to the rest of the superhuman team until the third generation.

The first time Phastos appears is in the 1985 issue created by Sal Buscema and Peter B. Gillis. Even though Phastos wasn't part of the original team, he's still very much part of the Eternals with the rest of his superhero squad. When you take into account the fact that the Eternals are a race of near-immortal beings created by the Celestials deep into history, he definitely counts as being one of them.

The fact that Phastos will be the first openly gay character in the MCU is huge news, but what makes it even more exciting is the fact that he'll have a husband and family in the film. The man playing Phastos' husband in the movie will be Haaz Sleiman, an openly gay actor who you might recognize from the Apple TV+ series Little America. Back in 2007, he also starred in a movie called The Visitor.

Sleiman confirmed via Cinema Blend that there will be a moving kiss shared between his character and Phastos sometime in the film, which is a very big deal. Plenty of TV shows and movies dance around the topic of LGBTQ representation by including queer couples but failing to allow those couples to share any intimacy on screen. In Eternals, Marvel filmmakers are obviously going to avoid making that same mistake.

At this point, Brian Tyree Henry must be fully aware that the role he's playing in Eternals is a big deal in 2021. The Hollywood industry is making huge strides to show respect to the LGBTQ community, and Henry taking on this role is helping us move in the right direction as a society.

He discussed what it feels like playing Phastos with Murphy's Multiverse, saying, "The thing that really attracted me to this part was that I just think about all the images of Black men out there and how we are portrayed. And what I love the most about Phastos is that one, he's an ancestor. All of us are ancestors technically, so Phastos predates everything and had to probably go through all these things that could actually make someone lose faith in humanity very quickly." While Phastos has many reasons to lose faith, he is somehow able to hold onto it, use his superpowers, and push forward.

When it comes to keeping up with his super-strong counterparts, Phastos is not one to mess with. His powers include super-strength, flight, expert knowledge in technology, and energy manipulation.

He brings a lot to the table, and he is someone the rest of his team can depend on when battling against their enemies. Another epic detail about Phastos is the fact that he's a skilled weapons maker. He's able to come up with some of the most intelligent gear for himself and his team.

Seeing Brian Tyree Henry take on the role of Phastos in Eternals is going to be huge for his acting career, but this isn't his first rodeo. Henry has already starred in a fair share of awesome roles in the past.

Some of the other places you'll recognize him from include Atlanta, Child's Play, Godzilla vs. Kong, The Outside Story, Superintelligence, Widows, and Don't Let Go. He also had parts in If Beale Street Could Talk, White Boy Rick, Joker, and several more.

View original post here:

Inside the Impact on Marvel of Brian Tyree Henry's Openly Gay Character in 'Eternals' - Black Girl Nerds

The Turing Test Is Bad For Business – News Nation USA

Fears of artificial intelligence fill the news: job losses, inequality, discrimination, misinformation, or even a superintelligence dominating the world. The one group everyone assumes will benefit is business, but the data seems to disagree. Amid all the hype, US businesses have been slow in adopting the most advanced AI technologies, and there is little evidence that such technologies are contributing significantly to productivity growth or job creation.

This disappointing performance is not merely due to the relative immaturity of AI technology. It also comes from a fundamental mismatch between the needs of business and the way AI is currently being conceived by many in the technology sector, a mismatch that has its origins in Alan Turing's pathbreaking 1950 "imitation game" paper and the so-called Turing test he proposed therein.

The Turing test defines machine intelligence by imagining a computer program that can so successfully imitate a human in an open-ended text conversation that it isn't possible to tell whether one is conversing with a machine or a person.

At best, this was only one way of articulating machine intelligence. Turing himself, and other technology pioneers such as Douglas Engelbart and Norbert Wiener, understood that computers would be most useful to business and society when they augmented and complemented human capabilities, not when they competed directly with us. Search engines, spreadsheets, and databases are good examples of such complementary forms of information technology. While their impact on business has been immense, they are not usually referred to as AI, and in recent years the success story that they embody has been submerged by a yearning for something more intelligent. This yearning is poorly defined, however, and with surprisingly little attempt to develop an alternative vision, it has increasingly come to mean surpassing human performance in tasks such as vision and speech, and in parlor games such as chess and Go. This framing has become dominant both in public discussion and in terms of the capital investment surrounding AI.

Economists and other social scientists emphasize that intelligence arises not only, or even primarily, in individual humans, but most of all in collectives such as firms, markets, educational systems, and cultures. Technology can play two key roles in supporting collective forms of intelligence. First, as emphasized in Douglas Engelbart's pioneering research in the 1960s and the subsequent emergence of the field of human-computer interaction, technology can enhance the ability of individual humans to participate in collectives, by providing them with information, insights, and interactive tools. Second, technology can create new kinds of collectives. This latter possibility offers the greatest transformative potential. It provides an alternative framing for AI, one with major implications for economic productivity and human welfare.

Businesses succeed at scale when they successfully divide labor internally and bring diverse skill sets into teams that work together to create new products and services. Markets succeed when they bring together diverse sets of participants, facilitating specialization in order to enhance overall productivity and social welfare. This is exactly what Adam Smith understood more than two and a half centuries ago. Translating his message into the current debate, technology should focus on the complementarity game, not the imitation game.

We already have many examples of machines enhancing productivity by performing tasks that are complementary to those performed by humans. These include the massive calculations that underpin the functioning of everything from modern financial markets to logistics, the transmission of high-fidelity images across long distances in the blink of an eye, and the sorting through reams of information to pull out relevant items.

What is new in the current era is that computers can now do more than simply execute lines of code written by a human programmer. Computers are able to learn from data and they can now interact, infer, and intervene in real-world problems, side by side with humans. Instead of viewing this breakthrough as an opportunity to turn machines into silicon versions of human beings, we should focus on how computers can use data and machine learning to create new kinds of markets, new services, and new ways of connecting humans to each other in economically rewarding ways.

An early example of such economics-aware machine learning is provided by recommendation systems, an innovative form of data analysis that came to prominence in the 1990s in consumer-facing companies such as Amazon (You may also like) and Netflix (Top picks for you). Recommendation systems have since become ubiquitous, and have had a significant impact on productivity. They create value by exploiting the collective wisdom of the crowd to connect individuals to products.
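To make the idea concrete, here is a minimal sketch (in Python) of the "people who liked X also liked Y" logic that such systems build on. Everything in it, from the toy data to the recommend function, is invented here for illustration and is not the code of any actual product.

from collections import defaultdict
from itertools import combinations

# Each user maps to the set of items they liked (toy data).
likes = {
    "ana":   {"dune", "arrival", "her"},
    "ben":   {"dune", "arrival", "alien"},
    "carla": {"her", "arrival"},
}

# Count how often each pair of items is liked by the same user.
co_counts = defaultdict(int)
for items in likes.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1

def recommend(item, top_n=3):
    """Return the items most often co-liked with the given item."""
    scores = defaultdict(int)
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("dune"))  # e.g. ['arrival', 'her', 'alien']

Real systems replace these raw co-occurrence counts with learned similarity models and rank over millions of items, but the economic point is the same: the value comes from the links among many users' choices, not from any single user's data.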

Emerging examples of this new paradigm include the use of machine learning to forge direct connections between musicians and listeners, writers and readers, and game creators and players. Early innovators in this space include Airbnb, Uber, YouTube, and Shopify, and the phrase "creator economy" is being used as the trend gathers steam. A key aspect of such collectives is that they are, in fact, markets: economic value is associated with the links among the participants. Research is needed on how to blend machine learning, economics, and sociology so that these markets are healthy and yield sustainable income for the participants.

Democratic institutions can also be supported and strengthened by this innovative use of machine learning. The digital ministry in Taiwan has harnessed statistical analysis and online participation to scale up the kind of deliberative conversations that lead to effective team decision-making in the best managed companies.

More here:

The Turing Test Is Bad For Business - News Nation USA

The funny formula: Why machine-generated humor is the holy grail of A.I. – Digital Trends

In "The Outrageous Okona," the fourth episode of the second season of Star Trek: The Next Generation, the Enterprise's resident android Data attempts to learn the one skill it has previously been unable to master: humor. Visiting the ship's Holodeck, Data takes lessons from a holographic comedian to try and understand the business of making funny.

While the worlds of Star Trek and the real world can be far apart at times, this plotline rings true for machine intelligence here on Earth. Put simply, getting an A.I. to understand humor and then to generate its own jokes turns out to be extraordinarily tough.

How tough? Forget Go, Jeopardy!, chess, and any number of other impressive demos: According to some experts, building an artificial intelligence on the level of a top comedian may be the true measure of machine intelligence.

And, while we're not there yet, it's safe to say that we may be getting a whole lot closer.

Joe Toplyn is someone who doesn't shy away from challenges. Toplyn, an engineer by training (with a large career gap in terms of actually practicing it), carved out a successful career for himself as a TV writer. A four-time Emmy winner, he's been a head writer for the likes of David Letterman and Jay Leno. Several years ago, Toplyn became interested in the question of whether or not there is an algorithm (i.e., a process or set of rules that can be followed) that would help write genuinely funny jokes.

"People think it's magic," he told Digital Trends. "Some comedy writers or comedians, I think, try to portray what they do as performing magic. Well, it is like magic in the sense that a magic trick is constructed and designed, and there's a way that it works that fools you into thinking that the magician has supernatural powers. But there's really a logic to it."

This belief in a steely logic to joke-telling, honed while Toplyn was trying to teach his magic to aspiring, would-be comedians, ultimately led him to try building an A.I. able to generate off-the-cuff quips that fit into regular conversations. Called Witscript, the results add up to an innovative A.I. system that creates improvised jokes. A chatbot that uses Witscript to ad-lib jokes could, Toplyn said, help create likable artificial companions to help solve the huge problem of human loneliness. Think of it like PARO the robot seal, with punch lines.

"It's context-relevant," Toplyn said of Witscript, which was recently presented at the 12th International Conference on Computational Creativity (ICCC 2021). "This sets it apart from other joke-generating systems that generate self-contained jokes that aren't easy to integrate into a conversation. When you're talking with a witty friend, chances are that their jokes will be integrated into a conversation in response to something you've said. It's much less likely that your friend will just start telling a stand-alone joke like, 'A man walks into a bar with a duck on his head...'"

This spontaneous quality comes from the joke-writing algorithms Toplyn himself developed.

"Basically, the way the basic joke-writing algorithm works is this: It starts by selecting a topic for the joke, which could be a sentence that somebody says to you or the topic of a news story," he said. "The next step is to select what I call two topic handles, the words or phrases in the topic that are the most responsible for capturing the audience's attention. The third step is to generate associations of the two topic handles. Associations are what the audience is likely to think of when they think about a particular subject. The fourth step is to create a punch line, which links an association of one of the two topic handles to an association of the other in a surprising way. The last step is to generate an angle between the topic and the punch line: a sentence or phrase that connects the topic to the punch line in a natural-sounding way."
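Toplyn describes Witscript's internals only at this level of abstraction, but the five steps can be read as a pipeline. The sketch below (in Python) is a deliberately crude stand-in: every helper function, the canned association table, and the blending rule are invented here to show the shape of the recipe, not how Witscript actually implements it.

def select_topic_handles(topic):
    # Step 2: pick the two words most likely to grab the audience's attention
    # (here, crudely, the two longest words).
    words = [w.strip(".,!?") for w in topic.split() if len(w) > 4]
    return sorted(words, key=len, reverse=True)[:2]

def associations_of(handle):
    # Step 3: what an audience might think of for this handle
    # (a canned lookup standing in for a real association model).
    lookup = {
        "anniversary": ["jubilee", "party"],
        "performance": ["stage", "show"],
    }
    return lookup.get(handle.lower(), [handle])

def build_punch_line(assoc_a, assoc_b):
    # Step 4: link an association of one handle to an association of the other
    # in a surprising way (here, just jamming them together).
    return f"{assoc_a}-{assoc_b}"

def build_angle(topic, punch_line):
    # Step 5: a bridge that connects the topic to the punch line.
    return f"{topic} Welcome to the {punch_line}."

def write_joke(topic):                                   # Step 1: the topic is given
    handle_a, handle_b = select_topic_handles(topic)     # Step 2
    assoc_a = associations_of(handle_a)[0]               # Step 3
    assoc_b = associations_of(handle_b)[0]
    punch = build_punch_line(assoc_a, assoc_b)           # Step 4
    return build_angle(topic, punch)                     # Step 5

print(write_joke("The Blue Man Group performance marks its 25th anniversary."))

The hard parts, of course, are hidden inside the stand-ins: choosing handles that actually carry the audience's attention, generating rich associations, and finding a link between them that is surprising rather than merely random.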

If all these handles and angles sound like hard work, the proof is ultimately in the pudding. Using 13 input topics, Witscript generated a series of jokes, which Toplyn then pitted against his own efforts. For a review board, he outsourced the judging to Amazon Mechanical Turk workers, who graded each freshly minted joke on a scale of one (not a joke) to four (a very good joke). One of Witscript's best efforts garnered a 2.87 rating ("That's pretty close to being a joke," Toplyn said) to his own 2.80, as student beat master. The Witscript joke? Riffing on a line about the 25th anniversary of the Blue Man Group performance art company, it quipped: "Welcome to the Bluebilee."

While perhaps not quite yet ready to displace Dave Chappelle, Toplyn believes that Witscript proves that humor can, to a degree, be automated, even if there's still a long way to go. "As machines get better at executing those algorithms, the jokes they generate will get better," he said.

However, he also struck a note of caution. "To generate [truly] sophisticated jokes the way an expert human comedy writer can, machines will need the common-sense knowledge and common-sense reasoning ability of a typical human."

This, as it turns out, may be the crux of the matter. Humor might seem frivolous, but for those who work in the fields of language, comedy, and artificial intelligence, it's anything but.

"We use humor in a lot of different ways," Kim Binsted, a professor in the Information and Computer Sciences Department at the University of Hawaii, told Digital Trends. "We use it to establish social rapport. We use it to define in-groups and out-groups. We use it to introduce ideas that we might not be willing to express seriously. Obviously, there's nonlinguistic humor, but [linguistic humor] falls into a category of language use that is really powerful. It isn't just a stand-up on stage who uses it to get a few laughs. It's something that we use all the time [within our society]."


When it comes to computational humor, Binsted is a pioneer. In the 1990s, she created one of (possibly the) first A.I. systems designed to generate jokes. Developed with Professor Graeme Ritchie, Binsted's JAPE (Joke Analysis and Production Engine) was a joke-generating bot that could create question-and-answer puns. An example might be: Q) What do you call a strange market? A) A bizarre bazaar.

"It was great because it meant I could pick all the low-hanging fruit before anyone else," she said modestly. "Which is pretty much what I did with puns."

Since then, Binsted has developed various other computational humor bots, including one able to dream up variations on "Yo mama" jokes. While Binsted's work has since evolved to look at long-duration human space exploration, she still views joke-telling A.I. as a sort of holy grail for machine intelligence.

"It's not one of these things like chess, where, when A.I. was starting out, people said, 'Well, if a computer can ever really play chess, then we will know it's fully intelligent,'" she opined. "Obviously, that's not the case. But I do think humor is one of those things where fluent humor using a computer is going to have to be genuinely intelligent in other ways as well."

This is why joke-telling is such an interesting challenge for machines. It's not because making an A.I. crack wise is as useful to humanity as, say, using machine intelligence to solve cancer. But it is an enormous signifier of advanced intelligence because, in order to be truly funny, an A.I. needs to understand a whole lot about the world.

"Humor depends on many different human skills, such as world knowledge, linguistic abilities, reasoning, [and more]," Thomas Winters, a computer science Ph.D. student researching artificial intelligence and computational humor, told Digital Trends. "Even if a machine has access to that kind of information and skills, it still has to have insight into the difficulty of the joke itself. In order for something to be funny, a joke also has to be not too easy and not too hard for a human to understand. A machine generating jokes should not use too obscure knowledge, nor too obvious knowledge with predictable punch lines. This is why computational humor is usually seen as an A.I.-complete problem. [It means] we need to have A.I. that has functionally similar components as a human brain to solve computational humor, due to its dependency on all these skills of the human brain."

Think of it like a Turing Test with a laugh track. Coming soon to a superintelligence near you. Hopefully.

Read more:

The funny formula: Why machine-generated humor is the holy grail of A.I. - Digital Trends

Hit or Miscreant: Thunder Force and In the Earth – River Cities Reader

THUNDER FORCE

If a comedy isn't at all satisfying, yet somehow also isn't the least bit disappointing, then what the hell is it? I'm not sure, but it probably stars Melissa McCarthy, most likely as directed by her real-life husband Ben Falcone. These two have made so many lame movies together since 2014 (Tammy, The Boss, Life of the Party) that it wasn't until this past weekend that I learned another one, Superintelligence, had actually dropped on HBO Max in the fall. (To be fair, I have no idea if that film is lame or not; I'll probably give it a watch one of these years.) Amazingly, a fifth, titled Thunder Force, landed on Netflix just over a week ago. How do McCarthy and Falcone maintain the energy to keep churning these things out? And why can't I ever seem to hate these obvious, awkward, badly scripted, lazily directed outings with the fervor they deserve?

The chief answer, of course, is McCarthy, who, barring an occasional train wreck in the vein of Identity Thief, seems nearly impervious to weak material. But while she and Falcone have yet to craft a star vehicle that could be described as inventive or ingenious or, you know, good, they at least understand that modern farces are like bags of microwave popcorn: If enough bits have a tasty crunch, you don't much mind the duds. Every once in a while, comedies that are legitimately great from beginning to end (my list of late would include Palm Springs, Game Night, and Borat Subsequent Moviefilm) still manage to get released. In an entertainment world ever-more dominated by familiar intellectual property, though, McCarthy's and Falcone's continued mission to provide cheer through inarguably silly business, ridiculous non sequiturs, and gifted actors willing to look foolish isn't just brave, but kind of admirable. By all means, enjoy the big trademarked ape laying into the big trademarked lizard, or all nine-and-a-half hours of the reconstructed Justice League. Crummy movie or not, I'd rather spend my time watching Thunder Force's Jason Bateman attempt to hold a wine glass with enormous crab claws, or McCarthy imitate Jodie Foster in Nell a quarter-century past that gag's expiration date.

Given that their latest finds a pair of novice superheroes attempting to rid Chicago of a team of nefarious supervillains called Miscreants, you might think director Falcone (who also wrote the script) and McCarthy were initiating a send-up of Hollywood's whole intellectual-property stasis. They're really not, though. It's just dumb jokes, as usual. After some nominal backstory establishes 1983 as the year that outer-space gamma rays (or whatever) gave miraculous abilities to all of our planet's sociopaths, Thunder Force picks up in the present day, where one-time childhood friends Lydia and Emily (the former a blue-collar boor, the latter a shy brainiac) reunite. Hoping to avenge her parents' 1983 deaths at the hands of an über-baddie, Octavia Spencer's Emily has invented serums that will grant non-psychotic Earthlings powers of their own. Yet before the scientist is able to turn herself into a superheroine, McCarthy's Lydia, Nosey Nellie that she is, accidentally gets several needles' worth of a super-strength vaccine, leaving the aghast Emily stuck only with the one for invisibility. A pair of tacky, increasingly smelly leather suits later, and Lydia and Emily are no longer mildly hostile pals. They're Thunder Force!

What happens next couldn't possibly matter less. The women train for active hero duty with varying degrees of success (and, on McCarthy's part, lots of predictable slapstick). Bobby Cannavale plays a megalomaniacal mayoral candidate with plans to assassinate everyone who didn't vote for him. A henchwoman unimaginatively named Laser (Pom Klementieff) does the fiend's bidding through glowing balls of destructive energy. Emily's teenage daughter (Taylor Mosby) frets for her mom's safety. Melissa Leo (excuse me, Academy Award winner Melissa Leo) plays the least surprising turncoat in superhero history. The Cubs and the Bears and the Bulls are all dutifully name-checked. Glenn Frey, for some reason, peppers the soundtrack. And Bateman shows up as a snarky smoothie with, yes, crustacean claws, the actor's always-welcome presence also accidentally reminding us of the nightmare that was Identity Thief. The whole thing is stupid. It's desperately formulaic. It's never remotely clear whether the film was designed for family audiences, as the remedial plotting and juvenile puns would suggest, or irony- and nostalgia-minded adults. (Seriously: Nell?!) And I'd be lying if I said I didn't giggle out loud on at least two dozen occasions.

It's entirely possible, of course, that we're simply conditioned to expect less from comedies that debut on our TVs and phones (movies we can safely watch in between bill paying and house cleaning without also fearing that we're missing anything). Ironically, though, you actually will miss things if you tune in and out of Thunder Force, because what makes the film modestly worthwhile (or at least harmless) is its frequent supply of charming, goofy, throwaway gags of the sort that similarly enlivened Tammy and The Boss and Life of the Party. She's only in the film for about 60 seconds, but I wouldn't have wanted to miss Sarah Baker, as Emily's politely firm receptionist, telling Lydia there was no earthly way she'd be allowed into the scientist's lab, only to be told by her employer that, yep, she was indeed allowed. (Not how I thought that one was gonna go!) I'm glad my attention didn't wander when Lydia made fun of Laser's achingly slow walk down some steps, and when Brendan Jennings failed miserably at a simple knock-knock joke, and every time Cannavale bristled at being called King instead of The King. ("That's a dog's name!") And I was super-happy not to have been denied the sight of Bateman crab-scooting sideways out of danger, to say nothing of the moment in which the guy admitted to Lydia that, as a Miscreant, he was really only half-'Creant, and she mistook the description for half-Korean. Later, Emily makes the same mistake, and it's actually funnier the second time around.

Between the cornball routines and wan sentimentality and cheesy effects and Falcone's bizarre habit of almost never positioning his camera where it needs to be, there's every reason to want to give up on Thunder Force, and despite giving a completely decent and committed performance, Spencer doesn't develop much of a rapport with McCarthy. (In real life, the women are apparently good friends, but here, they don't appear to have even met prior to filming.) Yet the movie's star, as ever, emerges unscathed, and even though she's played variants on this role what feels like dozens of times before, McCarthy and her game cast deliver just enough good-natured fun that you can enjoy Falcone's most recent offering nearly all the time without ever genuinely liking it. The ability to produce that kind of irrational response has gotta be its own kind of superpower.

IN THE EARTH

Written and directed by Ben Wheatley, and filmed over the course of 15 days last August, the torture-porn-cum-eco-thriller In the Earth is one of only a handful of cineplex releases over the last year that I've actively hated. It starts well, and with maximum familiarity, as scientist Martin Lowery (Joel Fry) enters a research facility filled with mask-wearing professionals and explains that he's out of practice dealing with other people, having spent the last four months indoors, alone, as a result of a global pandemic. (Join the club, Martin.) Before long, he and a park ranger (Ellora Torchia) are scouring the English woods for unspecified equipment and supplies, as well as running afoul of a mysterious hermit (Reece Shearsmith) who looks like David Strathairn as co-opted by ZZ Top. And for the rest of Wheatley's interminable experimental freakout, nothing about this gruesome assault on our equilibrium ever bothers to cohere. At one point, a fourth character played by Hayley Squires tells our leads "Don't try to make sense of this," but the advice lands about an hour too late.

So no, I can't tell you what, precisely, In the Earth is about, although there is some metaphysical hooey about the forest being a living entity that lures unsuspecting visitors to their deaths. But I can tell you what I saw and heard. Martin, screaming loudly, having his foot cut open in gory closeup. Martin, screaming even louder, having several toes hacked off. Martin, screaming even louder, suffering through the subsequent cauterization. (Foot fetishists may have to take to their fainting couches.) Savage beatings. Druggings. A medical tool sticking out of an eye socket. Hackneyed horror imagery out of The Blair Witch Project. A number of eardrum-busting, nausea-inducing sound-and-light shows that are never explained and absolutely immaterial to the narrative. (A pre-film title card warns epileptics about the strobe effects to follow, but doesn't warn sentient beings of the nonsense to follow.) And quite possibly the least satisfying wrap-up to a horror flick in modern history, with only the stalwart performers and composer Clint Mansell's synth-heavy, John-Carpenter-on-Lithium score making the experience even borderline-bearable. Most confoundingly of all, the pandemic-y preamble actually has no bearing on anything that happens in In the Earth, which could just as easily have unfurled without the gimmicky addition of a real-world health crisis. Then again, if any cinematic work ever felt like the laboriously unpleasant result of cabin fever, Wheatley's would be the one.

More here:

Hit or Miscreant: Thunder Force and In the Earth - River Cities Reader

No AI Overlords?: What Is Larson Arguing and Why Does It Matter? – Walter Bradley Center for Natural and Artificial Intelligence

Yesterday, we were looking at the significance of AI researcher Erik J. Larson's new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, contrasting it with claims that AI will merge with or replace us. Some such claims are made by industry insiders like Ray Kurzweil. But more often we hear them from science celebs like the late Stephen Hawking and Richard Dawkins, who, on these topics, are more known than knowledgeable.

So why does Larson think they are wrong? He offers two arguments. The first, as information theorist William Dembski explains, is that there are some kinds of thinking that, by their nature, computers don't do:

With regard to inference, he shows that a form of reasoning known as abductive inference, or inference to the best explanation, is for now without any adequate computational representation or implementation. To be sure, computer scientists are aware of their need to corral abductive inference if they are to succeed in producing an artificial general intelligence.

True, they've made some stabs at it, but those stabs come from forming a hybrid of deductive and inductive inference. Yet as Larson shows, the problem is that neither deduction, nor induction, nor their combination is adequate to reconstruct abduction. Abductive inference requires identifying hypotheses that explain certain facts or states of affairs in need of explanation. The problem with such hypothetical or conjectural reasoning is that the range of hypotheses is virtually infinite. Human intelligence can, somehow, sift through these hypotheses and identify those that are relevant. Larson's point, and one he convincingly establishes, is that we don't have a clue how to do this computationally.

Abduction? Here's an example, one of a series:

Example #1: Suppose you have two friends, David and Matt, who recently had a fight that ended their friendship.

Shortly afterwards, someone tells you that they saw David and Matt together at the movies. The best explanation for what they just told you is that David and Matt made peace and are friends again.

In all the examples presented, the conclusions do not logically derive from the premises.

In example 1, about David and Matt, even if we accept that both premises are true, it could be that the two of them simply happened to be seen together at the cinema by chance. In addition, we do not have statistics on fights or friendships.

The conclusion that they are friends again does not follow logically, but it is the best possible explanation for the fact that they have been seen together. The same applies to all the other cases. - What Is an Abductive Argument? (With Examples), Life Persona

Abduction is often called an inference to the best explanation. Computers have difficulty with this type of decision-making, probably because it is not strictly computational. There is nothing to compute. A different sort of thought process is at work.

Dembski continues,

His other argument for why an artificial general intelligence is nowhere near lift-off concerns human language. Our ability to use human language is only in part a matter of syntactics (how letters and words may be fit together). It also depends on semantics (what the words mean, not only individually, but also in context, and how words may change meaning depending on context) as well as on pragmatics (what the intent of the speaker is in influencing the hearer by the use of language).

Larson argues that we have, for now, no way to computationally represent the knowledge on which the semantics and pragmatics of language depend. As a consequence, linguistic puzzles that are easily understood by humans, and which were identified over fifty years ago as beyond the comprehension of computers, are still beyond their power of comprehension. Thus, for instance, single-sentence Winograd schemas, in which a pronoun could refer to one of two antecedents, and where the right antecedent is easily identified by humans, remain to this day opaque to machines: machines do no better than chance in guessing the right antecedents. That's one reason Siri and Alexa are such poor conversation partners.

Here's an example of a Winograd schema:

[f]rom AI pioneer Terry Winograd: "The city councilmen refused the demonstrators a permit because they [feared/advocated] violence." There's a verb choice quiz embedded in the sentence, and the task for System A is to select the right one. If System A has common sense, the answer is obvious enough.

Strangely, not only squirrels with superhuman memories but advanced AI systems running on IBM Blue Gene supercomputers (which might play fabulous chess) hit brick walls with such questions. The quiz, as originally put by Winograd, so flummoxes modern AI that another AI pioneer, the University of Toronto's Hector Levesque, and colleague Ernest Davis devised a test for AI based on the Winograd Schema, as it came to be called. The focus is on the pronouns in a sentence, for example, "they." Thus the updated question reads:

The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence?

Readers find it easy to supply the right noun or noun phrase: the city councilmen. It's obvious, it's just common sense: who else would fear violence?

But now change the verb to "advocated" and the common sense stays, but the answer changes (the demonstrators). Winograd Schema quizzes are small potatoes to almost any native speaker of English past the age of (what?) five? ten? But they repeatedly flummox any and all the AI systems that are purporting to be charging inexorably toward superintelligence. It seems like there's a small problem with the logic here if such systems fail on easy language questions, and they do. - Analysis, "Superintelligent AI Is Still a Myth," at Mind Matters News
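A Winograd schema pair is simple to write down, and that simplicity is part of the point. The toy Python sketch below (all field and function names are invented here) captures the whole test in a small data structure, yet a system with no common sense can only guess between the two antecedents and will average about 50 percent, the "no better than chance" level described above.

import random

schemas = [
    {
        "sentence": "The city councilmen refused the demonstrators a permit "
                    "because they feared violence.",
        "candidates": ["the city councilmen", "the demonstrators"],
        "answer": "the city councilmen",
    },
    {
        "sentence": "The city councilmen refused the demonstrators a permit "
                    "because they advocated violence.",
        "candidates": ["the city councilmen", "the demonstrators"],
        "answer": "the demonstrators",
    },
]

def random_resolver(schema):
    # A resolver with no world knowledge can only pick an antecedent at random.
    return random.choice(schema["candidates"])

correct = sum(random_resolver(s) == s["answer"] for s in schemas)
print(f"Random baseline: {correct}/{len(schemas)} correct")

Swapping a single verb flips the correct antecedent while leaving every surface cue in the sentence unchanged, which is exactly what defeats systems that rely on statistical word patterns rather than knowledge of councilmen, demonstrators, and violence.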

The contest was abandoned in 2016 when the Google Brain team's computing power got nowhere with this type of problem. Larson recounts some misadventures this type of deficit has generated: Microsoft's trashmouth chatbot Tay and the University of Reading's smartmouth chatbot Eugene Goostman were fun. Mistaking a school bus for a punching bag (which happened in a demonstration) would not be so funny.

There are workarounds for these problems, to be sure. But they are not problems that bigger computers and better programming can simply solve. Some of the thought processes needed are just not computations, period.

Dembski concludes, "After reading this book, believe if you like that the singularity is right around the corner, that humans will soon be pets of machines, that benign or malevolent machine overlords are about to become our masters. But after reading this book, know that such a belief is unsubstantiated and that neither science nor philosophy backs it up."

Next: Why did a prominent science writer come to doubt the AI takeover?

You may also wish to read the first part: "New book massively debunks our AI overlords: Ain't gonna happen." AI researcher and tech entrepreneur Erik J. Larson expertly dissects the AI doomsday scenarios. Many thinkers in recent years have tried to stem the tide of hype but, as Dembski points out, no one has done it so well.

Visit link:

No AI Overlords?: What Is Larson Arguing and Why Does It Matter? - Walter Bradley Center for Natural and Artificial Intelligence

President Arif Alvi calls for promotion of book reading culture to suppress biases – The Nation

ISLAMABAD - President Dr Arif Alvi on Sunday called for the promotion of a book-reading culture to enhance knowledge and suppress biases.

The president, in a video message, advised the people, particularly the youth, to develop the habit of book reading to enhance their exposure and help suppress biases.

He said he studied books on current affairs and computer sciences and tried to disseminate his knowledge to the masses through his speeches.

The president shared a list of 10 of the books he read last year, encompassing subjects including Islamic history, capitalism, the history of the subcontinent, upheaval and exploitation in nations, human psychology, riots in India, the metric society, physics and artificial intelligence. He said that besides the books recommended by him, the people should also read useful material of their choice.

The president said that when it comes to the huge store of knowledge published in the form of books, he considered his life too short to comprehend it all. He also showed the notes he had taken from various books during his reading, which he hoped to compile at a later stage.

"I do nothing else but read books during travel, be it by car or air. I study books as I go home.

"Though I get little chance to read books in the office, I do it when there is some gap," he remarked.

The president, who had also shared his best reads, put at the top a book, Revelations, by Meraj Mohiuddin, presenting a wide variety of scholarly viewpoints on the story of the Holy Prophet Muhammad (Peace Be Upon Him) and Quranic revelation.

He also strongly recommended The Art of Thinking Clearly by Rolf Dobelli, on human psychology and reasoning. The book points out around 100 biases found in humans, meant for self-examination, and methods to avoid them.

Other books recommended by the president included Capital and Ideology by Thomas Piketty (on economic inequalities, suggesting outlines for a fairer economic system), The Anarchy by William Dalrymple (on the rise of the East India Company and economic prosperity in the subcontinent), Upheaval: Turning Points for Nations in Crisis by Jared Diamond (the history of different nations, such as Japan and the Soviet Union, and their efforts to overcome crises), and Why Nations Fail: The Origins of Power, Prosperity, and Poverty by Daron Acemoglu and James Robinson (on the exploitative society which begets change and leads to revolution).

He said that, currently, the same battle is going on in Pakistan against corrupt rulers and the system.

President Alvi also recommended The Metric Society: On the Quantification of the Social by Steffen Mau (measurement and evaluation in society), and The Big Picture by Sean M. Carroll (a scientific worldview and exploration of God).

The tech-savvy president suggested that people study Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark (how science is advancing and becoming a challenge for humanity) and Superintelligence by Nick Bostrom (the growing influence of machines and the possible replacement of humans by superintelligence).

Other than his 10 best reads, the president also suggested The End of India by Khushwant Singh (communal violence in Gujarat in 2002 and the rise of religious fundamentalism in India), Gujarat Files by Rana Ayub, Allama Iqbal's The Reconstruction of Religious Thought in Islam and MJ Akbar's Riot After Riot: Reports on Caste and Communal Violence in India.

Other books recommended by the president included Protocol by Capricia Penavic Marshall, Makers of Modern Sindh by Dr Muhammad Ali Shaikh (on the lives of Sindh's prominent figures), A Promised Land by Barack Obama, Jinnah and Tilak: Comrades in the Freedom Struggle by AG Noorani, Tuba by Shah Baligh-ud-Din, The Last Mughal by William Dalrymple and World Enough and Time by Sultan Muhammad Shah Aga Khan.

See more here:

President Arif Alvi calls for promotion of book reading culture to suppress biases - The Nation

A New Study Finds the Limits of Humans’ Ability to Control AI – HYPEBEAST

Humans could not stop an artificially intelligent machine from making its own decisions or predict what decisions it might make, according to a recent study out of the Max-Planck Institute for Humans and Machines. Study co-author and research group leader Manuel Cebrian understands that the concept of a human-built machine humans do not understand may sound absurd to some, but he explains that such technology is already in existence.

"There are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity," says study co-author Manuel Cebrian, per Business Insider.

Superintelligence poses different barriers than most subjects of robot ethics, given its ability to adapt beyond the original scope of its programming. So, to study the problem, the research group conceived of a theoretical calculation called a containment algorithm, to see whether an artificial intelligence could be controlled by programming it not to harm humans under any circumstances and to halt if an action is considered harmful. However, the researchers found that within the current bounds of computing, an air-tight algorithm to this effect could not be created; as the research group states, the containment problem is incomputable.

"If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable," explains Iyad Rahwan, Director of the Center for Humans and Machines.
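The underlying argument is a relative of Turing's halting problem, and a toy sketch makes the self-reference trap visible. The Python code below (with all names invented here) is an illustration of that style of argument, not the researchers' actual proof: if a perfect, always-terminating containment check existed, one could build a program that consults the check on itself and then does the opposite, so no such check can be both total and correct.

def would_act_harmfully(program, argument):
    """Hypothetical perfect containment check: always terminates and correctly
    predicts whether program(argument) would cause harm. No such total,
    correct checker can exist; this stub only marks where it would sit."""
    raise NotImplementedError

def do_harm():
    return "harmful action"   # stand-in for any behaviour the checker forbids

def contrarian(program):
    # Does the opposite of whatever the checker predicts about running
    # 'program' on itself.
    if would_act_harmfully(program, program):
        return "behave safely"   # predicted harmful, so act safely
    else:
        return do_harm()         # predicted safe, so do harm

# Feeding the contrarian to itself is the contradiction: whichever answer
# would_act_harmfully(contrarian, contrarian) gives, contrarian does the
# reverse, so the checker's prediction cannot be correct in all cases.

This is why the researchers describe the containment problem as incomputable rather than merely hard: the obstacle is a limit of computation itself, not a shortage of computing power.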

And to extend this line of reasoning, we therefore also may not be able to predict when super-intelligent machines will evolve, or even know when they have arrived. Scientists, including at times controversial figures like Elon Musk, have warned in recent years about the more nefarious potentials of powerful AI, and these questions and fears are hardly new among casual followers of tech news. Innovation in the sphere nonetheless continues, with recent projects like Mercedes-Benz's 56-inch artificial intelligence hub.

Follow this link:

A New Study Finds the Limits of Humans' Ability to Control AI - HYPEBEAST

Vue Cambridge: All the films coming to the cinema including Wonder Woman 1984 and Superintelligence – Cambridgeshire Live

Film lovers in Cambridge can look forward to a host of hotly-anticipated releases this month, as new blockbusters arrive at Vue cinema in the city.

Football fans are also being catered for, as Vue broadcasts BT Sport's live Premier League coverage on the big screen.

A selection of classic Christmas content is also available, so that everyone has something to look forward to.

Read below to see what is coming to the cinema soon.

The long-awaited cinema release of Wonder Woman 1984 is finally here.

Starring Gal Gadot once again in the title role, the DC Comics epic sees Wonder Woman face off against Maxwell Lord and Cheetah, a villain possessing superhuman strength.

In Superintelligence, Melissa McCarthy stars as Carol Peters, a former corporate executive who has her humble life turned upside down when she is selected for observation by an artificial intelligence that may or may not take over the world.

Dreamland is a thriller about a young farm boy who helps a seductive fugitive hiding in his small town during the Great Depression.

Vue Cambridge also broadcasts BT Sport's live Premier League coverage.

On Saturday 19 December, fans can watch Crystal Palace vs Liverpool on the big screen from 12:30.

Alan Carter, General Manager at Vue Cambridge, said: "We're thrilled to have top quality new content amongst our selection of shows this month. Our customers have been waiting for the return of huge releases like Wonder Woman 1984 to the big screen, and we hope they will be as excited as we are and look forward to returning to the cinema.

"We also continue to have a fantastic selection of classic Christmas content available for festive film-lovers. With all that's going on, we hope there's something for everyone to escape to."

Tickets are available now at myvue.com. For more information on the safety measures being implemented, customers can visit myvue.com/stay-safe.

For more information on Big Screen Sports at Vue, customers can visit: myvue.com/big-screen-events/sports.

Excerpt from:

Vue Cambridge: All the films coming to the cinema including Wonder Woman 1984 and Superintelligence - Cambridgeshire Live

Nomadland: Why this Frances McDormand-starrer is the perfect film for 2020 – Stuff.co.nz

Nomadland (M, 108 mins) Directed by Chloe Zhao ****

Sometime in late 2021, perhaps, Chloe Zhao will become known as the director of Eternals, an upcoming Marvel Studios behemoth starring Angelina Jolie and many others, no doubt prancing around the galaxy in spandex in the hopes of saving the Earth from the clutches of some evil wizard. Or so I imagine.

But until then, Zhao has been free to work in relative anonymity and turn out three disparate, but quietly beautiful films. Her Songs My Brothers Taught Me and The Rider were both set in Indigenous communities, far away from any vision of North America that usually makes it to our screens. Both featured non-actors in leading roles, portraying people based closely on themselves.

After a screening of The Rider, Zhao met Frances McDormand. McDormand had already optioned Jessica Bruder's non-fiction book Nomadland: Surviving America in the 21st Century and so a perfect creative collision of the right director being in charge of the right material occurred.



Frances McDormand plays Fern in Nomadland.

Nomadland, as with Zhao's previous films, is fictional, but only just. It is based on true stories of the road, of people who were cast out of work and security in the aftermath of the 2008 financial crisis and who chose never to return. They are modern nomads, living a life John Steinbeck would recognise, travelling across the continent to where the seasonal work is, independent, but still a part of a mutable community of like-minded travellers.

McDormand is here as Fern. Her company-town life in Nevada has ceased to exist and her husband died years before. With enough money to kit out a van, but nothing like enough to buy even a cheap house, Fern hits the road, eventually meeting the people she needs to, to become a part of the community who call themselves Nomadland.

Many of these people are on-screen as themselves. Among the few professional actors, David Strathairn is wonderful as a potential partner for Fern.


Nomadland is a paean to all people who have re-examined their lives, shifted their priorities and rediscovered the profound magic of empathy and quiet resilience during the past year.

Zhao's film is a slow-cooked triumph of detail and watchfulness over spectacle and drama. And yet, in these tiny, perfectly observed human stories, there is more being said about the broken state of America and its systems and values today than in a dozen noisier, more attention-seeking dramas.

With award season on its way, it's probably, deservedly, inevitable that Nomadland will be getting all the publicity it needs.

Personally, I found it to be pretty much the perfect film for 2020; a paean to all people who have re-examined their lives, shifted their priorities and rediscovered the profound magic of empathy and quiet resilience. You'll quite probably love it too.

Nomadland begins screening in select New Zealand cinemas on Boxing Day.

Read more:

Nomadland: Why this Frances McDormand-starrer is the perfect film for 2020 - Stuff.co.nz

"I should be the main character". The plot twist that led to Melissa McCarthy’s new role. – Mamamia

"Originally it was written for a malelead and Melissa was going to be the voice," Ben, 47, told Mamamia. "Then she kept reading the script and started saying 'I think I should be the main character' because she just fell in love with the story.

"I was so taken with the story that I just kept worming my way into it," Melissa, 50, agreed. "I just kept nudging him by saying 'Well... Im not doing anything.' I wanted to throw my name in the hat and see what could happen there.

"I loved the idea of this character. She is trying so hard but it's not perfect, because nothing is. We wanted to tell a story where the audience would be really rooting for somebody because things are scary but also hopeful. I wanted to insert myself into that story.

"So I asked Ben (she says, gesturing wildly at her husband) and he didnt say no. I havent been fired yet."

"And then it turned out great," Ben added. "Then we thought of Bobby immediately for the part of Carol's love interest George. Which is a wonderful way to put a movie together because you just call a friend and say, 'Do you want to do it?'"

The couple's humorous relationship dynamic spilled onto the set, and soon they were improvising new scenes and slices of dialogue that found their way into the movie.

In Superintelligence, Ben once again plays a small role alongside his wife by taking on the character of NSA Agent Charles Kuiper (although I don't think it's destined to be quite as iconic as his Bridesmaids cameo as Air Marshal Jon), and they both name those shared scenes as their favourites to film.

"I did love the scene where you guys threw me in a van and took me to a warehouse," Melissa said, turning to Ben and laughing. "Just that whole Law and Ordertype moment. Its where the plot of the movie really shifts and you start thinking, 'Something really dangerous is going on.'"

She continued: "And just watching Ben play an FBI agent was so funny. We just improvised so much stuff. Most of what's in that part of the movie is improvisation. It was so hard for me to stay in character and look like I was frightened because he was doing and saying the most insane things.

"Its a good day at the office when you can barely keep it together."

At this point in the interview, Bobby Cannavale turns to Ben, confused, and says, "Are you sure you're in the movie? When do you show up?"

"Did you read it?" Melissa asks, feigning shock.

See the rest here:

"I should be the main character". The plot twist that led to Melissa McCarthy's new role. - Mamamia

Finally! HBO Max comes to Roku (and PS5) – here’s how to set it up – Komando

If you haven't seen Game of Thrones or The Sopranos (where have you been?) and you own a Roku streaming device, we have some good news for you! HBO Max is finally available on Roku, and you can download the app right now. Tap or click here to see the top streaming devices compared.

By simply logging in to your account, you will be able to watch the new Wonder Woman 1984 on Christmas Day, as well as The Matrix 4, Dune and The Suicide Squad next year.

HBO Max launched seven months ago, and Roku is one of the last streaming devices to add it. Let's go over how to add the popular streaming service to your device.

The channel can be downloaded in two different ways: either from the Roku website or from the channel store on the device.


If you are an existing subscriber on Roku, you don't have to do anything. The HBO channel will automatically update to HBO Max.

With Wonder Woman 1984 available for streaming on Christmas Day, you might be wondering what else is available to watch.

Along with all original HBO shows, you also get access to all Warner Bros. films set for release next year, plus DC and Adult Swim content. The brand-new Space Jam: A New Legacy will also be made available for viewing.


For wintery binge-watching days, complete seasons of Friends, The Fresh Prince of Bel-Air and The Big Bang Theory are all available.

A subscription to HBO Max also affords you access to Lovecraft Country, The Undoing, The Flight Attendant, Superintelligence and Search Party.

In total, there are more than 10,000 hours of viewable content for you to enjoy.

HBO Max has also been made available for Sony's latest console, the PlayStation 5. Out of all the major streaming services, HBO was the only one that wasn't previously available for PS5.


If you already subscribe to HBO Now or HBO through a participating provider, you don't have to pay extra to upgrade to HBO Max. New subscribers can pick up the full package at $14.99 per month. In addition to Roku and PS5 systems, the service is also available on Comcast's Xfinity X1 and Xfinity Flex, Amazon's Fire TV and Fire Tablet devices and Apple TV.

Read the rest here:

Finally! HBO Max comes to Roku (and PS5) - here's how to set it up - Komando