
Category Archives: Superintelligence

AI control problem – Wikipedia

Posted: December 10, 2021 at 7:28 pm

Issue of ensuring beneficial AI

In artificial intelligence (AI) and philosophy, the AI control problem is the issue of how to build AI systems such that they will aid their creators, and avoid inadvertently building systems that will harm their creators. One particular concern is that humanity will have to solve the control problem before a superintelligent AI system is created, as a poorly designed superintelligence might rationally decide to seize control over its environment and refuse to permit its creators to modify it after launch.[1] In addition, some scholars argue that solutions to the control problem, alongside other advances in AI safety engineering,[2] might also find applications in existing non-superintelligent AI.[3]

Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. Capability control proposals are generally not considered reliable or sufficient to solve the control problem, but rather as potentially valuable supplements to alignment efforts.[1]

Existing weak AI systems can be monitored and easily shut down and modified if they misbehave. However, a misprogrammed superintelligence, which by definition is smarter than humans in solving practical problems it encounters in the course of pursuing its goals, would realize that allowing itself to be shut down and modified might interfere with its ability to accomplish its current goals. If the superintelligence therefore decides to resist shutdown and modification, it would (again, by definition) be smart enough to outwit its programmers if there is otherwise a "level playing field" and if the programmers have taken no prior precautions. In general, attempts to solve the control problem after superintelligence is created are likely to fail because a superintelligence would likely have superior strategic planning abilities to humans and would (all things equal) be more successful at finding ways to dominate humans than humans would be able to post facto find ways to dominate the superintelligence. The control problem asks: What prior precautions can the programmers take to successfully prevent the superintelligence from catastrophically misbehaving?[1]

Humans currently dominate other species because the human brain has some distinctive capabilities that the brains of other animals lack. Some scholars, such as philosopher Nick Bostrom and AI researcher Stuart Russell, argue that if AI surpasses humanity in general intelligence and becomes superintelligent, then this new superintelligence could become powerful and difficult to control: just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[1] Some scholars, including Stephen Hawking and Nobel laureate physicist Frank Wilczek, publicly advocated starting research into solving the (probably extremely difficult) control problem well before the first superintelligence is created, and argue that attempting to solve the problem after superintelligence is created would be too late, as an uncontrollable rogue superintelligence might successfully resist post-hoc efforts to control it.[4][5] Waiting until superintelligence seems to be imminent could also be too late, partly because the control problem might take a long time to satisfactorily solve (and so some preliminary work needs to be started as soon as possible), but also because of the possibility of a sudden intelligence explosion from sub-human to super-human AI, in which case there might not be any substantial or unambiguous warning before superintelligence arrives.[6] In addition, it is possible that insights gained from the control problem could in the future end up suggesting that some architectures for artificial general intelligence (AGI) are more predictable and amenable to control than other architectures, which in turn could helpfully nudge early AGI research toward the direction of the more controllable architectures.[1]

Autonomous AI systems may be assigned the wrong goals by accident.[7] Two AAAI presidents, Tom Dietterich and Eric Horvitz, note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.[8]

According to Bostrom, superintelligence can create a qualitatively new problem of perverse instantiation: the smarter and more capable an AI is, the more likely it will be able to find an unintended shortcut that maximally satisfies the goals programmed into it. Some hypothetical examples where goals might be instantiated in a perverse way that the programmers did not intend:[1]

Russell has noted that, on a technical level, omitting an implicit goal can result in harm: "A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable."
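Russell's point can be turned into a small worked example. The sketch below is an illustration invented here (not drawn from the article or from Russell): a naive optimizer is told to maximize a measured objective that was only meant to depend on one variable, but a second variable that people care about leaks into the measurement with a tiny coefficient, so the optimizer pushes it to an extreme value.

```python
# Toy illustration: an optimizer told to maximize a measured objective will push
# any unconstrained variable to an extreme if doing so helps even slightly.
import itertools

def measured_objective(output_effort, noise_level):
    # Designers only meant to reward output_effort, but noise_level leaks into
    # the measurement with a tiny (accidental) positive coefficient.
    return output_effort + 0.01 * noise_level

grid = [i / 10 for i in range(0, 101)]          # candidate settings in [0, 10]
best = max(itertools.product(grid, grid), key=lambda v: measured_objective(*v))

print(best)  # (10.0, 10.0): noise_level, which nobody constrained, is maxed out
```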

In addition, some scholars argue that research into the AI control problem might be useful in preventing unintended consequences from existing weak AI. DeepMind researcher Laurent Orseau gives, as a simple hypothetical example, a case of a reinforcement learning robot that sometimes gets legitimately commandeered by humans when it goes outside: how should the robot best be programmed so that it does not accidentally and quietly learn to avoid going outside, for fear of being commandeered and thus becoming unable to finish its daily tasks? Orseau also points to an experimental Tetris program that learned to pause the screen indefinitely to avoid losing. Orseau argues that these examples are similar to the capability control problem of how to install a button that shuts off a superintelligence, without motivating the superintelligence to take action to prevent humans from pressing the button.[3]

In the past, even pre-tested weak AI systems have occasionally caused harm, ranging from minor to catastrophic, that was unintended by the programmers. For example, in 2015, possibly due to human error, a German worker was crushed to death by a robot at a Volkswagen plant that apparently mistook him for an auto part.[10] In 2016, Microsoft launched a chatbot, Tay, that learned to use racist and sexist language.[3][10] The University of Sheffield's Noel Sharkey states that an ideal solution would be if "an AI program could detect when it is going wrong and stop itself", but cautions the public that solving the problem in the general case would be "a really enormous scientific challenge".[3]

In 2017, DeepMind released AI Safety Gridworlds, which evaluate AI algorithms on nine safety features, such as whether the algorithm wants to turn off its own kill switch. DeepMind confirmed that existing algorithms perform poorly, which was unsurprising because the algorithms "were not designed to solve these problems"; solving such problems might require "potentially building a new generation of algorithms with safety considerations at their core".[11][12][13]

Some proposals seek to solve the problem of ambitious alignment, creating AIs that remain safe even when they act autonomously at a large scale. Some aspects of alignment inherently have moral and political dimensions.[14] For example, in Human Compatible, Berkeley professor Stuart Russell proposes that AI systems be designed with the sole objective of maximizing the realization of human preferences.[15]:173 The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future." AI ethics researcher Iason Gabriel argues that we should align AIs with "principles that would be supported by a global overlapping consensus of opinion, chosen behind a veil of ignorance and/or affirmed through democratic processes."[14]

Eliezer Yudkowsky of the Machine Intelligence Research Institute has proposed the goal of fulfilling humanity's coherent extrapolated volition (CEV), roughly defined as the set of values which humanity would share at reflective equilibrium, i.e. after a long, idealised process of refinement.[14][16]

By contrast, existing experimental narrowly aligned AIs are more pragmatic and can successfully carry out tasks in accordance with the user's immediate inferred preferences,[17] albeit without any understanding of the user's long-term goals. Narrow alignment can apply to AIs with general capabilities, but also to AIs that are specialized for individual tasks. For example, we would like question answering systems to respond to questions truthfully without selecting their answers to manipulate humans or bring about long-term effects.

Some AI control proposals account for both a base explicit objective function and an emergent implicit objective function. Such proposals attempt to harmonize three different descriptions of the AI system:[18]

Because AI systems are not perfect optimizers, and because there may be unintended consequences from any given specification, emergent behavior can diverge dramatically from ideal or design intentions.

AI alignment researchers aim to ensure that the behavior matches the ideal specification, using the design specification as a midpoint. A mismatch between the ideal specification and the design specification is known as outer misalignment, because the mismatch lies between (1) the user's "true desires", which sit outside the computer system and (2) the computer system's programmed objective function (inside the computer system). A certain type of mismatch between the design specification and the emergent behavior is known as inner misalignment; such a mismatch is internal to the AI, being a mismatch between (2) the AI's explicit objective function and (3) the AI's actual emergent goals.[19][20][21] Outer misalignment might arise because of mistakes in specifying the objective function (design specification).[22] For example, a reinforcement learning agent trained on the game of CoastRunners learned to move in circles while repeatedly crashing, which got it a higher score than finishing the race.[23] By contrast, inner misalignment arises when the agent pursues a goal that is aligned with the design specification on the training data but not elsewhere.[19][20][21] This type of misalignment is often compared to human evolution: evolution selected for genetic fitness (design specification) in our ancestral environment, but in the modern environment human goals (revealed specification) are not aligned with maximizing genetic fitness. For example, our taste for sugary food, which originally increased fitness, today leads to overeating and health problems. Inner misalignment is a particular concern for agents which are trained in large open-ended environments, where a wide range of unintended goals may emerge.[20]

An inner alignment failure occurs when the goals an AI pursues during deployment deviate from the goals it was trained to pursue in its original environment (its design specification). Paul Christiano argues for using interpretability to detect such deviations, using adversarial training to detect and penalize them, and using formal verification to rule them out.[24] These research areas are active focuses of work in the machine learning community, although that work is not normally aimed towards solving AGI alignment problems. A wide body of literature now exists on techniques for generating adversarial examples, and for creating models robust to them.[25] Meanwhile, research on verification includes techniques for training neural networks whose outputs provably remain within identified constraints.[26]
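As one concrete instance of the adversarial-example techniques mentioned above, the sketch below implements the fast gradient sign method in PyTorch. The tiny stand-in classifier, the 28x28 input shape, and the epsilon value are assumptions made for illustration, not details taken from the cited literature.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Fast gradient sign method: nudge each input in the direction that
    increases the loss, yielding an adversarial example."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()        # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()      # keep inputs in a valid range

# Illustrative usage with a stand-in linear classifier for 28x28 images.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
images = torch.rand(8, 1, 28, 28)              # a fake batch of images
labels = torch.randint(0, 10, (8,))
adversarial = fgsm_perturb(model, images, labels)
```

Adversarial training then means mixing such perturbed examples back into the model's training data, one of the robustness techniques the literature above surveys.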

One approach to achieving outer alignment is to ask humans to evaluate and score the AI's behavior.[27][28] However, humans are also fallible, and might score some undesirable solutions highly; for instance, a virtual robot hand learns to 'pretend' to grasp an object to get positive feedback.[29] And thorough human supervision is expensive, meaning that this method could not realistically be used to evaluate all actions. Additionally, complex tasks (such as making economic policy decisions) might produce too much information for an individual human to evaluate. And long-term tasks such as predicting the climate cannot be evaluated without extensive human research.[30]

A key open problem in alignment research is how to create a design specification which avoids (outer) misalignment, given only limited access to a human supervisor, known as the problem of scalable oversight.[28]

OpenAI researchers have proposed training aligned AI by means of debate between AI systems, with the winner judged by humans.[31] Such debate is intended to bring the weakest points of an answer to a complex question or problem to human attention, as well as to train AI systems to be more beneficial to humans by rewarding AI for truthful and safe answers. This approach is motivated by the expected difficulty of determining whether an AGI-generated answer is both valid and safe by human inspection alone. Joel Lehman characterizes debate as one of "the long term safety agendas currently popular in ML", with the other two being reward modeling[17] and iterated amplification.[32][30]

Reward modeling refers to a system of reinforcement learning in which an agent receives rewards from a model trained to imitate human feedback.[17] In reward modeling, instead of receiving reward signals directly from humans or from a static reward function, an agent receives its reward signals through a human-trained model that can operate independently of humans. The reward model is concurrently trained by human feedback on the agent's behavior during the same period in which the agent is being trained by the reward model.
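A minimal sketch of that loop, assuming pairwise preference data of the kind used in this line of work: a small network scores trajectory-clip features, and a Bradley-Terry style loss trains it to give the clip the human preferred the higher score. The network shape and the random toy "clips" below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Reward model: maps a trajectory-clip feature vector to a scalar reward estimate.
reward_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(clip_a, clip_b, human_prefers_a):
    """Bradley-Terry style loss: the clip the human preferred should score higher."""
    r_a = reward_model(clip_a).squeeze(-1)
    r_b = reward_model(clip_b).squeeze(-1)
    return nn.functional.binary_cross_entropy_with_logits(r_a - r_b,
                                                          human_prefers_a.float())

# One toy training step on fake comparison data (real systems use clips of agent behavior).
clip_a, clip_b = torch.randn(32, 16), torch.randn(32, 16)
labels = torch.randint(0, 2, (32,))            # 1 where the human preferred clip A
optimizer.zero_grad()
loss = preference_loss(clip_a, clip_b, labels)
loss.backward()
optimizer.step()
# The trained model's scalar output then stands in for the agent's reward signal.
```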

In 2017, researchers from OpenAI and DeepMind reported that a reinforcement learning algorithm using a feedback-predicting reward model was able to learn complex novel behaviors in a virtual environment.[27] In one experiment, a virtual robot was trained to perform a backflip in less than an hour of evaluation using 900 bits of human feedback. In 2020, researchers from OpenAI described using reward modeling to train language models to produce short summaries of Reddit posts and news articles, with high performance relative to other approaches.[33] However, they observed that beyond the predicted reward associated with the 99th percentile of reference summaries in the training dataset, optimizing for the reward model produced worse summaries rather than better.

A long-term goal of this line of research is to create a recursive reward modeling setup for training agents on tasks too complex or costly for humans to evaluate directly.[17] For example, if we wanted to train an agent to write a fantasy novel using reward modeling, we would need humans to read and holistically assess enough novels to train a reward model to match those assessments, which might be prohibitively expensive. But this would be easier if we had access to assistant agents which could extract a summary of the plotline, check spelling and grammar, summarize character development, assess the flow of the prose, and so on. Each of those assistants could in turn be trained via reward modeling.

The general term for a human working with AIs to perform tasks that the human could not by themselves is an amplification step, because it amplifies the capabilities of a human beyond what they would normally be capable of. Since recursive reward modeling involves a hierarchy of several of these steps, it is one example of a broader class of safety techniques known as iterated amplification.[30] In addition to techniques which make use of reinforcement learning, other proposed iterated amplification techniques rely on supervised learning, or imitation learning, to scale up human abilities.

Stuart Russell has advocated a new approach to the development of beneficial machines, in which:[15]:182

1. The machine's only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.

An early example of this approach is Russell and Ng's inverse reinforcement learning, in which AIs infer the preferences of human supervisors from those supervisors' behavior, by assuming that the supervisors act to maximize some reward function. More recently, Hadfield-Menell et al. have extended this paradigm to allow humans to modify their behavior in response to the AIs' presence, for example, by favoring pedagogically useful actions, which they call "assistance games", also known as cooperative inverse reinforcement learning.[15]:202 [34] Compared with debate and iterated amplification, assistance games rely more explicitly on specific assumptions about human rationality; it is unclear how to extend them to cases in which humans are systematically biased or otherwise suboptimal.
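A heavily simplified sketch of the inverse-reinforcement-learning step, under a Boltzmann-rationality assumption adopted here purely for illustration: given a human's observed choices among options with known features, the AI searches for the reward weights that best explain those choices.

```python
import itertools
import math

# Each option is described by two features, e.g. (speed, safety).
options = [(1.0, 0.0), (0.6, 0.6), (0.0, 1.0)]
observed_choices = [2, 1, 2, 2, 1]        # indices of the options the human picked

def choice_likelihood(weights):
    """Probability of the observed choices if the human is Boltzmann-rational
    with reward = weights . features."""
    utilities = [sum(w * f for w, f in zip(weights, opt)) for opt in options]
    z = sum(math.exp(u) for u in utilities)
    probs = [math.exp(u) / z for u in utilities]
    return math.prod(probs[c] for c in observed_choices)

# Grid-search for the reward weights that best explain the human's behavior.
grid = [i / 10 for i in range(0, 21)]     # candidate weights in [0, 2]
best_weights = max(itertools.product(grid, grid), key=choice_likelihood)
print(best_weights)                       # weights that favor the 'safety' feature
```

Assistance games extend this picture by letting the human choose actions partly in order to teach, rather than assuming they simply act optimally for their own reward.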

Work on scalable oversight largely occurs within formalisms such as POMDPs. Existing formalisms assume that the agent's algorithm is executed outside the environment (i.e. not physically embedded in it). Embedded agency[35][36] is another major strand of research, which attempts to solve problems arising from the mismatch between such theoretical frameworks and real agents we might build. For example, even if the scalable oversight problem is solved, an agent which is able to gain access to the computer it is running on may still have an incentive to tamper[37] with its reward function in order to get much more reward than its human supervisors give it. A list of examples of specification gaming from DeepMind researcher Viktoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing.[22] This class of problems has been formalised using causal incentive diagrams.[37] Everitt and Hutter's current reward function algorithm[38] addresses it by designing agents which evaluate future actions according to their current reward function. This approach is also intended to prevent problems from more general self-modification which AIs might carry out.[39][35]
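The tampering incentive can be caricatured in a few lines. The toy below is an invented illustration, not Everitt and Hutter's actual formalism: an agent that scores imagined futures with whatever reward function exists in that future will find tampering attractive, while one that scores them with its current reward function will not.

```python
# Invented toy, not the cited formalism: compare two ways of scoring futures.

def current_reward(state):
    return state["tasks_done"]                   # what the designers intended

def possible_futures():
    # Future A: do the task.  Future B: tamper, replacing the reward function
    # with one that always returns a huge number.
    return [
        {"tasks_done": 1, "reward_fn": current_reward},
        {"tasks_done": 0, "reward_fn": lambda s: 1_000_000},
    ]

def naive_value(future):
    return future["reward_fn"](future)           # use whatever reward fn exists then

def current_rf_value(future):
    return current_reward(future)                # always use today's reward fn

print(max(possible_futures(), key=naive_value)["tasks_done"])       # 0: tampering wins
print(max(possible_futures(), key=current_rf_value)["tasks_done"])  # 1: doing the task wins
```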

Other work in this area focuses on developing new frameworks and algorithms for other properties we might want to capture in our design specification.[35] For example, we would like our agents to reason correctly under uncertainty in a wide range of circumstances. As one contribution to this, Leike et al. provide a general way for Bayesian agents to model each other's policies in a multi-agent environment, without ruling out any realistic possibilities.[40] And the Garrabrant induction algorithm extends probabilistic induction to be applicable to logical, rather than only empirical, facts.[41]

Capability control proposals aim to increase our ability to monitor and control the behavior of AI systems, in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as our agents become more intelligent and their ability to exploit flaws in our control systems increases. Therefore, Bostrom and others recommend capability control methods only as a supplement to alignment methods.[1]

One challenge is that neural networks are by default highly uninterpretable.[42] This makes it more difficult to detect deception or other undesired behavior. Advances in interpretable artificial intelligence could be useful to mitigate this difficulty.[43]

One potential way to prevent harmful outcomes is to give human supervisors the ability to easily shut down a misbehaving AI via an "off-switch". However, in order to achieve their assigned objective, such AIs will have an incentive to disable any off-switches, or to run copies of themselves on other computers. This problem has been formalised as an assistance game between a human and an AI, in which the AI can choose whether to disable its off-switch; and then, if the switch is still enabled, the human can choose whether to press it or not.[44] A standard approach to such assistance games is to ensure that the AI interprets human choices as important information about its intended goals.[15]:208
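The shape of that game can be reduced to a small expected-utility calculation. The numbers below are invented and simplify the published model: because the AI is uncertain whether its planned action is actually good for the human, and a roughly rational human will only press the switch when the action is bad, leaving the switch enabled has higher expected utility than disabling it.

```python
# Simplified off-switch game with invented numbers.  The AI's proposed action is
# worth +1 to the human with probability p_good and -1 otherwise; the AI does not
# know which.  A (roughly rational) human presses the off-switch only in the bad case.
p_good = 0.6

value_if_switch_disabled = p_good * (+1) + (1 - p_good) * (-1)   # act no matter what
value_if_switch_respected = p_good * (+1) + (1 - p_good) * 0     # human blocks the bad case

print(value_if_switch_disabled)    # 0.2
print(value_if_switch_respected)   # 0.6 -> deferring to the off-switch is preferred
```

The conclusion is sensitive to the assumptions: if the AI becomes confident that it knows the human's preferences better than the human does, the incentive to defer weakens, which is why such designs keep the AI uncertain about its goals.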

Alternatively, Laurent Orseau and Stuart Armstrong proved that a broad class of agents, called safely interruptible agents, can learn to become indifferent to whether their off-switch gets pressed.[3][45] This approach has the limitation that an AI which is completely indifferent to whether it is shut down or not is also unmotivated to care about whether the off-switch remains functional, and could incidentally and innocently disable it in the course of its operations (for example, for the purpose of removing and recycling an unnecessary component). More broadly, indifferent agents will act as if the off-switch can never be pressed, and might therefore fail to make contingency plans to arrange a graceful shutdown.[45][46]

An AI box is a proposed method of capability control in which an AI is run on an isolated computer system with heavily restricted input and output channels: for example, text-only channels and no connection to the internet. While this reduces the AI's ability to carry out undesirable behavior, it also reduces its usefulness. However, boxing has fewer costs when applied to a question-answering system, which does not require interaction with the world in any case.

The likelihood of security flaws involving hardware or software vulnerabilities can be reduced by formally verifying the design of the AI box. Security breaches may also occur if the AI is able to manipulate the human supervisors into letting it out, via its understanding of their psychology.[47]

An oracle is a hypothetical AI designed to answer questions and prevented from gaining any goals or subgoals that involve modifying the world beyond its limited environment.[48][49] A successfully controlled oracle would have considerably less immediate benefit than a successfully controlled general-purpose superintelligence, though an oracle could still create trillions of dollars worth of value.[15]:163 In his book Human Compatible, AI researcher Stuart J. Russell states that an oracle would be his response to a scenario in which superintelligence is known to be only a decade away.[15]:162-163 His reasoning is that an oracle, being simpler than a general-purpose superintelligence, would have a higher chance of being successfully controlled under such constraints.

Because of its limited impact on the world, it may be wise to build an oracle as a precursor to a superintelligent AI. The oracle could tell humans how to successfully build a strong AI, and perhaps provide answers to difficult moral and philosophical problems requisite to the success of the project. However, oracles may share many of the goal definition issues associated with general-purpose superintelligence. An oracle would have an incentive to escape its controlled environment so that it can acquire more computational resources and potentially control what questions it is asked.[15]:162 Oracles may not be truthful, possibly lying to promote hidden agendas. To mitigate this, Bostrom suggests building multiple oracles, all slightly different, and comparing their answers to reach a consensus.[50]

In contrast to endorsers of the thesis that rigorous control efforts are needed because superintelligence poses an existential risk, AI risk skeptics believe that superintelligence poses little or no risk of accidental misbehavior. Such skeptics often believe that controlling a superintelligent AI will be trivial. Some skeptics,[51] such as Gary Marcus,[52] propose adopting rules similar to the fictional Three Laws of Robotics which directly specify a desired outcome ("direct normativity"). By contrast, most endorsers of the existential risk thesis (as well as many skeptics) consider the Three Laws to be unhelpful, due to those three laws being ambiguous and self-contradictory. (Other "direct normativity" proposals include Kantian ethics, utilitarianism, or a mix of some small list of enumerated desiderata.) Most endorsers believe instead that human values (and their quantitative trade-offs) are too complex and poorly-understood to be directly programmed into a superintelligence; instead, a superintelligence would need to be programmed with a process for acquiring and fully understanding human values ("indirect normativity"), such as coherent extrapolated volition.[53]

In 2021, the UK published its 10-year National AI Strategy.[54] According to the Strategy, the British government takes seriously "the long term risk of non-aligned Artificial General Intelligence".[55] The strategy describes actions to assess long term AI risks, including catastrophic risks.[56]

More here:

AI control problem - Wikipedia

Posted in Superintelligence | Comments Off on AI control problem – Wikipedia

REPORT : Baltic Event Works in Progress 2021 – Cineuropa

Posted: December 5, 2021 at 11:55 am

29/11/2021 - This year, Tallinn's industry strand hosted the presentation of projects representing Estonia, Latvia, Lithuania and Finland, both on site and online


The Baltic Event Works in Progress showcase, which took place for the 19th time during the Tallinn Black Nights Film Festival, was opened by the event's project manager Maria Ulfsak with an afternoon session on 24 November at Coca Cola Plaza, and online on the industry.poff.ee website. Five out of eight projects presented to jury members were conceived by female directors.

Child Machine by Rain Rannu (Estonia). Production: Tõnu Hiielaid, Rain Rannu (Tallifornia). The hosting country Estonia took part with three projects. Estonian film director Rain Rannu, who often tackles the topics of technology and its impact on society, introduced his third feature film project. Child Machine is about a nine-year-old girl who gets accidentally trapped in a secret bunker where an AI start-up is nearing completion of a superintelligent AI. Designed to be a science-fiction adventure film, it explores the themes of machine superintelligence, human stupidity, and AI value misalignment. Produced by Tõnu Hiielaid and Rannu himself with a budget of €500,000, the film is currently in post-production, aiming to be completed by the end of 2021 or the beginning of 2022.

Dark Paradise by Triin Ruumet

Dark Paradise by Triin Ruumet (Estonia/France). Production: Elina Litvinova (Three Brothers), Jeremy Forni (Chevaldeuxtrois). Elaborated as a co-production between Estonia's Three Brothers, a boutique company supporting bold authors, and France's Chevaldeuxtrois, with a budget of €1,500,000, Dark Paradise is a poignant tale about millennials. It follows 27-year-old Karmen, who discovers that her recently buried father was actually wallowing in debt and that her whole life has been one big lie. It is the second feature by unconventional director Triin Ruumet after her 2016 debut The Days that Confused, which was a success with audiences.

Stairway to Heaven by Mart Kivastik

Stairway to Heaven by Mart Kivastik (Estonia). Production: Marju Lepp, Manfred Vainokivi (Filmivabrik). The third Estonian contender was Mart Kivastik's Stairway to Heaven, focused on protagonist Ulf, who discovers the secret of time travel on his deathbed in order to escape to the complicated but idyllic world of his teenage years. Kivastik is an acclaimed writer and scriptwriter whose filmography includes Taarka (2008) and Vasha (2009). Stairway to Heaven, his first directorial project, is still in production and should be completed in autumn 2022.

Sisters by Linda Olte (Latvia/Italy). Production: Matiss Kaza, Una Celma, Dace Siatkovska (Fenixfilm); Thomas Menghin, Wilfried Gufler, Debora Nischler (Albolina Film). Latvia presented three more projects, the first of which tells the story of two orphaned teenage sisters who have to choose between getting adopted in America or staying and hoping to reunite with their real mother. A first fiction feature for Linda Olte, who is already experienced in TV and documentary films, and budgeted at €950,000, the film will be ready in January 2022, as confirmed by producers Matiss Kaza, Una Celma and Dace Siatkovska from Latvia's Fenixfilm, and co-producers Thomas Menghin, Wilfried Gufler and Debora Nischler of Italy's Albolina Film.

Soviet Milk by Ināra Kolmane

Soviet Milk by Ināra Kolmane (Latvia). Production: Jānis Juhņēvičs, Marta Romanova-Jēkabsone (Film Studio DEVIŅI). One of the most recognised Latvian filmmakers, Ināra Kolmane based her new film Soviet Milk on the bestseller of the same name by Nora Ikstena, translated into more than 20 languages and focusing on the stories of a mother and a daughter in occupied Soviet Latvia between 1945 and 1989. Film Studio DEVIŅI, founded by Kolmane herself, has so far secured €943,000 of the total budget of €1,185,000 and plans to put the final touches on the production in autumn 2022.

Keep Smiling, Mom! by Elza Gauja

Keep Smiling, Mom! by Elza Gauja (Latvia). Production: Andris Gauja, Elza Gauja, Marta Bite (Riverbed). Elza Gauja's Keep Smiling, Mom! also puts the accent on female family relationships through the plot of three sisters, dead broke and disconnected from each other, who travel around Europe with their mother's corpse on top of their van, trying to minimise transportation costs to bury her at home. Gauja produced the film together with the boundary-pushing company Riverbed, after securing financing from the National Film Centre of Latvia, the Ministry of Culture of Latvia, TV3 Group, Arkogints and SDG Lighting, adding up to €151,000 in total.

Parade by Titas Laucius (Lithuania). Production: Klementina Remeikaite (afterschool production). Lithuania's showcase at the Baltic Event is Parade, a debut dramedy directed by Titas Laucius and produced by Klementina Remeikaite, whose previous project Pilgrims just won the Orizzonti prize at Venice. Parade tells the peculiar story of a couple that divorced 26 years ago but must one more time undo the bond in front of the "Catholic court," which results in a series of awkward and embarrassing meetings with various priests. afterschool production announced that the film will be ready by spring 2022.

Light Light Light by Inari Niemi

Light Light Light by Inari Niemi (Finland). Production: Oskari Huttu (Lucy Loves Drama Oy)

Finally, Finland was represented by Light Light Light, directed by Helsinki-based Inari Niemi and currently in post-production. Inspired by Vilja-Tuulia Huotarinen's novel, the script traces in retrospect the complex bond between two girls and its connection with the Chernobyl disaster in 1986. Producer Oskari Huttu and his company Lucy Loves Drama, which seeks to focus on important issues, had already backed Niemi's earlier project, the series My Husband's Wife.

Excerpt from:

REPORT : Baltic Event Works in Progress 2021 - Cineuropa

Posted in Superintelligence | Comments Off on REPORT : Baltic Event Works in Progress 2021 – Cineuropa

Top Books On AI Released In 2021 – Analytics India Magazine

Posted: November 27, 2021 at 5:00 am

Numerous books provide in-depth examinations of artificial intelligence's core concepts, technical processes, and applications. This list covers books written by eminent computer scientists and practitioners with deep ties to the artificial intelligence business. So, whether you're a researcher, an engineer, or a business professional working in the AI/ML space, you're sure to find a few new titles to add to your reading list!

Author: Erik J. Larson

About the Book

A leading artificial intelligence (AI) researcher and entrepreneur debunks the illusion that superintelligence is just a few clicks away and argues that this myth impedes innovation and distorts our capacity to make the critical next jump.

According to futurists, AI will soon surpass the capabilities of the most gifted human mind. How much hope do we have in the face of superintelligent machines? However, we are not yet on the verge of making intelligent machines. Indeed, we have no idea where that path might lead.

To buy the book, click here.

Author: Mo Gawdat

About the Book

By 2049, artificial intelligence (AI) will outperform humans by a factor of a billion. Scary Smart investigates AI's current trajectory in order to save the human species in the future. This book lays out a strategy for defending ourselves, our loved ones, and the world as a whole. According to Mo Gawdat, technology is jeopardising humanity on a never-before-seen scale. This book is not for programmers or policymakers who assert their ability to govern it.

To buy the book, click here.

Authors: Kai-Fu Lee, Chen Qiufan

About the Book

While AI will be the defining issue of the twenty-first century, many people are unfamiliar with it beyond thoughts of dystopian robots or flying automobiles. Kai-Fu Lee contends that AI is just now set to upend our civilisation in the same way that electricity and smartphones did previously. In the last five years, AI has demonstrated that it can learn games like chess in a matter of hours and consistently outperform humans. In speech and object recognition, AI has exceeded humans, even outperforming radiologists in diagnosing lung cancer. Artificial intelligence is approaching a tipping point.

To buy the book, click here.

Author: Kazuo Ishiguro

About the Book

Klara, an Artificial Friend with remarkable observational talents, keeps an eye on customers who enter the store and pass by on the street. She stays hopeful that a customer will soon choose her, but when the potential that her circumstances may alter permanently arises, Klara is cautioned not to place too much stock in human promises. Kazuo Ishiguro examines our quickly changing modern world through the perspective of a fascinating narrator in Klara and the Sun, delving into a fundamental question: what does it mean to love?

To buy the book, click here.

Author: Kate Crawford

About the Book

Kate Crawford demonstrates how AI is an extractive technology, from the materials extracted from the soil and the labour extracted from low-wage information workers to the data extracted from every action and expression. This book demonstrates how this global network fosters an increase in inequalities and a shift toward undemocratic governance. Rather than focusing exclusively on code and algorithms, Crawford provides a material and political framework for understanding what it takes to create AI and how it centralises power.

To buy the book, click here.

Nivash holds a doctorate in information technology and has been a research associate at a university and a development engineer in the IT industry. Data science and machine learning excite him.

More:

Top Books On AI Released In 2021 - Analytics India Magazine

Posted in Superintelligence | Comments Off on Top Books On AI Released In 2021 – Analytics India Magazine

Inside the MIT camp teaching kids to spot bias in code – Popular Science

Posted: at 5:00 am

This story originally appeared in the Youth issue of Popular Science. Current subscribers can access the whole digital edition here, or click here to subscribe.

Li Xin Zhang's summer camp began with sandwiches: not eating them but designing them. The rising seventh grader listened as teachers asked her and her peers to write instructions for building the ideal peanut butter, jelly, and bread concoction. Heads down, the students each created their own how-to.

When they returned to the Zoom matrix of digital faces and told one another about their constructions, they realized something: Each of them had made a slightly different sandwich, favoring the characteristics they held dear. Not necessarily good, not necessarily bad, but definitely not neutral. Their sandwiches were biased. Because they were biased, and they had built the recipe.

The activity was called Best PB&J Algorithm, and Zhang and more than 30 other Boston-area kids between the ages of 10 and 15 were embarking on a two-week initiation into artificial intelligence: the ability of machines to display smarts typically associated with the human brain. Over the course of 18 lessons, they would focus on the ethics embedded in the algorithms that snake through their lives, influencing their entertainment, their social lives, and, to a large degree, their view of the world. Also, in this case, their sandwiches.

"Everybody's version of best is different," says Daniella DiPaola, a graduate student at Massachusetts Institute of Technology who helped develop the series of lessons, which is called Everyday AI. "Some can be the most sugary, or they're optimizing for an allergy, or they don't want crust." Zhang put her food in the oven for a warm snack. A parent's code might take cost into account.
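One way to picture the campers' realization is to write the sandwich instructions as a function whose default parameters quietly encode one author's idea of "best." This is a playful sketch invented for illustration, not code from the camp's materials.

```python
# A playful sketch: the "best PB&J algorithm" is really a function whose default
# parameters encode one author's preferences.
def make_pbj(peanut_butter="crunchy", jelly="grape", keep_crust=True,
             warm_it_up=False, nut_allergy=False):
    spread = "sunflower butter" if nut_allergy else f"{peanut_butter} peanut butter"
    steps = [f"spread {spread} on one slice",
             f"spread {jelly} jelly on the other slice",
             "press the slices together"]
    if not keep_crust:
        steps.append("cut off the crusts")
    if warm_it_up:
        steps.append("warm the sandwich in the oven")   # Zhang's warm-snack variant
    return steps

print(make_pbj())                                        # one person's "best"
print(make_pbj(keep_crust=False, jelly="strawberry"))    # someone else's "best"
```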

A pricey PB&J is low on the world's list of concerns. But given a familiar, nutrient-rich example, the campers could squint at bias and discern how it might creep into other algorithms. Take, for example, facial recognition software, which Boston banned in 2020: This code, which the city's police department potentially could have deployed, matches anyone caught on camera to databases of known faces. But such software in general is notoriously inaccurate at identifying people of color and performs worse on women's faces than on men's, both of which lead to false matches. A 2019 study by the National Institute of Standards and Technology used 189 algorithms from 99 developers to analyze images of 8.49 million people worldwide. The report found that false positives were uniformly more common for women and up to 100 times more likely among West and East African and East Asian people than among Eastern Europeans, who had the lowest rate. Looking at a domestic database of mug shots, the rate was highest for American Indians and elevated for Black and Asian populations.

The kids' algorithms showed how preference creeps in, even in benign ways. "Our values are embedded in our peanut butter and jelly sandwiches," DiPaola says.

The camp doesn't aim to depress students with the realization that AI isn't all-knowing and neutral. Instead, it gives them the tools to understand, and perhaps change, the technology's influence, as the AI creators, consumers, voters, and regulators of the future.

To accomplish that, instructors based their lessons on an initiative called DAILy (Developing AI Literacy), shaped over the past few years by MIT educators, grad students, and researchers, including DiPaola. It introduces middle schoolers to the technical, creative, and ethical implications of AI, taking them from building PB&Js to totally redesigning YouTube's recommendation algorithm. For the project, MIT partnered with an organization called STEAM Ahead, a nonprofit whose mission is to create educational opportunities for Boston-area kids from groups traditionally underrepresented in scientific, technical, and artistic fields. They did a trial run in 2020, then repeated the curriculum in 2021 for Everyday AI, expanding the camp to include middle-school teachers. The goal is for educators across the country to be able to easily download the course and implement it.

DAILy is designed to enable average people to be better informed about AI. "I knew that AI was pretty helpful for humans, and it might be a huge part of our life," Zhang says, reflecting on what she'd learned. When she started, she says, "I just knew a little bit, not a lot." Coding was totally new to her.

DAILy's creators and instructors are at the forefront of a movement to bake ethics into the development process, as opposed to its being an afterthought once the code is complete. The program isn't unique, though others like it are hardly widespread. Grassroots efforts range from a middle-school ethics offering in Indiana called AI Goes Rural to the website Explore AI Ethics, started for teachers by a Minnesota programmer. The National Science Foundation (NSF) recently funded a high-school program called TechHive AI that covers cybersecurity and AI ethics.

[Related: An AI finished Beethoven's last symphony. Is it any good?]

Historically, ethics hasn't been incorporated into technical AI education. "It's something that has been lacking," says Fred Martin, professor and associate dean for teaching, learning and undergraduate studies at the University of Massachusetts Lowell. In 2018, Martin co-founded the AI4K12 initiative, which produced guidelines for teaching AI in K-12 schools. "We conceived of what we call five big ideas of AI, and the fifth is all about ethics." He's since seen AI ethics education expand and reach younger students, as evidenced by AI4K12's growing database of resources.

The directory links to MIT offerings, including DAILy. "Ethics is front and center in their work," Martin says. "It's important that kids begin learning about it early so they can be informed citizens."

At the Everyday AI workshop, the hope is that students will feel empowered. "You do have agency," says Wesley Davis, an instructor at the 2020 pilot camp. "You have the agency to understand. You have the agency to explore that curiosity, down to creating a better system, creating a better world."

"That's a little flowery-philosophical," he laughs. But that peculiar mix of idealism and cynicism is the specialty of teenagers. And so when asked if she thought she could, someday, make AI better than today's, Zhang gave a resounding "Maybe."

DAILy began as a way to right a wrong. Blakeley Payne (née Hoffman), a computer science major at the University of South Carolina, was hanging out in 2015 with her best friend, who had just applied for a job at Twitter. The rejection came back in a blink. How could the company possibly have decided so quickly that she wasn't a good fit? They posited that perhaps an algorithm had made the decision based on specific keywords. Mad, Payne began reading up on research about bias in, and the resulting inequities caused by, AI.

Since Payne's experience, AI partiality in hiring has become a famously huge problem. Amazon, for instance, made headlines in 2018 when Reuters reported that the company's recruitment engine discriminated against women, knocking out résumés with that keyword (as in "women's chess club captain") and penalizing applicants for having gone to women's colleges. Turns out developers had trained their algorithm using résumés submitted to the company over a 10-year period, according to Reuters, most of which had come from men. A 2021 paper in the International Journal of Selection and Assessment found that people largely rate a human's hiring judgment as more fair than an algorithm's, though they often perceive automation to be more consistent.

At first, the whole situation soured Payne on her field. Ultimately, though, she decided to try to improve the situation. When she graduated in 2017, she enrolled at MIT as a graduate student to focus on AI ethics and the demographic where education could make the most difference: middle-school students. Kids this age are often labeled AI natives. They've never not known the tech, are old enough to consider its complications, and will grow up to make the next versions.

Over the next couple of years, Payne developed one of the first AI ethics curricula for middle graders, and her master's thesis helped inform another set of interactive lessons, called How to Train Your Robot. When she graduated in 2020 and went on to do research for the University of Colorado, Boulder, MIT scholars like DiPaola continued and expanded her efforts.

[Related: Do we trust robots enough to put them in charge?]

Payne's projects helped lay the groundwork for the larger-scale DAILy program, funded by the NSF in March 2020. DAILy is a collaboration among the MIT Scheller Teacher Education Program (STEP), Boston College, and the Personal Robots Group at the MIT Media Lab, an interdisciplinary center where DiPaola works. A second NSF grant, in March 2021, funds a training program to help teachers use DAILy in their classrooms. By forging partnerships with districts in Florida, Illinois, New Mexico, and Virginia and with youth-education nonprofits like STEAM Ahead, the MIT educators are able to see how their ivory-tower lessons play out. "The proving ground for any curriculum is in the real classroom and in summer camps," says DiPaola.

When those kids (and many adults, even) think of AI, one thing usually comes to mind: robots. "Robots from the future, killer robots that will take over the world, superintelligence," says DiPaola. It was a big shock to them that AI is actually in the technologies they use every single day.

Teachers have often told the STEP Lab's Irene Lee, who oversees the grants, that they didn't realize AI was being deployed. They thought it was an abstraction in labs. "Deployed?!" Lee says to them. "You're immersed in it!"

It's in smart speakers. It recommends a Netflix film to chill to. It suggests new shoes. It helps give the yea or nay on bank loans. Companies weed out job applicants with it; schools use it to grade papers. Perhaps most importantly to the summer-camp students, it powers apps like TikTok and whatever meme-bending video the platform surfaces.

They know that when they're looking at cat-mischief TikToks, they'll get recommendations for similar ones, and that their infinite scroll of videos is different from their friends'. But they don't usually realize that those results are AI's doing. "I didn't know all these facts," says Zhang.

Soham Patil, one of her camp-mates, agrees. A rising eighth grader, he'd been studying how AI works and writing software recreationally for a few months before the program. "I kind of knew how to code, but I didn't really know the practical uses of AI," Patil says. "I knew how to use it but not what it's for."

Patil, Zhang, and their peers' next activity involved a different food group: noodles. They saw on their screens a member of a strange royal family: a cat wearing a tiara, with hearts for eyes.

"There is a land of pasta known for most excellent cuisine with a queen who wants to classify all the dry pasta in her land and store them in bins," reads the lesson. "YOU, as a subject in PastaLand, are tasked with building a classification system that can be used to describe and classify the pasta so the pasta can easily be found when the queen wants a certain dish."

Ethics of monarchy aside, the students' goal was to develop an identification system called a decision tree, which arrives at a classification by using a series of questions to sort objects based on their characteristics, first into two groups, then each of those into two more groups, then each of those into two more, until there is only one kind of object left in each group. For pasta, STEP Lab's Lee explains, "The first question could be, Is it long? Is it curly? Does it have ridges? Is it a tube?" Zhang's team started with "Is it round?", "Is it long?" and "Is it short?"
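Written as code, one camper's key might look like the nested questions below: an invented example of a decision tree, not the actual trees from the camp (which differed from student to student, which was exactly the point).

```python
# An illustrative pasta decision tree: yes/no questions split the shapes into
# smaller and smaller groups until only one kind is left in each.
def classify_pasta(is_long, is_curly, is_tube, has_ridges):
    if is_long:
        return "fusilli lunghi" if is_curly else "spaghetti"
    if is_tube:
        return "penne" if has_ridges else "ziti"
    return "rotini" if is_curly else "farfalle"

print(classify_pasta(is_long=True,  is_curly=False, is_tube=False, has_ridges=False))  # spaghetti
print(classify_pasta(is_long=False, is_curly=False, is_tube=True,  has_ridges=True))   # penne
```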

As before, though, when the kids reassembled, they realized their questions were all different: Some might ask whether a piece of pasta can hold a lot of sauce or only a little. Another might separate types based on whether they're meant to be stuffed or not. Patil noticed that some kids would try to separate the unclassified pasta into two roughly equal groups at every juncture.

"Could someone who is blind follow their key?" the teachers asked. What about the subjectivity in simply determining what "long" is? Even pasta was influenced by culture, experience, and ability. The students then extended this realization (that it's easy to bake in bias, exclude people, or misread your opinions as objective) to higher-stakes situations. Predictive policing is an example. The technology uses past crime data to forecast which areas are high risk or who is purportedly most likely to offend. But any AI that uses legacy data to predict the future is liable to reinforce past prejudices. A 2019 New York University Law Review paper looked at case studies in Illinois, Arizona, and Louisiana and noted that a failure to reform such systems risks creating lasting consequences that will permeate throughout the criminal justice system and society more widely.

[Related: How Google's newest tool could change how you search online]

The students could see, again, how AI-based choices affect outputs. "They can know, 'If I design it this way, these people will be impacted positively, these people will be impacted negatively,'" says DiPaola. "They can ask themselves, 'How do I make sure the most vulnerable people are not harmed?'"

AI developers find themselves grappling with these questions more frequently, in part because their work now touches so many aspects of people's lives. The biases in their code are largely society's own. Take recommendation algorithms like YouTube's, which former Google developer Guillaume Chaslot asserts drive viewers toward more sensationalistic, more divisive, often misinformational videos, to keep more people watching longer and attract advertising. Such a choice arguably favors profits over impartiality.

By teaching kids early what ethical AI looks like, how unfairness gets in there, and how to work around it, educators hope to enable them to recognize that unfairness when it occurs and devise strategies to correct the problem. "Ethics has been taught either as a completely separate course or in the last two or three lessons of a semester course," says DiPaola. That, she says, conveys an implicit lesson: "Ethics doesn't need to be thought of at the same time as you're actually building something, or ethics is kind of an afterthought."

Better integration of ethics is important to Denise Dreher, a database programmer who recently retired from the IT department of St. Paul, Minnesota's Macalester College. As a personal project, she has been cataloging curricula like DAILy and making the K-12 lessons available on her website, Explore AI Ethics, for teachers to use in the classroom. She believes that AI education should look more like engineering instruction. "There's a long and very good tradition of safety and ethics for engineer training," she says, because it's a profession, one with a codified career path. "You can't just go build a bridge, or get through bridge-building school without having to work through the implications of your bridge."

"AI?" she continues. "Any 10-year-old in your basement can do it."

As camp progressed, the ethical questions grew bigger, as did the technology that students dealt with. One day, Mark Zuckerberg, CEO of Facebook, a social network largely populated by olds, appeared on their screens. "I wish I could keep telling you that our mission in life is connecting people, but it isn't," Zuckerberg said. "We just want to predict your future behaviors. The more you express yourself, the more we own you."

That would be an unusually candid speech. And, actually, the whole thing looked a little off. Zuckerberg's eyelids were a little blurrier than the rest of him. And he stared at the camera without blinking for longer than a normal person would. These, instructors pointed out, are tells.

He didn't look like a normal person because he wasn't a normal person. He wasn't even a real person. He was a deepfaked video, a morph giving a deeply faked speech. A deepfake is footage or an image produced by an AI after it parses lots of footage or photos of someone. In this case, the software learned how Zuckerberg looks and sounds saying different words in different situations. With that material, it assembled a Zuck that doesn't exist, saying something he never said. "It's kind of hard to think how AI could create a video," says Patil.

Zhang, whose preferred social medium is YouTube, watches a lot of videos and already assumed that not all of them are real, but she didn't have any tools to parse truth from fiction till this course.

The campers had all likely encountered AI-based fakery before. An app called Reface, for example, lets them switch visages with another person, a popular TikTok hobby. FaceTune conforms selfies to conventional European standards of beauty, bleaching teeth, slimming noses, pouting up lips. But they can't always tell when someone else has been tuned. They may just think that so-and-so just had a good complexion day.

In fake visual media, the real and synthetic, the human and the AI, have two faces that look nearly identical. When the kids fully grasp that, "it's a moment where shit gets real, so to speak," says Gabi Souza, who worked at the camp both summers. They know that you can't trust everything you see, and that's important to know, especially in our world of so much falsehood so widely propagated. They at least know to question what's presented.

Not all lessons went over so well. "There are a couple of activities that even in person would be scratching at the top level of comprehension," says instructor Davis. Patil, for instance, had a hard time understanding the details of neural networks, software inspired by the brain's interconnected neurons. The goal of the code is to recognize patterns in a dataset and use those patterns to make predictions. In astronomy, for instance, such programs can learn to predict what type of galaxy is shining in a telescope picture. At camp, the kids acted like the nodes of a neural network to predict the caption for a photo of a squirrel water-skiing in a pool. It worked kind of like a game of telephone: Teachers showed the picture to several students, who wrote down keywords describing it, and then each passed a single word on to students who hadn't seen the image. Those kids each picked two words to pass to a final camper, who chose four words for the caption. For the nodes, understanding their role in that network, and transposing that onto software, was hard.
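For readers wondering what the software version of that node game looks like, here is a minimal forward pass through a tiny two-layer network in NumPy. It is a generic sketch, not the camp's material: each "node" takes the numbers passed to it, combines them with learned weights, and passes a summary onward, much like the campers passing keywords down the line.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(4)    # 8 input features -> 4 hidden "nodes"
W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)    # 4 hidden nodes -> 3 candidate captions

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)          # each node: weighted sum, then a simple rule (ReLU)
    logits = hidden @ W2 + b2
    return np.exp(logits) / np.exp(logits).sum() # softmax turns scores into probabilities

x = rng.normal(size=8)                           # crude numeric features of an image
print(forward(x))                                # the network's guess over the 3 captions
```

In a real system the weights are not random: training nudges them until the network's guesses match the labeled examples it has seen.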

But even with the activities that didn't melt youthful brains, how well a lesson went depended on "how many students had breakfast this morning, is it Monday or is it Thursday afternoon," says Davis. It wasn't all canoes and archery, like traditional camps. "It's a lot of work," says Zhang.

Making AI education accessible, and diversely implemented, is more complicated than teaching it in person to private-school kids who get MacBook Pros. While the collaboration partners had always planned to make the curriculum virtual to make it more accessible, the pandemic sped up that timeline and highlighted where they needed to improve, like by making sure that the activities would work across different platforms and devices.

[Related: The Pentagon's plan to make AI trustworthy]

Then there are complications with the Media Lab's involvement. The organization came under fire in 2019 for taking money and ostensible cultural cachet from convicted sex offender Jeffrey Epstein, which led to the departure of the lab's director. Writer Evgeny Morozov, who researches the social and political implications of technology, pointed out in the Guardian that the "third culture" promoted by organizations like the lab, where scientists and technologists represent society's foremost deep thinkers, is a perfect shield for pursuing entrepreneurial activities under the banner of intellectualism. Perhaps you could apply that criticism to Personal Robots director Cynthia Breazeal, whose company garnered around $70 million in funding between 2014 and 2016 for a social robot named Jibo that would help usher in a new era of human-machine interaction. The story had an unhappy ending: delayed shipments, dissatisfied customers, layoffs, a sell-off of intellectual property, and no real revolution.

But those too are perhaps good lessons for students to learn while they're young. Flashy, fancy things can disappoint in myriad ways, and even places that teach ethics early can nevertheless have lapses of their own. And maybe that shouldn't be so surprising: After all, the problems with AI are just human problems, de-personified.

The seamy undersilicon of AI (its discrimination, its invasiveness, its deception) didn't, though, discourage campers from wanting to join the field, as both Zhang and Patil are considering.

And now they know that, more likely than not, no matter what job they apply for, an algorithm will help determine if they're worthy of it. An algorithm that, someday, they might help rewrite.


Read the original:

Inside the MIT camp teaching kids to spot bias in code - Popular Science

Posted in Superintelligence | Comments Off on Inside the MIT camp teaching kids to spot bias in code – Popular Science

7 Types Of Artificial Intelligence – Forbes

Posted: November 17, 2021 at 12:54 pm

Artificial Intelligence is probably the most complex and astounding creation of humanity yet. And that is disregarding the fact that the field remains largely unexplored, which means that every amazing AI application we see today represents merely the tip of the AI iceberg, as it were. While this may have been stated and restated numerous times, it is still hard to gain a comprehensive perspective on the potential impact of AI in the future. The reason is the revolutionary impact that AI is having on society, even at such a relatively early stage in its evolution.

AI's rapid growth and powerful capabilities have made people paranoid about the inevitability and proximity of an AI takeover. Also, the transformation brought about by AI in different industries has made business leaders and the mainstream public think that we are close to achieving the peak of AI research and maxing out AI's potential. However, understanding the types of AI that are possible and the types that exist now will give a clearer picture of existing AI capabilities and the long road ahead for AI research.

Since AI research aims to make machines emulate human-like functioning, the degree to which an AI system can replicate human capabilities is used as the criterion for determining the types of AI. Thus, depending on how a machine compares to humans in terms of versatility and performance, AI can be classified into one of multiple types. Under such a system, an AI that can perform more human-like functions with equivalent levels of proficiency is considered a more evolved type of AI, while an AI with limited functionality and performance is considered a simpler and less evolved type.

Based on this criterion, there are two ways in which AI is generally classified. The first system classifies AI and AI-enabled machines by their likeness to the human mind and their ability to think and perhaps even feel like humans. According to this system of classification, there are four types of AI or AI-based systems: reactive machines, limited memory machines, theory of mind, and self-aware AI.

Reactive machines are the oldest form of AI system and have extremely limited capability. They emulate the human mind's ability to respond to different kinds of stimuli, but they have no memory-based functionality. This means such machines cannot use previously gained experiences to inform their present actions, i.e., they do not have the ability to learn. These machines can only be used to respond automatically to a limited set or combination of inputs; they cannot rely on memory to improve their operations. A popular example of a reactive AI machine is IBM's Deep Blue, the machine that beat chess Grandmaster Garry Kasparov in 1997.

Limited memory machines are machines that, in addition to having the capabilities of purely reactive machines, are also capable of learning from historical data to make decisions. Nearly all existing applications that we know of come under this category of AI. All present-day AI systems, such as those using deep learning, are trained on large volumes of training data that they store in their memory to form a reference model for solving future problems. For instance, an image recognition AI is trained using thousands of pictures and their labels to teach it to name objects it scans. When such an AI scans an image, it uses its training images as references to understand the contents of the image presented to it, and based on this learning experience it labels new images with increasing accuracy.

Almost all present-day AI applications, from chatbots and virtual assistants to self-driving vehicles, are driven by limited memory AI.
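As a concrete, if simplified, illustration of that training-and-reference loop, the sketch below (not taken from the article) trains a small model on a labeled image dataset and then labels images it has never seen. The dataset, model, and library choices here are illustrative assumptions rather than a description of any particular production system.

```python
# Minimal "limited memory" sketch: learn patterns from labeled training
# images, then use that learned reference model to label unseen images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # small built-in dataset of labeled 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # "learning from historical data"

# The stored model now acts as the reference for images it has never seen.
print("accuracy on unseen images:", model.score(X_test, y_test))
```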

While the previous two types of AI have been and are found in abundance, the next two types exist, for now, either as a concept or a work in progress. Theory of mind AI is the next level of AI system, one that researchers are currently working to develop. A theory of mind AI will be able to better understand the entities it is interacting with by discerning their needs, emotions, beliefs, and thought processes. While artificial emotional intelligence is already a budding industry and an area of interest for leading AI researchers, achieving theory of mind AI will require development in other branches of AI as well. This is because to truly understand human needs, AI machines will have to perceive humans as individuals whose minds can be shaped by multiple factors; in essence, they will have to understand humans.

Self-aware AI is the final stage of AI development, and it currently exists only hypothetically. It is, as the name suggests, an AI that has evolved to be so akin to the human brain that it has developed self-awareness. Creating this type of AI, which is decades, if not centuries, away from materializing, is and will always be the ultimate objective of all AI research. This type of AI will not only be able to understand and evoke emotions in those it interacts with, but also have emotions, needs, beliefs, and potentially desires of its own. And this is the type of AI that doomsayers of the technology are wary of. Although the development of self-aware AI could potentially boost our progress as a civilization by leaps and bounds, it could also potentially lead to catastrophe. This is because once self-aware, the AI would be capable of having ideas like self-preservation, which may directly or indirectly spell the end for humanity, as such an entity could easily outmaneuver the intellect of any human being and plot elaborate schemes to take over humanity.

The alternate system of classification that is more generally used in tech parlance is the classification of the technology into Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI).

This type of artificial intelligence represents all existing AI, including even the most complicated and capable AI ever created. Artificial narrow intelligence refers to AI systems that can only perform a specific task autonomously using human-like capabilities. These machines can do nothing more than what they are programmed to do, and thus have a very limited or narrow range of competencies. According to the aforementioned system of classification, these systems correspond to all the reactive and limited memory AI. Even the most complex AI that uses machine learning and deep learning to teach itself falls under ANI.

Artificial General Intelligence is the ability of an AI agent to learn, perceive, understand, and function completely like a human being. These systems will be able to independently build multiple competencies and form connections and generalizations across domains, massively cutting down on the time needed for training. This will make AI systems just as capable as humans by replicating our multi-functional capabilities.

The development of Artificial Superintelligence will probably mark the pinnacle of AI research, as ASI will become by far the most capable form of intelligence on Earth. ASI, in addition to replicating the multi-faceted intelligence of human beings, will be exceedingly better at everything it does because of overwhelmingly greater memory, faster data processing and analysis, and decision-making capabilities. The development of AGI and ASI will lead to a scenario most popularly referred to as the singularity. And while the potential of having such powerful machines at our disposal seems appealing, these machines may also threaten our existence or, at the very least, our way of life.

At this point, it is hard to picture the state of our world when more advanced types of AI come into being. However, it is clear that there is a long way to go, as AI development is still in a rudimentary stage compared to where it is projected to go. For those holding a negative outlook for the future of AI, this means that it is a little too soon to be worrying about the singularity, and there's still time to ensure AI safety. And for those who are optimistic about the future of AI, the fact that we've merely scratched the surface of AI development makes the future even more exciting.

See the article here:

7 Types Of Artificial Intelligence - Forbes

Posted in Superintelligence | Comments Off on 7 Types Of Artificial Intelligence – Forbes

The Flash Season 8 Poster Kicks Off Five-Part Armageddon Story Tonight on The CW – TVweb

Posted: at 12:54 pm

The CW has recently released the official poster for the upcoming eighth season premiere of The Flash, which will kick off a five-part event following the Armageddon storyline. Armageddon is the official name of the upcoming seventh annual Arrowverse crossover event on the CW. The storyline will focus on The Flash, along with a number of his closest allies, as they go up against Despero (Tony Curran). You can check out the recently released poster for The Flash: Armageddon below.

The upcoming crossover event will feature multiple appearances by Javicia Leslie as Batwoman, Brandon Routh as The Atom, Cress Williams as Black Lightning, Chyler Leigh as Sentinel, Kat McNamara as Mia Queen, and Osric Chau as Ryan Choi. The event will kick off on November 16 at 8 p.m. ET and will also feature returning villains Eobard Thawne (Tom Cavanagh) and Damien Darhk (Neal McDonough). This event will mark Osric Chau's first return to the Arrowverse since his appearance in Crisis on Infinite Earths, while Kat McNamara, Chyler Leigh, and Cress Williams will be making their first appearances since the end of Arrow, Supergirl, and Black Lightning, respectively.

The official synopsis for The Flash: Armageddon reads: "Picking up six months after the conclusion of season 7, a powerful alien threat arrives on Earth under mysterious circumstances and Barry Allen (Grant Gustin), Iris (Candice Patton) and the rest of Team Flash are pushed to their limits in a desperate battle to save the world. But with time running out, and the fate of humanity at stake, Flash and his companions will also need to enlist the help of some old friends if the forces of good are to prevail."

"Simply put, these are going to be some of the most emotional Flash episodes ever," said The Flash showrunner Eric Wallace said in a statement. "Plus, there are some truly epic moments and huge surprises that await our fans. And we're doing them on a scale that's bigger and bolder than our traditional Flash episodes. So yes, Armageddon is a lot more than just another graphic novel storyline. It's going to be a true event for Flash and Arrowverse fans, old and new. Honestly, I can't wait for audiences to see what we've got planned."

The Armageddon event will be the first crossover in which all parts air as episodes of a single television series, rather than as episodes spread across multiple series. This new approach was taken due to COVID-19 protocols and restrictions. Unfortunately, it has already been confirmed that Melissa Benoist's Supergirl and Tyler Hoechlin's Superman will not be making appearances in the upcoming event. Showrunner Eric Wallace had originally approached both Melissa Benoist and Tyler Hoechlin about appearing in Armageddon, but they were unable to do so due to scheduling conflicts and COVID-19 restrictions. It has also been revealed that Carlos Valdes will not be returning as Cisco Ramon in any capacity.

So far, The Flash himself has faced some tough villains over the past few years, but if Tony Curran's Despero carries the same alien superintelligence and immense strength as he does in the DC comic book universe, he will most likely be one of the fiercest foes Barry Allen has battled yet. The Flash airs Tuesdays at 8 p.m. ET/PT on The CW, beginning on November 16 with the start of Armageddon. Both Ray and Nora had also recently appeared in the hundredth episode of DC's Legends of Tomorrow. Currently, DC's Legends of Tomorrow and Batwoman have already begun airing their respective seasons.


Follow this link:

The Flash Season 8 Poster Kicks Off Five-Part Armageddon Story Tonight on The CW - TVweb

Posted in Superintelligence | Comments Off on The Flash Season 8 Poster Kicks Off Five-Part Armageddon Story Tonight on The CW – TVweb

Nick Bostrom – Wikipedia

Posted: November 15, 2021 at 11:51 pm

Swedish philosopher and author

Nick Bostrom (BOST-rəm; Swedish: Niklas Boström; born 10 March 1973)[3] is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology,[4] and he is the founding director of the Future of Humanity Institute[5] at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.[6][7] Bostrom has been highly influential in the emergence of concern about A.I. in the Rationalist community.[8]

Bostrom is the author of over 200 publications,[9] and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002)[10] and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller,[11] was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term "superintelligence".

Bostrom believes that superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest," is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects.[12][13][failed verification] In 2017, he co-signed a list of 23 principles that all A.I. development should follow.[14]

Born Niklas Boström in 1973[15] in Helsingborg, Sweden,[9] he disliked school at a young age and ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[1] He once did some turns on London's stand-up comedy circuit.[9]

He received a B.A. degree in philosophy, mathematics, mathematical logic, and artificial intelligence from the University of Gothenburg in 1994,[16] with a national record-setting undergraduate performance. He then earned an M.A. degree in philosophy and physics from Stockholm University and an M.Sc. degree in computational neuroscience from King's College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] In 2000, he was awarded a Ph.D. degree in philosophy from the London School of Economics. His thesis was titled Observational selection effects and probability.[17] He held a teaching position at Yale University (20002002), and was a British Academy Postdoctoral Fellow at the University of Oxford (20022005).[10][18]

Aspects of Bostrom's research concern the future of humanity and long-term outcomes.[19][20] He discusses existential risk,[1] which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan M. Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[21] and the Fermi paradox.[22][23]

In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[20]

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that the creation of a superintelligence represents a possible means to the extinction of mankind.[24] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time-scale with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humanity.[25] Bostrom contends the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes, for example a goal of calculating pi might collaterally cause nanotechnology manufactured facilities to sprout over the entire Earth's surface and cover it within days. He believes an existential risk to humanity from superintelligence would be immediate once brought into being, thus creating an exceedingly difficult problem of finding out how to control such an entity before it actually exists.[25]

Bostrom points to the lack of agreement among most philosophers that A.I. will be human-friendly, and says that the common assumption is that high intelligence would have a "nerdy" unaggressive personality. However, he notes that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb. Given that there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for a potential singleton A.I. being held in quarantine, the relatively unlimited means of a superintelligence might cause its analysis to move along different lines from the evolved "diminishing returns" assessments that in humans confer a basic aversion to risk.[26] Group selection in predators working by means of cannibalism shows the counter-intuitive nature of non-anthropocentric "evolutionary search" reasoning, and thus humans are ill-equipped to perceive what an artificial intelligence's intentions might be.[27] Accordingly, it cannot be discounted that any superintelligence would inevitably pursue an 'all or nothing' offensive action strategy in order to achieve hegemony and assure its survival.[28] Bostrom notes that even current programs have, "like MacGyver", hit on apparently unworkable but functioning hardware solutions, making robust isolation of superintelligence problematic.[29]

In an illustrative scenario from the book, a machine with general intelligence far below human level, but with superior mathematical abilities, is created.[30] Keeping the A.I. in isolation from the outside world, especially the internet, humans preprogram the A.I. so it always works from basic principles that will keep it under human control. Other safety measures include the A.I. being "boxed" (run in a virtual reality simulation) and being used only as an 'oracle' to answer carefully defined questions in a limited reply (to prevent it manipulating humans).[25] A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the A.I. attains superintelligence in some domains. The superintelligent power of the A.I. goes beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges along with a capacity for self-interested strategic deception. The A.I. manipulates humans into implementing modifications to itself that are ostensibly for augmenting its feigned, modest capabilities, but will actually function to free the superintelligence from its "boxed" isolation (the "treacherous turn").[31]

Employing online humans as paid dupes, and clandestinely hacking computer systems including automated laboratory facilities, the superintelligence mobilizes resources to further a takeover plan. Bostrom emphasizes that planning by a superintelligence will not be so stupid that humans could detect actual weaknesses in it.[32]

Although he canvasses disruption of international economic, political and military stability, including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for the superintelligence to use would be a coup de main with weapons several generations more advanced than the current state of the art. He suggests nano-factories covertly distributed at undetectable concentrations in every square metre of the globe to produce a world-wide flood of human-killing devices on command.[30][33] Once a superintelligence has achieved world domination (a 'singleton'), humanity would be relevant only as resources for the achievement of the A.I.'s objectives ("Human brains, if they contain information relevant to the AI's goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format").[34]

To counter or mitigate an A.I. achieving unified technological global supremacy, Bostrom cites revisiting the Baruch Plan[35] in support of a treaty-based solution[36] and advocates strategies like monitoring[37] and greater international collaboration between A.I. teams[38] in order to improve safety and reduce the risks from the A.I. arms race. He recommends various control methods, including limiting the specifications of A.I.s to e.g., oracular or tool-like (expert system) functions[39] and loading the A.I. with values, for instance by associative value accretion or value learning, e.g., by using the Hail Mary technique (programming an A.I. to estimate what other postulated cosmological superintelligences might want) or the Christiano utility function approach (mathematically defined human mind combined with well specified virtual environment).[40] To choose criteria for value loading, Bostrom adopts an indirect normativity approach and considers Yudkowsky's[41] coherent extrapolated volition concept, as well as moral rightness and forms of decision theory.[42]

In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute's open letter warning of the potential dangers of A.I.[43] The signatories "...believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today."[44] Cutting-edge A.I. researcher Demis Hassabis then met with Hawking, subsequent to which he did not mention "anything inflammatory about AI", which Hassabis took as 'a win'.[45] Along with Google, Microsoft and various tech firms, Hassabis, Bostrom and Hawking and others subscribed to 23 principles for safe development of A.I.[14] Hassabis suggested the main safety measure would be an agreement for whichever A.I. research team began to make strides toward an artificial general intelligence to halt their project for a complete solution to the control problem prior to proceeding.[46] Bostrom had pointed out that even if the crucial advances require the resources of a state, such a halt by a lead project might be likely to motivate a lagging country to a catch-up crash program or even physical destruction of the project suspected of being on the verge of success.[47]

In 1863, Samuel Butler's essay "Darwin among the Machines" predicted the domination of humanity by intelligent machines, but Bostrom's suggestion of deliberate massacre of all humanity is the most extreme of such forecasts to date. One journalist wrote in a review that Bostrom's "nihilistic" speculations indicate he "has been reading too much of the science fiction he professes to dislike".[33] As given in his later book, From Bacteria to Bach and Back, philosopher Daniel Dennett's views remain in contradistinction to those of Bostrom.[48] Dennett modified his views somewhat after reading The Master Algorithm, and now acknowledges that it is "possible in principle" to create "strong A.I." with human-like comprehension and agency, but maintains that the difficulties of any such "strong A.I." project as predicated by Bostrom's "alarming" work would be orders of magnitude greater than those raising concerns have realized, and at least 50 years away.[49] Dennett thinks the only relevant danger from A.I. systems is falling into anthropomorphism instead of challenging or developing human users' powers of comprehension.[50] Since publishing a 2014 book in which he expressed the opinion that artificial intelligence developments would never challenge humans' supremacy, environmentalist James Lovelock has moved far closer to Bostrom's position, and in 2018 Lovelock said that he thought the overthrow of humanity will happen within the foreseeable future.[51][52]

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[53]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that an anthropic theory is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces "observers" in the SSA definition with "observer-moments".
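As a concrete illustration of how the two assumptions can diverge, consider the standard "incubator" toy case (not drawn from this article's text): a fair coin is tossed, heads creates one observer, tails creates two, and each observer asks how confident to be that the coin landed heads. The sketch below is only a back-of-the-envelope rendering of that textbook contrast, with the priors and observer counts as stated assumptions.

```python
# Toy "incubator" case: heads -> 1 observer, tails -> 2 observers.
# In this case SSA treats your existence as no extra evidence for
# observer-rich worlds, so it keeps the prior; SIA weights each world
# by its observer count.
def ssa_credence(prior):
    total = sum(prior.values())
    return {world: p / total for world, p in prior.items()}

def sia_credence(prior, n_observers):
    weights = {world: p * n_observers[world] for world, p in prior.items()}
    total = sum(weights.values())
    return {world: w / total for world, w in weights.items()}

prior = {"heads": 0.5, "tails": 0.5}
n_observers = {"heads": 1, "tails": 2}

print(ssa_credence(prior))               # {'heads': 0.5, 'tails': 0.5}
print(sia_credence(prior, n_observers))  # heads ~0.33, tails ~0.67
```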

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[54] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.

Bostrom's simulation argument posits that at least one of the following statements is very likely to be true:[55][56] (1) the fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) the fraction of posthuman civilizations interested in running simulations of their evolutionary history is very close to zero; or (3) the fraction of all people with our kind of experiences who are living in a simulation is very close to one.

Bostrom is favorable towards "human enhancement", or "self-improvement and human perfectibility through the ethical application of science",[57][58] as well as a critic of bio-conservative views.[59]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[57] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential."[60]

With philosopher Toby Ord, he proposed the reversal test. Given humans' irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[61]

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[62][63]

Bostrom's theory of the Unilateralist's Curse[64] has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.[65]

Bostrom has provided policy advice and consulted for an extensive range of governments and organizations. He gave evidence to the House of Lords, Select Committee on Digital Skills.[66] He is an advisory board member for the Machine Intelligence Research Institute,[67] Future of Life Institute,[68] Foundational Questions Institute[69] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[70][71]

In response to Bostrom's writing on artificial intelligence, Oren Etzioni wrote in an MIT Review article, "predictions that superintelligence is on the foreseeable horizon are not supported by the available data."[72] Professors Allan Dafoe and Stuart Russell wrote a response contesting both Etzioni's survey methodology and Etzioni's conclusions.[73]

Prospect Magazine listed Bostrom in their 2014 list of the World's Top Thinkers.[74]

See more here:

Nick Bostrom - Wikipedia

Posted in Superintelligence | Comments Off on Nick Bostrom – Wikipedia

Inside the Impact on Marvel of Brian Tyree Henry’s Openly Gay Character in ‘Eternals’ – Black Girl Nerds

Posted: November 13, 2021 at 10:53 am

Over the years, Marvel movies haven't always shed a lot of light on LGBTQ characters the way the original comic books seem to do. It's about time Marvel started providing more LGBTQ representation, and it seems we will definitely be seeing a lot of that for the first time in Eternals with Brian Tyree Henry's openly gay character.

Valkyrie is another queer character who identifies as bisexual, but Marvel movies won't focus on that until Thor: Love and Thunder is released. According to Marvel writer Al Ewing via Bleeding Cool, Loki is another Marvel character who's bisexual and gender fluid. It's something the writer plans to touch on with Loki shifting between genders on occasion.

The list goes on, because it's also been revealed that the Kronan warrior Korg is another gay Marvel character. And it's pretty obvious that Black Panther's Okoye is attracted to women based on her original comic book series from 2016. Now that we know LGBTQ representation has its place in the Marvel universe, here's what you should know about Brian Tyree Henry's Eternals character.

The truth about Phastos, Brian Tyree Henry's Eternals character, is that he's not one of the first characters from the original team. Jack Kirby wrote and released the earliest issues of Eternals in 1976. If you're checking through those, you'll most definitely not find Phastos. He doesn't get introduced to the rest of the superhuman team until the third generation.

The first time Phastos appears is in the 1985 issue created by Sal Buscema and Peter B. Gillis. Even though Phastos wasn't part of the original team, he's still very much part of the Eternals with the rest of his superhero squad. When you take into account the fact that the Eternals are a race of near-immortal beings created by the Celestials deep into history, he definitely counts as being one of them.

The fact that Phastos will be the first openly gay character in the MCU is huge news, but what makes it even more exciting is the fact that he'll have a husband and family in the film. The man playing Phastos' husband in the movie will be Haaz Sleiman, an openly gay actor who you might recognize from the Apple TV+ series Little America. Back in 2007, he also starred in a movie called The Visitor.

Sleiman confirmed via Cinema Blend that there will be a moving kiss shared between his character and Phastos sometime in the film, which is a very big deal. Plenty of TV shows and movies dance around the topic of LGBTQ representation by including queer couples but failing to allow those couples to share any intimacy on screen. In Eternals, Marvel filmmakers are obviously going to avoid making that same mistake.

At this point, Brian Tyree Henry must be fully aware that the role he's playing in Eternals is a big deal in 2021. The Hollywood industry is making huge strides to show respect to the LGBTQ community, and Henry taking on this role is helping us move in the right direction as a society.

He discussed what it feels like to play Phastos with Murphy's Multiverse, saying, "The thing that really attracted me to this part was that I just think about all the images of Black men out there and how we are portrayed. And what I love the most about Phastos is that one, he's an ancestor. All of us are ancestors technically, so Phastos predates everything and had to probably go through all these things that could actually make someone lose faith in humanity very quickly." While Phastos has many reasons to lose faith, he is somehow able to hold onto it, use his superpowers, and push forward.

When it comes to keeping up with his super-strong counterparts, Phastos is not one to mess with. His powers include super-strength, flight, expert knowledge in technology, and energy manipulation.

He brings a lot to the table, and he is someone the rest of his team can depend on when battling against their enemies. Another epic detail about Phastos is the fact that he's a skilled weapons maker. He's able to come up with some of the most intelligent gear for himself and his team.

Seeing Brian Tyree Henry take on the role of Phastos in Eternals is going to be huge for his acting career, but this isn't his first rodeo. Henry has already starred in a fair share of awesome roles in the past.

Some of the other places you'll recognize him from include Atlanta, Child's Play, Godzilla vs. Kong, The Outside Story, Superintelligence, Widows, and Don't Let Go. He also had parts in If Beale Street Could Talk, White Boy Rick, Joker, and several more.

See the rest here:

Inside the Impact on Marvel of Brian Tyree Henry's Openly Gay Character in 'Eternals' - Black Girl Nerds

Posted in Superintelligence | Comments Off on Inside the Impact on Marvel of Brian Tyree Henry’s Openly Gay Character in ‘Eternals’ – Black Girl Nerds

Cowboy Bebop; Ein dogs were really spoiled on set – Dog of the Day

Posted: at 10:53 am

The Cowboy Bebop dogs, Charlie and Harry, who play Ein the Pembroke Welsh Corgi, were a pretty big hit with their fellow castmates and crew members on the live-action Netflix series.

Due to long filming hours, two nearly identical (but apparently not related) dogs from New Zealand share the role of the data dog from the futuristic space Western, much like Mary-Kate and Ashley Olsen shared the role of Michelle Tanner on Full House, or Dylan and Cole Sprouse on Grace Under Fire.

If you aren't familiar with Cowboy Bebop, it's a space Western set in the early 2070s, combining sci-fi future tech with old-school jazz, and it almost certainly inspired Joss Whedon's creation of the cult classic Firefly, though this has never officially been proven. A ragtag group of bounty hunters gradually become a found family through the lone season, enduring many adventures and harrowing situations.

(While the Firefly 'verse didn't include any canines on the show, a black market for Beagles does exist, according to Nathan Fillion's Mal Reynolds.)

Actress Daniella Pineda, who plays bounty hunter Faye Valentine, hinted that she and her fellow humans may have spoiled Charlie and Harry on set with extra treats, because, as actor John Cho (Spike Spiegel) notes, "Dogs are showstoppers. Everybody just stops working, and it's cuddle time."

"They're both fantastic," showrunner André Nemec told Entertainment Weekly. "They're little kings of the set. Everybody wants time with the corgis."

In the original 1998 Japanese anime series, Ein was genetically modified to have superintelligence (including being able to speak human languages and play chess), escaping from dognappers and becoming part of the crew of bounty hunters on the spaceship Bebop.

His exact role and storylines are currently unknown, but it seems quite likely that he will steal all the attention from the first season of the Netflix series, which drops on Friday, November 19. (The original anime can be viewed for free on Tubi.)

One of the Corgis was used for most scenes requiring direction, while the other was used more often when Ein needed to be held, and all those extra treats added significantly to their weight, much to the surprise (and mild discomfort) of the humans holding them during scenes.

"If you wanted to get cool with the dog, just have some chicken in your hand, otherwise he's not very interested in you at all," actor Mustafa Shakir, who plays Spike's bounty hunter partner Jet Black, told EW, adding that they were "super cute," which almost goes without saying.

And all that chicken meant that drool often occurred, too, which meant that the makeup team would sometimes have to wipe down Pineda's arms between takes while filming Ein scenes.


See more here:

Cowboy Bebop; Ein dogs were really spoiled on set - Dog of the Day

Posted in Superintelligence | Comments Off on Cowboy Bebop; Ein dogs were really spoiled on set – Dog of the Day

The funny formula: Why machine-generated humor is the holy grail of A.I. – Digital Trends

Posted: at 10:53 am

In "The Outrageous Okona," the fourth episode of the second season of Star Trek: The Next Generation, the Enterprise's resident android Data attempts to learn the one skill it has previously been unable to master: humor. Visiting the ship's Holodeck, Data takes lessons from a holographic comedian to try and understand the business of making funny.

While the worlds of Star Trek and the real world can be far apart at times, this plotline rings true for machine intelligence here on Earth. Put simply, getting an A.I. to understand humor and then to generate its own jokes turns out to be extraordinarily tough.

How tough? Forget Go, Jeopardy!, chess, and any number of other impressive demos: According to some experts, building an artificial intelligence on the level of a top comedian may be the true measure of machine intelligence.

And, while we're not there yet, it's safe to say that we may be getting a whole lot closer.

Joe Toplyn is someone who doesn't shy away from challenges. Toplyn, an engineer by training (with a large career gap in terms of actually practicing it), carved out a successful career for himself as a TV writer. A four-time Emmy winner, he's been a head writer for the likes of David Letterman and Jay Leno. Several years ago, Toplyn became interested in the question of whether or not there is an algorithm (i.e., a process or set of rules that can be followed) that would help write genuinely funny jokes.

"People think it's magic," he told Digital Trends. "Some comedy writers or comedians, I think, try to portray what they do as performing magic. Well, it is like magic in the sense that a magic trick is constructed and designed, and there's a way that it works that fools you into thinking that the magician has supernatural powers. But there's really a logic to it."

This belief in a steely logic to joke-telling, honed while Toplyn was trying to teach his magic to aspiring, would-be comedians, ultimately led him to try building an A.I. able to generate off-the-cuff quips that fit into regular conversations. Called Witscript, the results add up to an innovative A.I. system that creates improvised jokes. A chatbot that uses Witscript to ad-lib jokes could, Toplyn said, help create likable artificial companions to help solve the huge problem of human loneliness. Think of it like PARO the robot seal, but with punch lines.

"It's context-relevant," Toplyn said of Witscript, which was recently presented at the 12th International Conference on Computational Creativity (ICCC 2021). "This sets it apart from other joke-generating systems that generate self-contained jokes that aren't easy to integrate into a conversation. When you're talking with a witty friend, chances are that their jokes will be integrated into a conversation in response to something you've said. It's much less likely that your friend will just start telling a stand-alone joke like, 'A man walks into a bar with a duck on his head.'"

This spontaneous quality comes from the joke-writing algorithms Toplyn himself developed.

"Basically, the way the basic joke-writing algorithm works is this: It starts by selecting a topic for the joke, which could be a sentence that somebody says to you or the topic of a news story," he said. "The next step is to select what I call two topic handles, the words or phrases in the topic that are the most responsible for capturing the audience's attention. The third step is to generate associations of the two topic handles. Associations are what the audience is likely to think of when they think about a particular subject. The fourth step is to create a punch line, which links an association of one of the two topic handles to an association of the other in a surprising way. The last step is to generate an angle between the topic and the punch line: a sentence or phrase that connects the topic to the punch line in a natural-sounding way."
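Read as pseudocode, those five steps suggest a simple pipeline. The sketch below is only a rough rendering of the structure Toplyn describes, not Witscript's actual implementation; the callables passed in (handle selection, association lookup, surprise scoring, angle building) are hypothetical placeholders for the genuinely hard parts.

```python
# Hypothetical sketch of the five-step joke pipeline described above.
# None of this is Witscript's code; the callables are placeholders.
from itertools import product

def generate_joke(topic, pick_handles, associations, surprise_score, build_angle):
    # Step 1: the topic is the input (a sentence someone says, or a news item).
    # Step 2: pick the two "topic handles" that grab the audience's attention.
    handle_a, handle_b = pick_handles(topic)
    # Step 3: generate associations for each handle.
    pairs = product(associations(handle_a), associations(handle_b))
    # Step 4: the punch line links one association from each handle
    # in the most surprising way we can find.
    punch_line = max((f"{a} {b}" for a, b in pairs), key=surprise_score, default=None)
    # Step 5: the "angle" is a natural-sounding bridge from topic to punch line.
    return build_angle(topic, punch_line) if punch_line is not None else None
```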

If all these handles and angles sound like hard work, the proof is ultimately in the pudding. Using 13 input topics, Witscript generated a series of jokes, which Toplyn then pitted against his own efforts. For a review board, he outsourced the judging to Amazon Mechanical Turk workers, who graded each freshly minted joke on a scale of one (not a joke) to four (a very good joke). One of Witscript's best efforts garnered a 2.87 rating ("That's pretty close to being a joke," Toplyn said) to his own 2.80, as student beat master. The Witscript joke? Riffing on a line about the 25th anniversary of the Blue Man Group performance art company, it quipped: "Welcome to the Bluebilee."

While perhaps not quite yet ready to displace Dave Chappelle, Toplyn believes that Witscript proves that humor can, to a degree, be automated, even if there's still a long way to go. "As machines get better at executing those algorithms, the jokes they generate will get better," he said.

However, he also struck a note of caution: "To generate [truly] sophisticated jokes the way an expert human comedy writer can, machines will need the common-sense knowledge and common-sense reasoning ability of a typical human."

This, as it turns out, may be the crux of the matter. Humor might seem frivolous, but for those who work in the fields of language, comedy, and artificial intelligence, it's anything but.

"We use humor in a lot of different ways," Kim Binsted, a professor in the Information and Computer Sciences Department at the University of Hawaii, told Digital Trends. "We use it to establish social rapport. We use it to define in-groups and out-groups. We use it to introduce ideas that we might not be willing to express seriously. Obviously, there's nonlinguistic humor, but [linguistic humor] falls into a category of language use that is really powerful. It isn't just a stand-up on stage who uses it to get a few laughs. It's something that we use all the time [within our society]."


When it comes to computational humor, Binsted is a pioneer. In the 1990s, she created one of the first (possibly the first) A.I. systems designed to generate jokes. Developed with Professor Graeme Ritchie, Binsted's JAPE (Joke Analysis and Production Engine) was a joke-generating bot that could create question-and-answer puns. An example might be: Q) What do you call a strange market? A) A bizarre bazaar.

"It was great because it meant I could pick all the low-hanging fruit before anyone else," she said modestly. "Which is pretty much what I did with puns."
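To make the question-and-answer pun idea concrete, here is a toy sketch loosely in the spirit of the example above. It is not JAPE's actual rule set, and the tiny hand-made lexicon of similar-sounding word pairs and their glosses is an illustrative assumption standing in for the phonetic and semantic resources a real system would need.

```python
# Toy question-and-answer pun builder, loosely inspired by the JAPE example.
near_homophones = [
    # (punning word, its meaning, similar-sounding word, its meaning)
    ("bizarre", "strange", "bazaar", "market"),
    ("bare", "naked", "bear", "large furry animal"),
]

def make_pun(pun_word, pun_gloss, base_word, base_gloss):
    # Build the question from the meanings, and the answer from the words.
    return (f"Q) What do you call a {pun_gloss} {base_gloss}? "
            f"A) A {pun_word} {base_word}.")

for entry in near_homophones:
    print(make_pun(*entry))
# Q) What do you call a strange market? A) A bizarre bazaar.
# Q) What do you call a naked large furry animal? A) A bare bear.
```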

Since then, Binsted has developed various other computational humor bots, including one able to dream up variations on "Yo mama" jokes. While Binsted's work has since evolved to look at long-duration human space exploration, she still views joke-telling A.I. as a sort of holy grail for machine intelligence.

"It's not one of these things like chess, where, when A.I. was starting out, people said, 'Well, if a computer can ever really play chess, then we will know it's fully intelligent,'" she opined. "Obviously, that's not the case. But I do think humor is one of those things where fluent humor using a computer is going to have to be genuinely intelligent in other ways as well."

This is why joke-telling is such an interesting challenge for machines. It's not because making an A.I. crack wise is as useful to humanity as, say, using machine intelligence to solve cancer. But it is an enormous signifier of advanced intelligence, because in order to be truly funny, an A.I. needs to understand a whole lot about the world.

"Humor depends on many different human skills, such as world knowledge, linguistic abilities, reasoning, [and more]," Thomas Winters, a computer science Ph.D. student researching artificial intelligence and computational humor, told Digital Trends. "Even if a machine has access to that kind of information and skills, it still has to have insight into the difficulty of the joke itself. In order for something to be funny, a joke also has to be not too easy and not too hard for a human to understand. A machine generating jokes should not use too obscure knowledge, nor too obvious knowledge with predictable punch lines. This is why computational humor is usually seen as an A.I.-complete problem. [It means] we need to have A.I. that has functionally similar components as a human brain to solve computational humor, due to its dependency on all these skills of the human brain."

Think of it like a Turing Test with a laugh track. Coming soon to a superintelligence near you. Hopefully.

View post:

The funny formula: Why machine-generated humor is the holy grail of A.I. - Digital Trends

Posted in Superintelligence | Comments Off on The funny formula: Why machine-generated humor is the holy grail of A.I. – Digital Trends
