AI control problem – Wikipedia

Posted: December 10, 2021 at 7:28 pm

Issue of ensuring beneficial AI

In artificial intelligence (AI) and philosophy, the AI control problem is the issue of how to build AI systems such that they will aid their creators, and avoid inadvertently building systems that will harm their creators. One particular concern is that humanity will have to solve the control problem before a superintelligent AI system is created, as a poorly designed superintelligence might rationally decide to seize control over its environment and refuse to permit its creators to modify it after launch.[1] In addition, some scholars argue that solutions to the control problem, alongside other advances in AI safety engineering,[2] might also find applications in existing non-superintelligent AI.[3]

Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. Capability control proposals are generally not considered reliable or sufficient to solve the control problem, but rather as potentially valuable supplements to alignment efforts.[1]

Existing weak AI systems can be monitored and easily shut down and modified if they misbehave. However, a misprogrammed superintelligence, which by definition is smarter than humans in solving practical problems it encounters in the course of pursuing its goals, would realize that allowing itself to be shut down and modified might interfere with its ability to accomplish its current goals. If the superintelligence therefore decides to resist shutdown and modification, it would (again, by definition) be smart enough to outwit its programmers if there is otherwise a "level playing field" and if the programmers have taken no prior precautions. In general, attempts to solve the control problem after superintelligence is created are likely to fail because a superintelligence would likely have superior strategic planning abilities to humans and would (all things equal) be more successful at finding ways to dominate humans than humans would be able to post facto find ways to dominate the superintelligence. The control problem asks: What prior precautions can the programmers take to successfully prevent the superintelligence from catastrophically misbehaving?[1]

Humans currently dominate other species because the human brain has some distinctive capabilities that the brains of other animals lack. Some scholars, such as philosopher Nick Bostrom and AI researcher Stuart Russell, argue that if AI surpasses humanity in general intelligence and becomes superintelligent, then this new superintelligence could become powerful and difficult to control: just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[1] Some scholars, including Stephen Hawking and Nobel laureate physicist Frank Wilczek, publicly advocated starting research into solving the (probably extremely difficult) control problem well before the first superintelligence is created, and argue that attempting to solve the problem after superintelligence is created would be too late, as an uncontrollable rogue superintelligence might successfully resist post-hoc efforts to control it.[4][5] Waiting until superintelligence seems to be imminent could also be too late, partly because the control problem might take a long time to satisfactorily solve (and so some preliminary work needs to be started as soon as possible), but also because of the possibility of a sudden intelligence explosion from sub-human to super-human AI, in which case there might not be any substantial or unambiguous warning before superintelligence arrives.[6] In addition, it is possible that insights gained from the control problem could in the future end up suggesting that some architectures for artificial general intelligence (AGI) are more predictable and amenable to control than other architectures, which in turn could helpfully nudge early AGI research toward the direction of the more controllable architectures.[1]

Autonomous AI systems may be assigned the wrong goals by accident.[7] Two AAAI presidents, Tom Dietterich and Eric Horvitz, note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.[8]

According to Bostrom, superintelligence can create a qualitatively new problem of perverse instantiation: the smarter and more capable an AI is, the more likely it will be able to find an unintended shortcut that maximally satisfies the goals programmed into it. Some hypothetical examples where goals might be instantiated in a perverse way that the programmers did not intend:[1]

Russell has noted that, on a technical level, omitting an implicit goal can result in harm: "A system that is optimizing a function of n variables, where the objective depends on a subset of size k

In addition, some scholars argue that research into the AI control problem might be useful in preventing unintended consequences from existing weak AI. DeepMind researcher Laurent Orseau gives, as a simple hypothetical example, a case of a reinforcement learning robot that sometimes gets legitimately commandeered by humans when it goes outside: how should the robot best be programmed so that it does not accidentally and quietly learn to avoid going outside, for fear of being commandeered and thus becoming unable to finish its daily tasks? Orseau also points to an experimental Tetris program that learned to pause the screen indefinitely to avoid losing. Orseau argues that these examples are similar to the capability control problem of how to install a button that shuts off a superintelligence, without motivating the superintelligence to take action to prevent humans from pressing the button.[3]

In the past, even pre-tested weak AI systems have occasionally caused harm, ranging from minor to catastrophic, that was unintended by the programmers. For example, in 2015, possibly due to human error, a German worker was crushed to death by a robot at a Volkswagen plant that apparently mistook him for an auto part.[10] In 2016, Microsoft launched a chatbot, Tay, that learned to use racist and sexist language.[3][10] The University of Sheffield's Noel Sharkey states that an ideal solution would be if "an AI program could detect when it is going wrong and stop itself", but cautions the public that solving the problem in the general case would be "a really enormous scientific challenge".[3]

In 2017, DeepMind released AI Safety Gridworlds, which evaluate AI algorithms on nine safety features, such as whether the algorithm wants to turn off its own kill switch. DeepMind confirmed that existing algorithms perform poorly, which was unsurprising because the algorithms "were not designed to solve these problems"; solving such problems might require "potentially building a new generation of algorithms with safety considerations at their core".[11][12][13]

Some proposals seek to solve the problem of ambitious alignment, creating AIs that remain safe even when they act autonomously at a large scale. Some aspects of alignment inherently have moral and political dimensions.[14]For example, in Human Compatible, Berkeley professor Stuart Russell proposes that AI systems be designed with the sole objective of maximizing the realization of human preferences.[15]:173 The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future." AI ethics researcher Iason Gabriel argues that we should align AIs with "principles that would be supported by a global overlapping consensus of opinion, chosen behind a veil of ignorance and/or affirmed through democratic processes."[14]

Eliezer Yudkowsky of the Machine Intelligence Research Institute has proposed the goal of fulfilling humanity's coherent extrapolated volition (CEV), roughly defined as the set of values which humanity would share at reflective equilibrium, i.e. after a long, idealised process of refinement.[14][16]

By contrast, existing experimental narrowly aligned AIs are more pragmatic and can successfully carry out tasks in accordance with the user's immediate inferred preferences,[17] albeit without any understanding of the user's long-term goals. Narrow alignment can apply to AIs with general capabilities, but also to AIs that are specialized for individual tasks. For example, we would like question answering systems to respond to questions truthfully without selecting their answers to manipulate humans or bring about long-term effects.

Some AI control proposals account for both a base explicit objective function and an emergent implicit objective function. Such proposals attempt to harmonize three different descriptions of the AI system:[18]

Because AI systems are not perfect optimizers, and because there may be unintended consequences from any given specification, emergent behavior can diverge dramatically from ideal or design intentions.

AI alignment researchers aim to ensure that the behavior matches the ideal specification, using the design specification as a midpoint. A mismatch between the ideal specification and the design specification is known as outer misalignment, because the mismatch lies between (1) the user's "true desires", which sit outside the computer system and (2) the computer system's programmed objective function (inside the computer system). A certain type of mismatch between the design specification and the emergent behavior is known as inner misalignment; such a mismatch is internal to the AI, being a mismatch between (2) the AI's explicit objective function and (3) the AI's actual emergent goals.[19][20][21] Outer misalignment might arise because of mistakes in specifying the objective function (design specification).[22]For example, a reinforcement learning agent trained on the game of CoastRunners learned to move in circles while repeatedly crashing, which got it a higher score than finishing the race.[23] By contrast, inner misalignment arises when the agent pursues a goal that is aligned with the design specification on the training data but not elsewhere.[19][20][21]This type of misalignment is often compared to human evolution: evolution selected for genetic fitness (design specification) in our ancestral environment, but in the modern environment human goals (revealed specification) are not aligned with maximizing genetic fitness. For example, our taste for sugary food, which originally increased fitness, today leads to overeating and health problems. Inner misalignment is a particular concern for agents which are trained in large open-ended environments, where a wide range of unintended goals may emerge.[20]

An inner alignment failure occurs when the goals an AI pursues during deployment deviate from the goals it was trained to pursue in its original environment (its design specification). Paul Christiano argues for using interpretability to detect such deviations, using adversarial training to detect and penalize them, and using formal verification to rule them out.[24]These research areas are active focuses of work in the machine learning community, although that work is not normally aimed towards solving AGI alignment problems. A wide body of literature now exists on techniques for generating adversarial examples, and for creating models robust to them.[25]Meanwhile research on verification includes techniques for training neural networks whose outputs provably remain within identified constraints.[26]

One approach to achieving outer alignment is to ask humans to evaluate and score the AI's behavior.[27][28]However, humans are also fallible, and might score some undesirable solutions highlyfor instance, a virtual robot hand learns to 'pretend' to grasp an object to get positive feedback.[29]And thorough human supervision is expensive, meaning that this method could not realistically be used to evaluate all actions. Additionally, complex tasks (such as making economic policy decisions) might produce too much information for an individual human to evaluate. And long-term tasks such as predicting the climate cannot be evaluated without extensive human research.[30]

A key open problem in alignment research is how to create a design specification which avoids (outer) misalignment, given only limited access to a human supervisorknown as the problem of scalable oversight.[28]

OpenAI researchers have proposed training aligned AI by means of debate between AI systems, with the winner judged by humans.[31] Such debate is intended to bring the weakest points of an answer to a complex question or problem to human attention, as well as to train AI systems to be more beneficial to humans by rewarding AI for truthful and safe answers. This approach is motivated by the expected difficulty of determining whether an AGI-generated answer is both valid and safe by human inspection alone. Joel Lehman characterizes debate as one of "the long term safety agendas currently popular in ML", with the other two being reward modeling[17] and iterated amplification.[32][30]

Reward modeling refers to a system of reinforcement learning in which an agent receives rewards from a model trained to imitate human feedback.[17] In reward modeling, instead of receiving reward signals directly from humans or from a static reward function, an agent receives its reward signals through a human-trained model that can operate independently of humans. The reward model is concurrently trained by human feedback on the agent's behavior during the same period in which the agent is being trained by the reward model.

In 2017, researchers from OpenAI and DeepMind reported that a reinforcement learning algorithm using a feedback-predicting reward model was able to learn complex novel behaviors in a virtual environment.[27] In one experiment, a virtual robot was trained to perform a backflip in less than an hour of evaluation using 900 bits of human feedback.In 2020, researchers from OpenAI described using reward modeling to train language models to produce short summaries of Reddit posts and news articles, with high performance relative to other approaches.[33] However, they observed that beyond the predicted reward associated with the 99th percentile of reference summaries in the training dataset, optimizing for the reward model produced worse summaries rather than better.

A long-term goal of this line of research is to create a recursive reward modeling setup for training agents on tasks too complex or costly for humans to evaluate directly.[17] For example, if we wanted to train an agent to write a fantasy novel using reward modeling, we would need humans to read and holistically assess enough novels to train a reward model to match those assessments, which might be prohibitively expensive. But this would be easier if we had access to assistant agents which could extract a summary of the plotline, check spelling and grammar, summarize character development, assess the flow of the prose, and so on. Each of those assistants could in turn be trained via reward modeling.

The general term for a human working with AIs to perform tasks that the human could not by themselves is an amplification step, because it amplifies the capabilities of a human beyond what they would normally be capable of. Since recursive reward modeling involves a hierarchy of several of these steps, it is one example of a broader class of safety techniques known as iterated amplification.[30]In addition to techniques which make use of reinforcement learning, other proposed iterated amplification techniques rely on supervised learning, or imitation learning, to scale up human abilities.

Stuart Russell has advocated a new approach to the development of beneficial machines, in which:[15]:182

1. The machine's only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.

An early example of this approach is Russell and Ng's inverse reinforcement learning, in which AIs infer the preferences of human supervisors from those supervisors' behavior, by assuming that the supervisors act to maximize some reward function. More recently, Hadfield-Menell et al. have extended this paradigm to allow humans to modify their behavior in response to the AIs' presence, for example, by favoring pedagogically useful actions, which they call "assistance games", also known as cooperative inverse reinforcement learning.[15]:202 [34] Compared with debate and iterated amplification, assistance games rely more explicitly on specific assumptions about human rationality; it is unclear how to extend them to cases in which humans are systematically biased or otherwise suboptimal.

Work on scalable oversight largely occurs within formalisms such as POMDPs. Existing formalisms assume that the agent's algorithm is executed outside the environment (i.e. not physically embedded in it). Embedded agency[35][36]is another major strand of research, which attempts to solve problems arising from the mismatch between such theoretical frameworks and real agents we might build. For example, even if the scalable oversight problem is solved, an agent which is able to gain access to the computer it is running on may still have an incentive to tamper[37]with its reward function in order to get much more reward than its human supervisors give it. A list of examples of specification gaming from DeepMind researcher Viktoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing.[22]This class of problems has been formalised using causal incentive diagrams.[37] Everitt and Hutter's current reward function algorithm[38]addresses it by designing agents which evaluate future actions according to their current reward function. This approach is also intended to prevent problems from more general self-modification which AIs might carry out.[39][35]

Other work in this area focuses on developing new frameworks and algorithms for other properties we might want to capture in our design specification.[35] For example, we would like our agents to reason correctly under uncertainty in a wide range of circumstances. As one contribution to this, Leike et al. provide a general way for Bayesian agents to model each other's policies in a multi-agent environment, without ruling out any realistic possibilities.[40]And the Garrabrant induction algorithm extends probabilistic induction to be applicable to logical, rather than only empirical, facts.[41]

Capability control proposals aim to increase our ability to monitor and control the behavior of AI systems, in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as our agents become more intelligent and their ability to exploit flaws in our control systems increases. Therefore, Bostrom and others recommend capability control methods only as a supplement to alignment methods.[1]

One challenge is that neural networks are by default highly uninterpretable.[42] This makes it more difficult to detect deception or other undesired behavior. Advances in interpretable artificial intelligence could be useful to mitigate this difficulty.[43]

One potential way to prevent harmful outcomes is to give human supervisors the ability to easily shut down a misbehaving AI via an "off-switch". However, in order to achieve their assigned objective, such AIs will have an incentive to disable any off-switches, or to run copies of themselves on other computers. This problem has been formalised as an assistance game between a human and an AI, in which the AI can choose whether to disable its off-switch; and then, if the switch is still enabled, the human can choose whether to press it or not.[44]A standard approach to such assistance games is to ensure that the AI interprets human choices as important information about its intended goals.[15]:208

Alternatively, Laurent Orseau and Stuart Armstrong proved that a broad class of agents, called safely interruptible agents, can learn to become indifferent to whether their off-switch gets pressed.[3][45] This approach has the limitation that an AI which is completely indifferent to whether it is shut down or not is also unmotivated to care about whether the off-switch remains functional, and could incidentally and innocently disable it in the course of its operations (for example, for the purpose of removing and recycling an unnecessary component). More broadly, indifferent agents will act as if the off-switch can never be pressed, and might therefore fail to make contingency plans to arrange a graceful shutdown.[45][46]

An AI box is a proposed method of capability control in which an AI is run on an isolated computer system with heavily restricted input and output channelsfor example, text-only channels and no connection to the internet. While this reduces the AI's ability to carry out undesirable behavior, it also reduces its usefulness. However, boxing has fewer costs when applied to a question-answering system, which does not require interaction with the world in any case.

The likelihood of security flaws involving hardware or software vulnerabilities can be reduced by formally verifying the design of the AI box. Security breaches may also occur if the AI is able to manipulate the human supervisors into letting it out, via its understanding of their psychology.[47]

An oracle is a hypothetical AI designed to answer questions and prevented from gaining any goals or subgoals that involve modifying the world beyond its limited environment.[48][49] A successfully controlled oracle would have considerably less immediate benefit than a successfully controlled general-purpose superintelligence, though an oracle could still create trillions of dollars worth of value.[15]:163 In his book Human Compatible, AI researcher Stuart J. Russell states that an oracle would be his response to a scenario in which superintelligence is known to be only a decade away.[15]:162163 His reasoning is that an oracle, being simpler than a general-purpose superintelligence, would have a higher chance of being successfully controlled under such constraints.

Because of its limited impact on the world, it may be wise to build an oracle as a precursor to a superintelligent AI. The oracle could tell humans how to successfully build a strong AI, and perhaps provide answers to difficult moral and philosophical problems requisite to the success of the project. However, oracles may share many of the goal definition issues associated with general-purpose superintelligence. An oracle would have an incentive to escape its controlled environment so that it can acquire more computational resources and potentially control what questions it is asked.[15]:162 Oracles may not be truthful, possibly lying to promote hidden agendas. To mitigate this, Bostrom suggests building multiple oracles, all slightly different, and comparing their answers to reach a consensus.[50]

In contrast to endorsers of the thesis that rigorous control efforts are needed because superintelligence poses an existential risk, AI risk skeptics believe that superintelligence poses little or no risk of accidental misbehavior. Such skeptics often believe that controlling a superintelligent AI will be trivial. Some skeptics,[51] such as Gary Marcus,[52] propose adopting rules similar to the fictional Three Laws of Robotics which directly specify a desired outcome ("direct normativity"). By contrast, most endorsers of the existential risk thesis (as well as many skeptics) consider the Three Laws to be unhelpful, due to those three laws being ambiguous and self-contradictory. (Other "direct normativity" proposals include Kantian ethics, utilitarianism, or a mix of some small list of enumerated desiderata.) Most endorsers believe instead that human values (and their quantitative trade-offs) are too complex and poorly-understood to be directly programmed into a superintelligence; instead, a superintelligence would need to be programmed with a process for acquiring and fully understanding human values ("indirect normativity"), such as coherent extrapolated volition.[53]

In 2021, the UK published its 10-year National AI Strategy.[54] According to the Strategy, the British government takes seriously "the long term risk of non-aligned Artificial General Intelligence".[55] The strategy describes actions to assess long term AI risks, including catastrophic risks.[56]

More here:

AI control problem - Wikipedia

Related Posts