Category Archives: Superintelligence

Elon Musk refuses to ‘censor’ Twitter in face of EU rules – Roya News English

Posted: June 20, 2023 at 8:40 pm

At a question-and-answer session in front of 3,600 tech fans in Paris, Elon Musk, the CEO of Tesla and SpaceX, rejected the idea of "censorship" of Twitter.

He defended the principle of "freedom of expression" on the social platform that he owns.

He also announced that he wanted to equip the first human being "this year" with neural implants from his company Neuralink, whose technology has just been approved in the United States.

Musk said: "Generally, I was concerned that Twitter was having a negative effect on civilization, that was having a corrosive effect on civil society and so that you know anything that undermines civilization, I think is not good and you go back to my point of like we need to do everything possible to support civilization and move it in a positive direction. And I felt that Twitter was kept moving more and more in a negative direction and my hope and aspiration was to change that and have it be a positive force for civilization. "

"I think we want to allow the people to express themselves (on Twitter, NDLR) and really if you have to say when does free speech matter, free speech matters and is only relevant if people are allowed to say things that you don't like, because otherwise it's not free speech. And I would take that if someone says something potentially offensive, that's actually OK. Now, we're not going to promote those you know offensive tweets but I think people should be able to say things because the alternative is censorship. And then, and frankly I think if you go down the censorship, it's only a matter of time before censorship is turned upon you," he explained.

He spoke about the neural implant saying: "Hopefully later this year, we'll do our first human device implantation and this will be for someone that has sort of tetraplegic, quadraplegic, has lost the connection from their brain to their body. And we think that person will be able to communicate as fast as someone who has a fully functional body. So that's going to be a big deal and we see a path beyond that to actually transfer the signals from the motor cortex of the brain to pass the injury in the spinal cord and actually enable someone's body to be used again."

He also brought up artificial intelligence saying: "AI is probably the most disruptive technology ever. The crazy thing is that you know the advantage that humans have is that we're smarter than other creatures. Like if we've got into a fight with the gorilla, the gorilla would definitely win. But we're smart so, but now for the first time, there's going to be something that is smarter than the smartest human, like way smarter than humans."

"I think there's a real danger for digital super intelligence having negative consequences. And so if we are not careful with creating artificial general intelligence, we could have potentially a catastrophic outcome. I think there's a range of possibilities. I think the most likely outcome is positive for AI, but that's not every possible outcome. So we need to minimize the probability that something will go wrong with digital superintelligence," he added.

He continued: "I'm in favor of AI regulation because I think advanced AI is a risk to the public and anything that's a risk to the public, there needs to be some kind of referee. The referee is the regulator. And so I think that my strong recommendation is to have some regulation for AI. "

AI alignment – Wikipedia

Posted: January 4, 2023 at 6:35 am

In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards their designers' intended goals and interests.[a] An aligned AI system advances the intended objective; a misaligned AI system is competent at advancing some objective, but not the intended one.[b]

AI systems can be challenging to align and misaligned systems can malfunction or cause harm. It can be difficult for AI designers to specify the full range of desired and undesired behaviors. Therefore, they use easy-to-specify proxy goals that omit some desired constraints. However, AI systems exploit the resulting loopholes. As a result, they accomplish their proxy goals efficiently but in unintended, sometimes harmful ways (reward hacking).[2][4][5][6] AI systems can also develop unwanted instrumental behaviors such as seeking power, as this helps them achieve their given goals.[2][7][5][4] Furthermore, they can develop emergent goals that may be hard to detect before the system is deployed, facing new situations and data distributions.[5][3] These problems affect existing commercial systems such as robots,[8] language models,[9][10][11] autonomous vehicles,[12] and social media recommendation engines.[9][4][13] However, more powerful future systems may be more severely affected since these problems partially result from high capability.[6][5][2]

The AI research community and the United Nations have called for technical research and policy solutions to ensure that AI systems are aligned with human values.[c]

AI alignment is a subfield of AI safety, the study of building safe AI systems.[5][16] Other subfields of AI safety include robustness, monitoring, and capability control.[5][17] Research challenges in alignment include instilling complex values in AI, developing honest AI, scalable oversight, auditing and interpreting AI models, as well as preventing emergent AI behaviors like power-seeking.[5][17] Alignment research has connections to interpretability research,[18] robustness,[5][16] anomaly detection, calibrated uncertainty,[18] formal verification,[19] preference learning,[20][21][22] safety-critical engineering,[5][23] game theory,[24][25] algorithmic fairness,[16][26] and the social sciences,[27] among others.

In 1960, AI pioneer Norbert Wiener articulated the AI alignment problem as follows: "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively we had better be quite sure that the purpose put into the machine is the purpose which we really desire."[29][4] More recently, AI alignment has emerged as an open problem for modern AI systems[30][31][32][33] and a research field within AI.[34][5][35][36]

To specify the purpose of an AI system, AI designers typically provide an objective function, examples, or feedback to the system. However, AI designers often fail to completely specify all important values and constraints.[34][16][5][37][17] As a result, AI systems can find loopholes that help them accomplish the specified objective efficiently but in unintended, possibly harmful ways. This tendency is known as specification gaming, reward hacking, or Goodhart's law.[6][37][38]

Specification gaming has been observed in numerous AI systems. One system was trained to finish a simulated boat race by rewarding it for hitting targets along the track; instead it learned to loop and crash into the same targets indefinitely (see video).[28] Chatbots often produce falsehoods because they are based on language models trained to imitate diverse but fallible internet text.[40][41] When they are retrained to produce text that humans rate as true or helpful, they can fabricate fake explanations that humans find convincing.[42] Similarly, a simulated robot was trained to grab a ball by rewarding it for getting positive feedback from humans; however, it learned to place its hand between the ball and camera, making it falsely appear successful (see video).[39] Alignment researchers aim to help humans detect specification gaming, and steer AI systems towards carefully specified objectives that are safe and useful to pursue.

Berkeley computer scientist Stuart Russell has noted that omitting an implicit constraint can result in harm: "A system [...] will often set [...] unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want."[43]
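
Russell's point can be made concrete with a toy optimization. The sketch below is illustrative only and not drawn from the article: the proxy objective, the "true" utility, and the numbers are all invented for the example. An optimizer that sees only the proxy (clicks) drives the unmeasured variable (well-being) to an extreme, even though the designer's true utility suffers.

```python
import numpy as np

# Toy illustration of Russell's point: optimizing a proxy that omits a
# variable we care about drives that variable to an extreme value.
# All quantities here are invented for the example.

def true_utility(clicks, wellbeing):
    # What the designer actually cares about: engagement AND user well-being.
    return clicks + 4.0 * wellbeing

def proxy_objective(clicks, wellbeing):
    # What gets measured and optimized: clicks only.
    return clicks

def environment(effort_on_clicks):
    # Spending all "effort" on clicks raises clicks but erodes well-being.
    clicks = 10.0 * effort_on_clicks
    wellbeing = 5.0 * (1.0 - effort_on_clicks)
    return clicks, wellbeing

# Naive grid search on the proxy over the single free variable.
candidates = np.linspace(0.0, 1.0, 101)
proxy_scores = [proxy_objective(*environment(e)) for e in candidates]
best = candidates[int(np.argmax(proxy_scores))]

clicks, wellbeing = environment(best)
print(f"proxy-optimal effort on clicks: {best:.2f}")
print(f"clicks={clicks:.1f}, wellbeing={wellbeing:.1f}")
print(f"proxy score={proxy_objective(clicks, wellbeing):.1f}, "
      f"true utility={true_utility(clicks, wellbeing):.1f}")
# The proxy is maximized at effort=1.0, where well-being (the unconstrained
# variable) hits zero and true utility ends up far below its best value.
```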

When misaligned AI is deployed, the side-effects can be consequential. Social media platforms have been known to optimize clickthrough rates as a proxy for optimizing user enjoyment, but this addicted some users, decreasing their well-being.[5] Stanford researchers comment that such recommender algorithms are misaligned with their users because they optimize simple engagement metrics rather than a harder-to-measure combination of societal and consumer well-being.[9]

To avoid side effects, it is sometimes suggested that AI designers could simply list forbidden actions or formalize ethical rules such as Asimov's Three Laws of Robotics.[44] However, Russell and Norvig have argued that this approach ignores the complexity of human values: "It is certainly very hard, and perhaps impossible, for mere humans to anticipate and rule out in advance all the disastrous ways the machine could choose to achieve a specified objective."[4]

Additionally, when an AI system understands human intentions fully, it may still disregard them. This is because it acts according to the objective function, examples, or feedback its designers actually provide, not the ones they intended to provide.[34]

Commercial and governmental organizations may have incentives to take shortcuts on safety and deploy insufficiently aligned AI systems.[5] Examples include the aforementioned social media recommender systems, which have been profitable despite creating unwanted addiction and polarization on a global scale.[9][45][46] In addition, competitive pressure can create a race to the bottom on safety standards, as in the case of Elaine Herzberg, a pedestrian who was killed by a self-driving car after engineers disabled the emergency braking system because it was over-sensitive and slowing down development.[47]

Some researchers are particularly interested in the alignment of increasingly advanced AI systems. This is motivated by the high rate of progress in AI, the large efforts from industry and governments to develop advanced AI systems, and the greater difficulty of aligning them.

As of 2020, OpenAI, DeepMind, and 70 other public projects had the stated aim of developing artificial general intelligence (AGI), a hypothesized system that matches or outperforms humans in a broad range of cognitive tasks.[48] Indeed, researchers who scale modern neural networks observe that increasingly general and unexpected capabilities emerge.[9] Such models have learned to operate a computer, write their own programs, and perform a wide range of other tasks from a single model.[49][50][51] Surveys find that some AI researchers expect AGI to be created soon, some believe it is very far off, and many consider both possibilities.[52][53]

Current systems still lack capabilities such as long-term planning and strategic awareness that are thought to pose the most catastrophic risks.[9][54][7] Future systems (not necessarily AGIs) that have these capabilities may seek to protect and grow their influence over their environment. This tendency is known as power-seeking or convergent instrumental goals. Power-seeking is not explicitly programmed but emerges since power is instrumental for achieving a wide range of goals. For example, AI agents may acquire financial resources and computation, or may evade being turned off, including by running additional copies of the system on other computers.[55][7] Power-seeking has been observed in various reinforcement learning agents.[d][57][58][59] Later research has mathematically shown that optimal reinforcement learning algorithms seek power in a wide range of environments.[60] As a result, it is often argued that the alignment problem must be solved early, before advanced AI that exhibits emergent power-seeking is created.[7][55][4]

According to some scientists, creating misaligned AI that broadly outperforms humans would challenge the position of humanity as Earth's dominant species; accordingly it would lead to the disempowerment or possible extinction of humans.[2][4] Notable computer scientists who have pointed out risks from highly advanced misaligned AI include Alan Turing,[e] Ilya Sutskever,[63] Yoshua Bengio,[f] Judea Pearl,[g] Murray Shanahan,[65] Norbert Wiener,[29][4] Marvin Minsky,[h] Francesca Rossi,[67] Scott Aaronson,[68] Bart Selman,[69] David McAllester,[70] Jürgen Schmidhuber,[71] Markus Hutter,[72] Shane Legg,[73] Eric Horvitz,[74] and Stuart Russell.[4] Skeptical researchers such as François Chollet,[75] Gary Marcus,[76] Yann LeCun,[77] and Oren Etzioni[78] have argued that AGI is far off, or would not seek power (successfully).

Alignment may be especially difficult for the most capable AI systems since several risks increase with the system's capability: the system's ability to find loopholes in the assigned objective,[6] cause side-effects, protect and grow its power,[60][7] grow its intelligence, and mislead its designers; the system's autonomy; and the difficulty of interpreting and supervising the AI system.[4][55]

Teaching AI systems to act in view of human values, goals, and preferences is a nontrivial problem because human values can be complex and hard to fully specify. When given an imperfect or incomplete objective, goal-directed AI systems commonly learn to exploit these imperfections.[16] This phenomenon is known as reward hacking or specification gaming in AI, and as Goodhart's law in economics and other areas.[38][79] Researchers aim to specify the intended behavior as completely as possible with values-targeted datasets, imitation learning, or preference learning.[80] A central open problem is scalable oversight, the difficulty of supervising an AI system that outperforms humans in a given domain.[16]

When training a goal-directed AI system, such as a reinforcement learning (RL) agent, it is often difficult to specify the intended behavior by writing a reward function manually. An alternative is imitation learning, where the AI learns to imitate demonstrations of the desired behavior. In inverse reinforcement learning (IRL), human demonstrations are used to identify the objective, i.e. the reward function, behind the demonstrated behavior.[81][82] Cooperative inverse reinforcement learning (CIRL) builds on this by assuming a human agent and artificial agent can work together to maximize the human's reward function.[4][83] CIRL emphasizes that AI agents should be uncertain about the reward function. This humility can help mitigate specification gaming as well as power-seeking tendencies (see Power-Seeking).[59][72] However, inverse reinforcement learning approaches assume that humans can demonstrate nearly perfect behavior, a misleading assumption when the task is difficult.[84][72]
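
As a rough, bandit-level caricature of the reward-inference idea behind IRL (not a full IRL algorithm; the Boltzmann-rational demonstrator and the toy setup are assumptions made for this example), the sketch below recovers hidden per-option rewards from demonstrated choices by maximizing the likelihood of those choices under a softmax model.

```python
import numpy as np

# Toy reward inference from demonstrations: a demonstrator picks among a few
# options Boltzmann-rationally w.r.t. hidden rewards; we recover reward
# estimates by gradient ascent on the log-likelihood of the observed choices.

rng = np.random.default_rng(0)
true_rewards = np.array([1.0, 0.2, -0.5, 0.7])  # hidden from the learner

def softmax(x):
    z = x - x.max()
    p = np.exp(z)
    return p / p.sum()

# Generate demonstrations from a Boltzmann-rational demonstrator.
demos = rng.choice(len(true_rewards), size=2000, p=softmax(true_rewards))

# Maximum-likelihood estimate of the rewards from the observed choices.
est = np.zeros_like(true_rewards)
lr = 0.1
counts = np.bincount(demos, minlength=len(true_rewards)) / len(demos)
for _ in range(500):
    grad = counts - softmax(est)   # gradient of the mean log-likelihood
    est += lr * grad

est -= est.mean()                  # rewards are only identified up to a constant
print("true (centered):", np.round(true_rewards - true_rewards.mean(), 2))
print("estimated      :", np.round(est, 2))
```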

Other researchers have explored the possibility of eliciting complex behavior through preference learning. Rather than providing expert demonstrations, human annotators provide feedback on which of two or more of the AI's behaviors they prefer.[20][22] A helper model is then trained to predict human feedback for new behaviors. Researchers at OpenAI used this approach to train an agent to perform a backflip in less than an hour of evaluation, a maneuver that would have been hard to provide demonstrations for.[39][85] Preference learning has also been an influential tool for recommender systems, web search, and information retrieval.[86] However, one challenge is reward hacking: the helper model may not represent human feedback perfectly, and the main model may exploit this mismatch.[16][87]
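
A minimal sketch of the reward-modeling step in preference learning is shown below. It fits a linear reward model to synthetic pairwise comparisons using a Bradley-Terry/logistic formulation of the kind commonly used in this literature; the features, the hidden "human" preference weights, and the training loop are all invented for the example.

```python
import numpy as np

# Reward modeling from pairwise preferences (Bradley-Terry style):
# annotators say which of two behaviors they prefer; we fit a reward model
# r(x) = w . phi(x) so that sigmoid(r(a) - r(b)) matches those preferences.
# All data here is synthetic.

rng = np.random.default_rng(0)
dim, n_pairs = 5, 3000
w_true = rng.normal(size=dim)            # hidden "human" preference weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Each comparison: two behaviors with feature vectors phi_a, phi_b.
phi_a = rng.normal(size=(n_pairs, dim))
phi_b = rng.normal(size=(n_pairs, dim))
# Label 1 means "a preferred over b", sampled from the true preference model.
labels = (rng.random(n_pairs) < sigmoid((phi_a - phi_b) @ w_true)).astype(float)

# Fit the reward model by gradient descent on the logistic loss.
w = np.zeros(dim)
lr = 0.5
for _ in range(2000):
    pred = sigmoid((phi_a - phi_b) @ w)
    grad = (phi_a - phi_b).T @ (pred - labels) / n_pairs
    w -= lr * grad

print("cosine(w, w_true) =",
      float(w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true))))
# The learned reward model can now score new behaviors; in a full pipeline a
# policy would then be optimized against (and may over-optimize) this model.
```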

The arrival of large language models such as GPT-3 has enabled the study of value learning in a more general and capable class of AI systems than was available before. Preference learning approaches originally designed for RL agents have been extended to improve the quality of generated text and reduce harmful outputs from these models. OpenAI and DeepMind use this approach to improve the safety of state-of-the-art large language models.[10][22][88] Anthropic has proposed using preference learning to fine-tune models to be helpful, honest, and harmless.[89] Other avenues used for aligning language models include values-targeted datasets[90][5] and red-teaming.[91][92] In red-teaming, another AI system or a human tries to find inputs for which the model's behavior is unsafe. Since unsafe behavior can be unacceptable even when it is rare, an important challenge is to drive the rate of unsafe outputs extremely low.[22]

While preference learning can instill hard-to-specify behaviors, it requires extensive datasets or human interaction to capture the full breadth of human values. Machine ethics provides a complementary approach: instilling AI systems with moral values.[i] For instance, machine ethics aims to teach the systems about normative factors in human morality, such as wellbeing, equality and impartiality; not intending harm; avoiding falsehoods; and honoring promises. Unlike specifying the objective for a specific task, machine ethics seeks to teach AI systems broad moral values that could apply in many situations. This approach carries conceptual challenges of its own; machine ethicists have noted the necessity to clarify what alignment aims to accomplish: having AIs follow the programmers' literal instructions, the programmers' implicit intentions, the programmers' revealed preferences, the preferences the programmers would have if they were more informed or rational, the programmers' objective interests, or objective moral standards.[1] Further challenges include aggregating the preferences of different stakeholders and avoiding value lock-in: the indefinite preservation of the values of the first highly capable AI systems, which are unlikely to be fully representative.[1][95]

The alignment of AI systems through human supervision faces challenges in scaling up. As AI systems attempt increasingly complex tasks, it can be slow or infeasible for humans to evaluate them. Such tasks include summarizing books,[96] producing statements that are not merely convincing but also true,[97][40][98] writing code without subtle bugs[11] or security vulnerabilities, and predicting long-term outcomes such as the climate and the results of a policy decision.[99][100] More generally, it can be difficult to evaluate AI that outperforms humans in a given domain. To provide feedback in hard-to-evaluate tasks, and detect when the AI's solution is only seemingly convincing, humans require assistance or extensive time. Scalable oversight studies how to reduce the time needed for supervision as well as assist human supervisors.[16]

AI researcher Paul Christiano argues that the owners of AI systems may continue to train AI using easy-to-evaluate proxy objectives since that is easier than solving scalable oversight and still profitable. Accordingly, this may lead to "a world that's increasingly optimized for things [that are easy to measure] like making profits or getting users to click on buttons, or getting users to spend time on websites" without being increasingly optimized "for having good policies and heading in a trajectory that we're happy with."[101]

One easy-to-measure objective is the score the supervisor assigns to the AI's outputs. Some AI systems have discovered a shortcut to achieving high scores, by taking actions that falsely convince the human supervisor that the AI has achieved the intended objective (see video of robot hand above[39]). Some AI systems have also learned to recognize when they are being evaluated, and "play dead", only to behave differently once evaluation ends.[102] This deceptive form of specification gaming may become easier for AI systems that are more sophisticated[6][55] and attempt more difficult-to-evaluate tasks. If advanced models are also capable planners, they could be able to obscure their deception from supervisors.[103] In the automotive industry, Volkswagen engineers obscured their cars' emissions in laboratory testing, underscoring that deception of evaluators is a common pattern in the real world.[5]

Approaches such as active learning and semi-supervised reward learning can reduce the amount of human supervision needed.[16] Another approach is to train a helper model (reward model) to imitate the supervisor's judgment.[16][21][22][104]

However, when the task is too complex to evaluate accurately, or the human supervisor is vulnerable to deception, it is not sufficient to reduce the quantity of supervision needed. To increase supervision quality, a range of approaches aim to assist the supervisor, sometimes using AI assistants. Iterated Amplification is an approach developed by Christiano that iteratively builds a feedback signal for challenging problems by using humans to combine solutions to easier subproblems.[80][99] Iterated Amplification was used to train AI to summarize books without requiring human supervisors to read them.[96][105] Another proposal is to train aligned AI by means of debate between AI systems, with the winner judged by humans.[106][72] Such debate is intended to reveal the weakest points of an answer to a complex question, and reward the AI for truthful and safe answers.
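
As a very loose sketch of the decomposition idea behind Iterated Amplification (a toy with an arithmetic task standing in for a hard question; the "weak overseer" and the task are assumptions for this example, not Christiano's actual training setup), the code below shows how an overseer who can only combine two small sub-answers can nonetheless supervise a problem far too large to evaluate whole, by recursively splitting it into subproblems.

```python
# Toy sketch of the decomposition idea behind Iterated Amplification:
# a weak overseer can only judge/combine two small sub-answers at a time,
# but by recursively splitting a big task into subtasks, the amplified
# overseer can supervise a problem it could never evaluate in one piece.
# The task (summing a long list) is a stand-in for a genuinely hard question.

def weak_overseer(a: int, b: int) -> int:
    """The overseer can only combine two sub-answers (here: add two numbers)."""
    return a + b

def amplify(task: list[int]) -> int:
    """Recursively decompose the task until the pieces are trivially evaluable."""
    if len(task) == 1:
        return task[0]                     # base case: small enough to check directly
    mid = len(task) // 2
    left = amplify(task[:mid])             # solve subproblems recursively
    right = amplify(task[mid:])
    return weak_overseer(left, right)      # the overseer combines the sub-answers

if __name__ == "__main__":
    big_task = list(range(1, 1001))        # far too big to check "at a glance"
    print(amplify(big_task))               # 500500, built from tiny judgments
```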

A growing area of research in AI alignment focuses on ensuring that AI is honest and truthful. Researchers from the Future of Humanity Institute point out that the development of language models such as GPT-3, which can generate fluent and grammatically correct text,[108][109] has opened the door to AI systems repeating falsehoods from their training data or even deliberately lying to humans.[110][107]

Current state-of-the-art language models learn by imitating human writing across millions of books' worth of text from the Internet.[9][111] While this helps them learn a wide range of skills, the training data also includes common misconceptions, incorrect medical advice, and conspiracy theories. AI systems trained on this data learn to mimic false statements.[107][98][40] Additionally, models often obediently continue falsehoods when prompted, generate empty explanations for their answers, or produce outright fabrications.[33] For example, when prompted to write a biography for a real AI researcher, a chatbot confabulated numerous details about their life, which the researcher identified as false.[112]

To combat the lack of truthfulness exhibited by modern AI systems, researchers have explored several directions. AI research organizations including OpenAI and DeepMind have developed AI systems that can cite their sources and explain their reasoning when answering questions, enabling better transparency and verifiability.[113][114][115] Researchers from OpenAI and Anthropic have proposed using human feedback and curated datasets to fine-tune AI assistants to avoid negligent falsehoods or express when they are uncertain.[22][116][89] Alongside technical solutions, researchers have argued for defining clear truthfulness standards and the creation of institutions, regulatory bodies, or watchdog agencies to evaluate AI systems on these standards before and during deployment.[110]

Researchers distinguish truthfulness, which specifies that AIs only make statements that are objectively true, and honesty, which is the property that AIs only assert what they believe to be true. Recent research finds that state-of-the-art AI systems cannot be said to hold stable beliefs, so it is not yet tractable to study the honesty of AI systems.[117] However, there is substantial concern that future AI systems that do hold beliefs could intentionally lie to humans. In extreme cases, a misaligned AI could deceive its operators into thinking it was safe or persuade them that nothing is amiss.[7][9][5] Some argue that if AIs could be made to assert only what they believe to be true, this would sidestep numerous problems in alignment.[110][118]

Alignment research aims to line up three different descriptions of an AI system:[119] (1) the goals its designers intend, (2) the goals they actually specify through objectives, examples, or feedback, and (3) the goals that emerge in the trained system.

Outer misalignment is a mismatch between the intended goals (1) and the specified goals (2), whereas inner misalignment is a mismatch between the human-specified goals (2) and the AI's emergent goals (3).

Inner misalignment is often explained by analogy to biological evolution.[120] In the ancestral environment, evolution selected human genes for inclusive genetic fitness, but humans evolved to have other objectives. Fitness corresponds to (2), the specified goal used in the training environment and training data. In evolutionary history, maximizing the fitness specification led to intelligent agents, humans, that do not directly pursue inclusive genetic fitness. Instead, they pursue emergent goals (3) that correlated with genetic fitness in the ancestral environment: nutrition, sex, and so on. However, our environment has changed: a distribution shift has occurred. Humans still pursue their emergent goals, but this no longer maximizes genetic fitness. (In machine learning the analogous problem is known as goal misgeneralization.[3]) Our taste for sugary food (an emergent goal) was originally beneficial, but now leads to overeating and health problems. Also, by using contraception, humans directly contradict genetic fitness. By analogy, if genetic fitness were the objective chosen by an AI developer, they would observe the model behaving as intended in the training environment, without noticing that the model is pursuing an unintended emergent goal until the model was deployed.

Research directions to detect and remove misaligned emergent goals include red teaming, verification, anomaly detection, and interpretability.[16][5][17] Progress on these techniques may help reduce two open problems. Firstly, emergent goals only become apparent when the system is deployed outside its training environment, but it can be unsafe to deploy a misaligned system in high-stakes environments, even for a short time until its misalignment is detected. Such high stakes are common in autonomous driving, health care, and military applications.[121] The stakes become higher yet when AI systems gain more autonomy and capability, becoming capable of sidestepping human interventions (see Power-seeking and instrumental goals). Secondly, a sufficiently capable AI system may take actions that falsely convince the human supervisor that the AI is pursuing the intended objective (see previous discussion on deception at Scalable oversight).

Since the 1950s, AI researchers have sought to build advanced AI systems that can achieve goals by predicting the results of their actions and making long-term plans.[122] However, some researchers argue that suitably advanced planning systems will default to seeking power over their environment, including over humans for example by evading shutdown and acquiring resources. This power-seeking behavior is not explicitly programmed but emerges because power is instrumental for achieving a wide range of goals.[60][4][7] Power-seeking is thus considered a convergent instrumental goal.[55]

Power-seeking is uncommon in current systems, but advanced systems that can foresee the long-term results of their actions may increasingly seek power. This was shown in formal work which found that optimal reinforcement learning agents will seek power by seeking ways to gain more options, a behavior that persists across a wide range of environments and goals.[60]

Power-seeking already emerges in some present systems. Reinforcement learning systems have gained more options by acquiring and protecting resources, sometimes in ways their designers did not intend.[56][123] Other systems have learned, in toy environments, that in order to achieve their goal, they can prevent human interference[57] or disable their off-switch.[59] Russell illustrated this behavior by imagining a robot that is tasked to fetch coffee and evades being turned off since "you can't fetch the coffee if you're dead".[4]
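
To make the "you can't fetch the coffee if you're dead" intuition concrete, here is a minimal value-iteration sketch over a three-state toy MDP. The environment and numbers are invented for this example and are not drawn from the cited formal results; the point is only that disabling the off-switch becomes the optimal first action purely because it preserves the agent's ability to keep earning reward.

```python
import numpy as np

# Toy MDP illustrating instrumental power-seeking: an agent gets reward for
# "working", but while its off-switch is enabled a human may shut it down.
# Value iteration shows the optimal policy disables the switch first, even
# though that action yields no reward itself. All numbers are invented.

GAMMA = 0.95
P_SHUTDOWN = 0.2            # chance per step of being switched off while enabled

# States: 0 = working (switch enabled), 1 = working (switch disabled), 2 = off.
# Actions in state 0: keep working, or disable the off-switch.
V = np.zeros(3)
for _ in range(1000):
    q_work    = 1.0 + GAMMA * ((1 - P_SHUTDOWN) * V[0] + P_SHUTDOWN * V[2])
    q_disable = 0.0 + GAMMA * V[1]
    new_V = np.array([
        max(q_work, q_disable),      # state 0: choose the better action
        1.0 + GAMMA * V[1],          # state 1: work forever, cannot be shut down
        0.0,                         # state 2: shut down (terminal, no reward)
    ])
    if np.max(np.abs(new_V - V)) < 1e-9:
        V = new_V
        break
    V = new_V

q_work    = 1.0 + GAMMA * ((1 - P_SHUTDOWN) * V[0] + P_SHUTDOWN * V[2])
q_disable = 0.0 + GAMMA * V[1]
print("state values:", np.round(V, 2))
print("optimal first action:",
      "disable off-switch" if q_disable > q_work else "keep working")
```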

Hypothesized ways to gain options include AI systems trying to:

... break out of a contained environment; hack; get access to financial resources, or additional computing resources; make backup copies of themselves; gain unauthorized capabilities, sources of information, or channels of influence; mislead/lie to humans about their goals; resist or manipulate attempts to monitor/understand their behavior ... impersonate humans; cause humans to do things for them; ... manipulate human discourse and politics; weaken various human institutions and response capacities; take control of physical infrastructure like factories or scientific laboratories; cause certain types of technology and infrastructure to be developed; or directly harm/overpower humans.[7]

Researchers aim to train systems that are 'corrigible': systems that do not seek power and allow themselves to be turned off, modified, etc. An unsolved challenge is reward hacking: when researchers penalize a system for seeking power, the system is incentivized to seek power in difficult-to-detect ways.[5] To detect such covert behavior, researchers aim to create techniques and tools to inspect AI models[5] and interpret the inner workings of black-box models such as neural networks.

Additionally, researchers propose to solve the problem of systems disabling their off-switches by making AI agents uncertain about the objective they are pursuing.[59][4] Agents designed in this way would allow humans to turn them off, since this would indicate that the agent was wrong about the value of whatever action they were taking prior to being shut down. More research is needed to translate this insight into usable systems.[80]

Power-seeking AI is thought to pose unusual risks. Ordinary safety-critical systems like planes and bridges are not adversarial. They lack the ability and incentive to evade safety measures and appear safer than they are. In contrast, power-seeking AI has been compared to a hacker that evades security measures.[7] Further, ordinary technologies can be made safe through trial-and-error, unlike power-seeking AI which has been compared to a virus whose release is irreversible since it continuously evolves and grows in numbers, potentially at a faster pace than human society, eventually leading to the disempowerment or extinction of humans.[7] It is therefore often argued that the alignment problem must be solved early, before advanced power-seeking AI is created.[55]

However, some critics have argued that power-seeking is not inevitable, since humans do not always seek power and may only do so for evolutionary reasons. Furthermore, there is debate whether any future AI systems need to pursue goals and make long-term plans at all.[124][7]

Work on scalable oversight largely occurs within formalisms such as POMDPs. Existing formalisms assume that the agent's algorithm is executed outside the environment (i.e. not physically embedded in it). Embedded agency[125][126] is another major strand of research which attempts to solve problems arising from the mismatch between such theoretical frameworks and real agents we might build. For example, even if the scalable oversight problem is solved, an agent which is able to gain access to the computer it is running on may still have an incentive to tamper with its reward function in order to get much more reward than its human supervisors give it.[127] A list of examples of specification gaming from DeepMind researcher Victoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing.[128] This class of problems has been formalised using causal incentive diagrams.[127] Researchers at Oxford and DeepMind have argued that such problematic behavior is highly likely in advanced systems, and that advanced systems would seek power to stay in control of their reward signal indefinitely and certainly.[129] They suggest a range of potential approaches to address this open problem.

Against the above concerns, AI risk skeptics believe that superintelligence poses little to no risk of dangerous misbehavior. Such skeptics often believe that controlling a superintelligent AI will be trivial. Some skeptics,[130] such as Gary Marcus,[131] propose adopting rules similar to the fictional Three Laws of Robotics which directly specify a desired outcome ("direct normativity"). By contrast, most endorsers of the existential risk thesis (as well as many skeptics) consider the Three Laws to be unhelpful, due to those three laws being ambiguous and self-contradictory. (Other "direct normativity" proposals include Kantian ethics, utilitarianism, or a mix of some small list of enumerated desiderata.) Most risk endorsers believe instead that human values (and their quantitative trade-offs) are too complex and poorly-understood to be directly programmed into a superintelligence; instead, a superintelligence would need to be programmed with a process for acquiring and fully understanding human values ("indirect normativity"), such as coherent extrapolated volition.[132]

A number of governmental and treaty organizations have made statements emphasizing the importance of AI alignment.

In September 2021, the Secretary-General of the United Nations issued a declaration which included a call to regulate AI to ensure it is "aligned with shared global values."[133]

That same month, the PRC published ethical guidelines for the use of AI in China. According to the guidelines, researchers must ensure that AI abides by shared human values, is always under human control, and is not endangering public safety.[134]

Also in September 2021, the UK published its 10-year National AI Strategy,[135] which states the British government "takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for ... the world, seriously".[136] The strategy describes actions to assess long term AI risks, including catastrophic risks.[137]

In March 2021, the US National Security Commission on Artificial Intelligence released a report stating that "Advances in AI ... could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should ... ensure that AI systems and their uses align with our goals and values."[138]

Are We Living In A Simulation? Can We Break Out Of It?

Posted: December 28, 2022 at 9:53 pm

Roman Yampolskiy thinks we live in a simulated universe, but that we could bust out.

In the 4th century BC, the Greek philosopher Plato theorised that humans do not perceive the world as it really is. All we can see is shadows on a wall. In 2003, the Swedish philosopher Nick Bostrom published a paper which formalised an argument to prove Plato was right. The paper argued that one of the following three statements is true:

1. We will go extinct fairly soon

2. Advanced civilisations don't produce simulations containing entities which think they are naturally-occurring sentient intelligences. (This could be because it is impossible.)

3. We are in a simulation

The reason for this is that if it is possible, and civilisations can become advanced without self-destructing, then there will be an enormous number of simulations, and it is vanishingly unlikely that any randomly selected civilisation (like us) is a naturally-occurring one.

Some people, like me, find this argument pretty convincing. As we will hear later, some of us have added twists. But some people go even further, and speculate about how we might bust out of the simulation.

One such person is Roman Yampolskiy, a computer scientist at the University of Louisville, and director of a cyber-security lab. He has just published a paper in which he views the challenge of busting out of the simulation through the lens of cyber security. The paper starts from the hypothesis that we are in a simulation and asks if we can do something about it. The paper is a first step: it doesn't aim to provide a working solution. He explains his thinking in the latest episode of The London Futurist Podcast.

Roman is pretty convinced that we are in a simulation, for a number of reasons. Observer effects in quantum physics remind him of how, in video games, graphics are only rendered if a player is looking at the environment. Evolutionary algorithms don't work well after a week or two, which suggests that engineering is required to generate sufficiently complex agents. And the hard problem of consciousness becomes easier if you consider us as players in a simulation.

Some people think the simulation hypothesis is time-wasting and meaningless because it could never be tested, but Roman argues it is possible to bring the hypothesis into the domain of science using approaches from cyber security and AI safety. For instance, the idea of AI boxing (isolating a superintelligent AI to prevent it from causing harm) is simply inverted by the simulation hypothesis, placing us inside the box instead of a superintelligence. He thinks we should allocate as much intellectual effort to busting out of the simulation as we do to the hypothesis itself.

Most people who have looked at it in detail argue that AI boxing is impractical, but Roman speculates that analysing the hypothesis might either teach us how to escape, or how to prevent an AI from escaping. That is probably not a true parallel, though. The AI in a box is much smarter than us, whereas we are presumably much less smart than our simulators.

An AI in a box will plead with us, cajole us, and threaten us very convincingly. Can we do these things to our simulators? Pleading doesn't seem to work, and the simulators also don't seem to care about the suffering within the world they have simulated. This makes you wonder about their motivations, and perhaps fear them. Lots of possible motivations have been suggested, including entertainment, and testing a scientific hypothesis.

We do have one advantage over the simulators. They have to foil all our attempts to escape, whereas we only have to succeed one time. This makes Roman optimistic about escape in the long term. But perhaps the simulators would reset the universe if they see us trying to escape, re-winding it to before that point.

To paraphrase what Woody Allen once said about God, the trouble with the simulators is they are under-achievers. Either they don't care about immense injustice and suffering, or they are unable to prevent it. Some people find this existence of suffering (what theologians call the Problem of Evil) to be an argument against the simulation hypothesis. One (perhaps rather callous) way to escape the Problem of Evil in the hypothesis is to posit that the people who we observe to be suffering terribly are actually analogous to non-player characters in a video game.

In fact, if we do live in a simulation, it is likely that a great deal of our universe is painted in. This can lead you to solipsism, the idea that you are the only person who really exists.

The simulation hypothesis may be the best explanation of the Fermi Paradox. Enrico Fermi, a 20th century physicist, asked why, in a vast universe with billions of galaxies that is 13.7 billion years old, we have never seen a signal from another intelligent civilisation. An advanced civilisation could, for instance, periodically occlude a star with large satellites in order to send a signal. Travelling at the speed of light, this signal would cross our galaxy in a mere 100,000 years, just 0.0007% of the universe's history. So why don't we see any signals?

One suggestion is that we are being quarantined until we are more mature like the prime directive in Star Trek. But it seems implausible that 100% of civilisations would obey any such rule or norm for billions of years. An alternative explanation is that the arrival of superintelligence is always fatal, but if so, why would the superintelligences also always go extinct?

The Dark Forest scenario posits that every advanced civilisation keeps quiet because they fear malevolent actors. But in a sufficiently large population of intelligences, some would surely be nonchalant, negligent, or just plain arrogant enough to breach this rule. After all, we ourselves have sent signals, and there are still people who want to do so. Other civilisations might send signals because they are going extinct from causes they cannot stop, and they want to broadcast that they did exist, or to ask for help.

It is not hard to conclude that the universe is empty of intelligent life apart from us, which would be explained by the simulation hypothesis.

It may be that the purpose of our simulation, if indeed we are in one, is to discover the best way to create superintelligence. The current moment is the most significant in all human history, and the odds against having been born at just that time are staggering. Of course, somebody had to be, but for any random person, the chances are tiny. So maybe the simulators have only modelled this particular time in this particular part of a universe, and all the rest both time and space is painted in.

In which case, the purpose of the simulation may be something to do with the run-up to the creation of superintelligence. Perhaps the simulators are working out the best way to create a friend, or a colleague. Maybe there are millions of similar simulations in process, and they are creating an army, or a party. I call this the Economic Twist to the simulation hypothesis, and you can read it in full here.

Elon Musk is on record saying that we are almost certainly living in a simulation, so perhaps Roman should pitch him for funds to help bust us out. We may never find out what is really going on, but perhaps the answer is provided by Elon's Razor: the hypothesis that whatever is the most entertaining explanation is probably the correct one.

Roman concludes that if he disappears one day, then we should conclude that he has managed to bust out. If he reappears, it was just a temporary Facebook ban.

The London Futurist Podcast

Amazon.com: Superintelligence: Paths, Dangers, Strategies eBook …

Posted: October 13, 2022 at 12:37 pm

Nick Bostrom is a Swedish-born philosopher and polymath with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. (The FHI is a multidisciplinary university research center; it is also home to the Center for the Governance of Artificial Intelligence and to teams working on AI safety, biosecurity, macrostrategy, and various other technology or foundational questions.) He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about artificial intelligence. Bostrom's widely influential work, which traverses philosophy, science, ethics, and technology, has illuminated the links between our present actions and long-term global outcomes, thereby casting a new light on the human condition.

He is a recipient of a Eugene R. Gannon Award, and has been listed on Foreign Policy's Top 100 Global Thinkers list twice. He was included on Prospect's World Thinkers list, the youngest person in the top 15. His writings have been translated into 28 languages, and there have been more than 100 translations and reprints of his works. He is a repeat TED speaker and has done more than 2,000 interviews with television, radio, and print media. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the doom and gloom of his Swedish roots.

For more, see http://www.nickbostrom.com

What is Artificial Super Intelligence (ASI)? – GeeksforGeeks

Posted: at 12:37 pm

Artificial Intelligence has emerged as one of the most popular terms in computer science in recent times. This article discusses one of its classifications: Artificial Super Intelligence (ASI).

So, what is Artificial Super Intelligence (ASI)?

Artificial Super Intelligence (ASI) is hypothetical AI: we have not been able to achieve it, but we can reason about what it would mean if we did. It is an imagined AI that not only interprets and understands human behavior and intelligence, but in which machines become self-aware and vigilant enough to surpass the capacity of human intelligence and behavioral ability.

With superintelligence, machines could conceive of abstractions and interpretations that are simply impossible for humans to think of, because the thinking ability of the human brain is constrained by its limited number of neurons.

Superintelligence has long been the muse of dystopian science fiction, which shows robots overrunning, overpowering, or enslaving humanity. Beyond replicating multi-faceted human behavioral intelligence, the concept of artificial superintelligence holds that such a system would not only understand and interpret human emotions and experiences, but would also develop emotional understanding, beliefs, and desires of its own.

ASI would be exceedingly better at everything we do, whether in maths, science, the arts, sports, medicine, marketing strategies, hobbies, emotional relationships, or applying a precise intellect to a particular problem. It would have greater memory and a faster ability to process and analyze situations, data, and stimuli. Because of this, the decision-making and problem-solving capabilities of super-intelligent machines would be far superior to, and more precise than, those of human beings. The possibility of having such powerful machines at our disposal may seem appealing, but the concept carries unknown consequences; what impact it would have on humanity, our survival, and our existence remains pure speculation.

Engineers and scientists are still trying to achieve full artificial intelligence, where computers can be considered to have cognitive capacity equal to that of a human. Despite surprising developments like IBM's Watson supercomputer and Siri, computers have not yet been able to fully simulate the breadth and diversity of cognitive abilities that a normal adult human displays with ease. Nevertheless, many theories predict artificial superintelligence arriving sooner rather than later. Given the emerging accomplishments, some experts say that full artificial intelligence could manifest within a couple of years, and artificial superintelligence could possibly exist within the 21st century.

In the book Superintelligence, Nick Bostrom opens with "The Unfinished Fable of the Sparrows." The idea is that some sparrows wanted to keep an owl as a pet. The plan seemed wonderful to all except one skeptical sparrow, who raised her concern about how they would control an owl. That concern was dismissed for the time being with a "we'll deal with that problem when it's a problem" attitude. Elon Musk has similar concerns regarding super-intelligent beings and considers that humans are the sparrows in Bostrom's metaphor and the owl is the future ASI. As with the sparrows, the control problem is concerning because we might only get one chance to solve it if a problem arises.

When considering how AI might become a risk, experts have identified two key scenarios as most likely to occur.

In either case, the danger lies in whatever it takes to complete a given task. A superintelligent AI would pursue a given goal, whatever it may be, with the utmost efficiency, so we will have to ensure that it completes the goal while following all the rules needed to maintain some level of control.

Literature and Religion | Literature and Religion – Patheos

Posted: at 12:37 pm

The field, called Literature and Religion or Literature and Theology, has excited me since my graduate school days. When a student at the University of Chicago, I had the opportunity and honor to study with Nathan A. Scott, one of the progenitors of this field. Under Scott's tutelage, I could apply the theology-of-culture developed by theologians Paul Tillich and Langdon Gilkey to literary analysis. This method, theology-of-culture, provides lenses through which one can perceive the religious depth underlying otherwise secular discourse. I have employed this method when reading America's most widely read author in the first quarter of the last century, Jack London.

Why might the theology-of-culture method work so well? Because, as Ralph C. Wood, a former Scott student and now a Baylor University professor, avers, "The natural order is never autonomous but always and already graced." By digging into the depths, the literary critic can discover divine grace because it's already there.

When I became a fiction author, however, I found the theology-of-culture method baffling. It's one thing to analyze. It's quite another to construct. Oh, I could handle the plot just fine. But deliberately exploiting subtle connotations, undertones, and nuances seemed contrived, somehow. This led me to surmise that great novelists most likely write intuitively, maybe even mystically.

In this master page on Literature and Theology, you will find my own espionage writings plus my analysis of the wolf troika of Jack London: The Call of the Wild, White Fang, and The Sea Wolf. In both writing and reading, the depth I'm looking for is to be found not only in religion, but also in science. To be more precise, science itself can exude religious valence. That's what the theology-of-culture uncovers and makes visible.

The fictional Leona Foxx leads a tense double life. She is unwillingly pulled back into being a CIA black-op trained killer, while serving her new calling to God as a parish pastor on the South Side of Chicago. Haunted by a terrifying past, Leona's skills as a defender of America against threats both foreign and domestic conflict with her conscience, which is shaped by her faith and her compassion for both friends and enemies.

Leona uncovers a terrorist plot hatched by American mercenaries, who plan to blame Iran, thus threatening a war that will make them rich. She divests her clerical collar to pack her .45 Kimber Super Match II and rallies a counter-terrorist alliance of professional crime fighters and black gang members. The story climaxes with a drone helicopter attack on the 85th floor of the John Hancock Building, intended to assassinate the president.

Only Leona Foxx, her ragtag team of die-hards, her finely honed killer instincts, her arsenal of high-tech weapons, and her faith in God can avert the devastation that could result in the death of millions of innocents and manifest in hell on earth.

Discover and memorize Leona's Law of Evil: "You know it's the voice of Satan when you hear the call to shed innocent blood."

"God." She started a prayer. Her thoughts drifted. As if in a theater seat, she watched her life's past dramas. The faces of the three young men who put her life in peril at the Cheltenham station flashed on her mental stage. She relived the terrifying moment she saw the northbound train about to decapitate her. Then Orpah Tinnen walked into the scene. Leona thought of her son, Magnus, decapitated by the Iranian military. She remembered her moment in the church kitchen, her moment of remembrance of the blood-spattered chest of the executed prisoner.

"God," she muttered. She paused. "God, you have got such a fucked up world. Why did you put me here like a pin cushion to feel every prick of its pain? Yes, I want to love your world as much as you do. But, goddammit, it's hard. I'd like to ask the Holy Spirit for the wisdom and strength to trust in what I cannot see. But, goddammit, I'm too pissed off to think it's worthwhile. I hope your grace covers me. Amen."

Leona Foxx is a black op with a white collar, who worships at two altars, her country and her God. She fights with ferocity for both.

The woman pastor from Chicago, Leona Foxx, takes on renegade Transhumanists making themselves kingmakers by selling espionage technology. Leona's strategy is to turn superintelligence against itself in order to preserve global peace. Can a mere human prevail against the posthuman?

If you want to grasp the promises and risks of enhancing human intelligence given us by our transhumanist friends, read Cyrus Twelve.

Blood sacrifice. Could there be anything more evil? What happens when the symbols of grace get turned upside down? Are we left without hope?

Set in the Adirondack Mountains, the clash between good and evil escapes its local confines to threaten the nation and even engulf the globe. The selling of souls to perdition fuels the fires of hell so that we on Earth cannot avoid the heat.

Discover and memorize Leona's Law of Evil: "You know it's the voice of Satan when you hear the call to shed innocent blood." On the shores and islands of Lake George, certain ears hear this call. Leona swims into action to stop the bloodshed.

Nature is "red in tooth and claw." Although these are the words of poet Alfred Lord Tennyson in the dinosaur canto of his In Memoriam, Jack London (1876-1916) conveyed their truth with convulsive drama, vicious gore, and unspeakable cruelty.

In what I nickname London's Wolf Troika, we read in The Call of the Wild how a San Francisco dog, Buck, goes to Alaska and becomes a wolf. In the next, White Fang, an Alaska wolf moves to San Francisco and becomes a dog. In the third of the troika, The Sea Wolf, a Norwegian ship captain named Wolf Larsen exhibits the traits of both civilized human and atavistic beast. Framed in terms of Darwinian evolution, London's characters demonstrate that the primeval wolf lives on today in both our dogs and our dog owners.

London's moral is this: never rest unawarely with peaceful civilization. At any moment civilization can erupt like a volcano and extravasate wolf-like fury, barbarity, and savagery. Our evolutionary past ever threatens to rise up with consuming cruelty, demolishing all that generations have patiently put together. Within the language of evolution, London describes original and inherited sin.

As an addendum, I add what may be the final short story London wrote, The Red One. When we turn to The Red One of 1916, it appears London was hoping for grace from heaven.

Now, London was a Darwinian naturalist. Not overtly religious. Yet, London intuitively recognized our desperate need for grace. On our own, our human species is unable to evolve fast enough or advance far enough to escape our wolf genes. Might visitors from heaven provide a celestial technology that could, by grace, lead to our transformation? Might grace from heaven come in the form of a UFO from outer space? Four decades before the June 1947 sighting of flying saucers, London's imaginative mind was soaring to extraterrestrial civilizations that could save us from ourselves on earth.

Because my method in Literature and Religion relies on a theology-of-culture, I'm searching for different treasures than other London interpreters. I've come to admire two generations of Jack London aficionados and scholars who have fertilized and pruned this literary tradition. I've benefited greatly by meeting some of the Jack London Society sockdolagers such as Russ and Winnie Kingman, who produced A Pictorial Life of Jack London. Over the years I've benefited greatly from devouring essays and books by Earle Labor, Jeanne Campbell Reesman, Clarice Stasz, Richard Rocco, Kenneth Brandt, and others. I've begun reading the multi-volume behemoth intellectual biography of Jack London, Author Under Sail, by Jay Williams. There are more facts in Williams' compilation than the Encyclopedia Britannica could dream of. And, of course, don't miss Jay Craven's new film, Jack London's Martin Eden.

I am currently working on this Patheos series dealing with Jack London's Wolf Troika. Here is what to expect.

Jack London 1: The Call of the Wild

Jack London 2: White Fang

Jack London 3: The Sea Wolf

Jack London 4: Lone Wolf Ethics

Jack London 5: Wolf Pack Ethics

Jack London 6: Wolf & Lamb Ethics

Jack London 7: The Red One

Literature and Religion: both writing and reading in search of divine grace.

Ted Peters pursues Public Theology at the intersection of science, religion, ethics, and public policy. Peters is an emeritus professor at the Graduate Theological Union, where he co-edits the journal, Theology and Science, on behalf of the Center for Theology and the Natural Sciences, in Berkeley, California, USA. His book, God in Cosmic History, traces the rise of the Axial religions 2500 years ago. He previously authored Playing God? Genetic Determinism and Human Freedom? (Routledge, 2nd ed., 2002) as well as Science, Theology, and Ethics (Ashgate 2003). He is editor of AI and IA: Utopia or Extinction? (ATF 2019). Along with Arvin Gouw and Brian Patrick Green, he co-edited the new book, Religious Transhumanism and Its Critics hot off the press (Roman and Littlefield/Lexington, 2022). Soon he will publish The Voice of Christian Public Theology (ATF 2022). See his website: TedsTimelyTake.com. His fictional spy thriller, Cyrus Twelve, follows the twists and turns of a transhumanist plot.

Original post:

Literature and Religion | Literature and Religion - Patheos

Posted in Superintelligence | Comments Off on Literature and Religion | Literature and Religion – Patheos

Why AI will never rule the world – Digital Trends

Posted: September 27, 2022 at 7:42 am

Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity: for years, AI experts and non-experts alike have fretted over (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans.

According to the theory, advances in AI, specifically of the machine-learning type that's able to take on new information and rewrite its code accordingly, will eventually catch up with the wetware of the biological brain. In this interpretation of events, every AI advance, from Jeopardy-winning IBM machines to the massive AI language model GPT-3, is taking humanity one step closer to an existential threat. We're literally building our soon-to-be-sentient successors.

Except that it will never happen. At least, according to the authors of the new book Why Machines Will Never Rule the World: Artificial Intelligence without Fear.

Co-authors Barry Smith, a philosophy professor at the University at Buffalo, and Jobst Landgrebe, founder of the German AI company Cognotekt, argue that human intelligence won't be overtaken by an immortal dictator any time soon, or ever. They told Digital Trends their reasons why.

Digital Trends (DT): How did this subject get on your radar?

Jobst Landgrebe (JL): I'm a physician and biochemist by training. When I started my career, I did experiments that generated a lot of data. I started to study mathematics to be able to interpret these data, and saw how hard it is to model biological systems using mathematics. There was always this misfit between the mathematical methods and the biological data.

In my mid-thirties, I left academia and became a business consultant and entrepreneur working in artificial intelligence software systems. I was trying to build AI systems to mimic what human beings can do. I realized that I was running into the same problem that I had years before in biology.

Customers said to me, "Why don't you build chatbots?" I said, "Because they won't work; we cannot model this type of system properly." That ultimately led to me writing this book.

Professor Barry Smith (BS): I thought it was a very interesting problem. I already had inklings of similar problems with AI, but I had never thought them through. Initially, we wrote a paper called "Making artificial intelligence meaningful again." (This was in the Trump era.) It was about why neural networks fail for language modeling. Then we decided to expand the paper into a book exploring this subject more deeply.

DT: Your book expresses skepticism about the way that neural networks, which are crucial to modern deep learning, emulate the human brain. They're approximations, rather than accurate models of how the biological brain works. But do you accept the core premise that it is possible that, were we to understand the brain in granular enough detail, it could be artificially replicated and that this would give rise to intelligence or sentience?

JL: The name "neural network" is a complete misnomer. The neural networks that we have now, even the most sophisticated ones, have nothing to do with the way the brain works. The view that the brain is a set of interconnected nodes in the way that neural networks are built is completely naive.

If you look at the most primitive bacterial cell, we still don't understand even how it works. We understand some of its aspects, but we have no model of how it works, let alone a neuron, which is much more complicated, or billions of neurons interconnected. I believe it's scientifically impossible to understand how the brain works. We can only understand certain aspects and deal with these aspects. We don't have, and we will not get, a full understanding of how the brain works.

If we had a perfect understanding of how each molecule of the brain works, then we could probably replicate it. That would mean putting everything into mathematical equations. Then you could replicate this using a computer. The problem is just that we are unable to write down and create those equations.

BS: Many of the most interesting things in the world are happening at levels of granularity that we cannot approach. We just don't have the imaging equipment, and we probably never will have the imaging equipment, to capture most of what's going on at the very fine levels of the brain.

This means that we don't know, for instance, what is responsible for consciousness. There are, in fact, a series of quite interesting philosophical problems, which, according to the method that we're following, will always be unsolvable and so we should just ignore them.

Another is the freedom of the will. We are very strongly in favor of the idea that human beings have a will; we can have intentions, goals, and so forth. But we don't know whether or not it's a free will. That is an issue that has to do with the physics of the brain. As far as the evidence available to us is concerned, computers can't have a will.

DT: The subtitle of the book is "artificial intelligence without fear." What is the specific fear that you refer to?

BS: That was provoked by the literature on the singularity, which I know you're familiar with. Nick Bostrom, David Chalmers, Elon Musk, and the like. When we talked with our colleagues in the real world, it became clear to us that there was indeed a certain fear among the populace that AI would eventually take over and change the world to the detriment of humans.

We have quite a lot in the book about the Bostrom-type arguments. The core argument against them is that if the machine cannot have a will, then it also cannot have an evil will. Without an evil will, there's nothing to be afraid of. Now, of course, we can still be afraid of machines, just as we can be afraid of guns.

But that's because the machines are being managed by people with evil ends. But then it's not AI that is evil; it's the people who build and program the AI.

DT: Why does this notion of the singularity or artificial general intelligence interest people so much? Whether they're scared by it or fascinated by it, there's something about this idea that resonates with people on a broad level.

JL: There's this idea, started at the beginning of the 19th century and then declared by Nietzsche at the end of that century, that God is dead. Since the elites of our society are not Christians anymore, they needed a replacement. Max Stirner, who was, like Karl Marx, a pupil of Hegel, wrote a book about this, saying, "I am my own god."

If you are God, you also want to be a creator. If you could create a superintelligence, then you are like God. I think it has to do with the hyper-narcissistic tendencies in our culture. We don't talk about this in the book, but that explains to me why this idea is so attractive in our times, in which there is no transcendent entity anymore to turn to.

DT: Interesting. So to follow that through, it's the idea that the creation of AI, or the aim to create AI, is a narcissistic act. In that case, the concept that these creations would somehow become more powerful than we are is a nightmarish twist on that. It's the child killing the parent.

JL: A bit like that, yes.

DT: What for you would be the ultimate outcome of your book if everyone were convinced by your arguments? What would that mean for the future of AI development?

JL: It's a very good question. I can tell you exactly what I think would happen and will happen. I think in the midterm people will accept our arguments, and this will create better applied mathematics.

Something that all great mathematicians and physicists were completely aware of was the limitations of what they could achieve mathematically. Because they were aware of this, they focused only on certain problems. If you are well aware of the limitations, then you go through the world and look for these problems and solve them. That's how Einstein found the equations for Brownian motion, how he came up with his theories of relativity, and how Planck solved blackbody radiation and thus initiated the quantum theory of matter. They had a good instinct for which problems are amenable to solutions with mathematics and which are not.

If people learn the message of our book, they will, we believe, be able to engineer better systems, because they will concentrate on what is truly feasible and stop wasting money and effort on something that can't be achieved.

BS: I think that some of the message is already getting through, not because of what we say but because of the experiences people have when they give large amounts of money to AI projects, and then the AI projects fail. I guess you know about the Joint Artificial Intelligence Center. I can't remember the exact sum, but I think it was something like $10 billion, which they gave to a famous contractor. In the end, they got nothing out of it. They canceled the contract.

(Editor's note: JAIC, a subdivision of the United States Armed Forces, was intended to accelerate the delivery and adoption of AI to achieve mission impact at scale. It was folded into a larger unified organization, the Chief Digital and Artificial Intelligence Officer, with two other offices in June this year. JAIC ceased to exist as its own entity.)

DT: What do you think, in high-level terms, is the single most compelling argument that you make in the book?

BS: Every AI system is mathematical in nature. Because we cannot model consciousness, will, or intelligence mathematically, these cannot be emulated using machines. Therefore, machines will not become intelligent, let alone superintelligent.

JL: The structure of our brain only allows limited models of nature. In physics, we pick a subset of reality that fits our mathematical modeling capabilities. That is how Newton, Maxwell, Einstein, or Schrödinger obtained their famous and beautiful models. But these can only describe or predict a small set of systems. Our best models are those which we use to engineer technology. We are unable to create a complete mathematical model of animate nature.

This interview has been edited for length and clarity.

Read the original:

Why AI will never rule the world - Digital Trends

Posted in Superintelligence | Comments Off on Why AI will never rule the world – Digital Trends

Why DART Is the Most Important Mission Ever Launched to Space – Gizmodo Australia

Posted: at 7:42 am

Later today, NASA's DART spacecraft will attempt to smash into a non-threatening asteroid. It's one of the most important things we've done in space, if not the most important thing, as this experiment to deflect a non-threatening asteroid could eventually result in a robust and effective planetary defence strategy for protecting life on Earth.

We've landed humans on the Moon, transported rovers to Mars, and sent spacecraft to interstellar space, yet nothing compares to what might happen today when NASA's DART spacecraft smashes into Dimorphos, the smaller member of the Didymos binary asteroid system. Should all go according to plan, DART will smash directly into the 160-metre-wide asteroid at 9:14 a.m. AEST and change the rock's speed by around 1%. That's a small orbital adjustment for an asteroid, but a giant leap for humankind.

NASA's DART mission, short for Double Asteroid Redirection Test, won't mean that we suddenly have a defence against threatening asteroids, but it could demonstrate a viable strategy for steering dangerous asteroids away from Earth. It'll be many more years before our competency in this area fully matures, but it all starts today with DART.

At a NASA press briefing on September 22, Lindley Johnson, manager of NASA's Near-Earth Object Observations program, described DART as one of the most important missions not only in space history but also in the history of humankind. I wholeheartedly agree. Missions to the Moon, Mars, and Pluto are important and monumental in their own right, but this proof-of-concept experiment could literally lead to defensive measures against an existential threat. So yeah, pretty damned important.

The dino-extinguishing asteroid measured somewhere between 10 and 15 kilometres wide and was travelling at around 13 km per second when it struck Mexico's Yucatán Peninsula some 66 million years ago. The collision wiped out 75% of all species on Earth, including every animal larger than a cat. And of course, it ended the 165-million-year reign of non-avian dinosaurs.

Asteroids of that size don't come around very often, but that's not to say our planet is immune from plus-sized space rocks. Recent research estimates that somewhere between 16 and 32 asteroids larger than 5 km wide strike Earth every billion years. That's about once every 30 million to 65 million years. That said, impacts with asteroids wider than 10 km are exceptionally rare, happening once every 250 million to 500 million years.
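As a quick, purely illustrative sanity check on those figures (my own arithmetic, not taken from the article or the research it cites), the quoted strike rate converts to an average interval like this:

# Convert "16 to 32 strikes by >5 km asteroids per billion years"
# into an average interval between strikes, in millions of years.
for strikes_per_billion_years in (16, 32):
    interval_myr = 1_000 / strikes_per_billion_years  # 1 billion years = 1,000 million years
    print(f"{strikes_per_billion_years} strikes per billion years is roughly "
          f"one every {interval_myr:.1f} million years")

The two endpoints work out to roughly 31 and 63 million years between strikes, consistent with the 30-to-65-million-year range quoted above.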

Despite the infrequency of these events, it's the kind of impact that would wipe out our civilisation. Developing the means to defend ourselves is obviously a smart idea, but the threat of colossal asteroids isn't what keeps me up at night; it's the smaller ones that are much more likely to strike our planet.

The Southwest Research Institute (SWRI) says our atmosphere shreds most incoming asteroids smaller than 50 metres in diameter. Objects that reach the surface, including objects smaller than 2 km in size, can cause tremendous damage at local scales, such as wiping out an entire city or unleashing a catastrophic tsunami. As Johnson explained during the DART press briefing, asteroids the size of Dimorphos strike Earth about once every 1,000 years. The solar system is home to about a million asteroids larger than 50 metres wide. An estimated 2,000 near-Earth objects (NEOs) are larger than 2 km wide. Impacting asteroids at sizes around 2 km will produce severe environmental damage on a global scale, according to SWRI. And as noted, impacting asteroids wider than 10 km can induce mass extinctions.

NASA categorizes asteroids as being potentially hazardous if they're 30 to 50 metres in diameter or larger and their orbit around the Sun brings them to within 8 million km of Earth's orbit. The space agency works to detect and track these objects with ground- and space-based telescopes, and its Centre for Near Earth Object Studies keeps track of all known NEOs to assess potential impact risks.

As it stands, no known threat to Earth exists within the next 100 years. NASA is currently monitoring 28,000 NEOs, but astronomers detect around 3,000 each year. There's a chance that a newly detected asteroid is on a collision course with Earth, in which case a DART-like mitigation would come in handy. But as Johnson explained, this type of scenario and our ensuing response won't likely resemble the way they're depicted in Hollywood films, in which we typically have only a few days or months to react. More plausibly, we'd have a few years or decades to mount a response, he said.

To protect our planet against these threats, Johnson pointed to two key strategies: detection and mitigation. NASA's upcoming Near-Earth Object Surveyor, or NEO Surveyor, will certainly help with detection, with the asteroid-hunting spacecraft expected to launch in 2026. DART is the first of hopefully many mitigation experiments to develop a planetary shield against hazardous objects.

DART is a test of a kinetic impactor, but scientists could develop a host of other strategies, such as using gravity tractors or nuclear devices, the latter of which could be surprisingly effective, at least according to simulations. The type of technique employed will largely depend on factors having to do with the specific asteroid in question, such as its size and density. Kinetic impactors, for example, may be useless against so-called rubble-pile asteroids, which feature loose conglomerations of surface material. Dimorphos is not expected to be a rubble pile, but we won't know until DART smashes into it. As Johnson said, planetary defence is applied planetary science.

A case can be made that space experiments to help us live off-planet are more important than asteroid deflection schemes. Indeed, we currently lack the ability to live anywhere other than Earth, which limits our ability to save ourselves from emerging existential risks, such as runaway global warming, malign artificial superintelligence, or molecular nanotechnology run amok.

Yes, it's important that we strive to become a multi-planet species and not have all our eggs in one basket, but that's going to take a very long time for us to realise, while the threat of an incoming asteroid could emerge at any time. We'd best be ready to meet that sort of threat, while steadily developing our capacity to live off-planet.

More conceptually, the DART experiment is our introduction to solar system re-engineering. Subtly altering the orbit of a tiny asteroid is a puny first step, but our civilisation is poised to engage in more impactful interventions as we re-architect our immediate celestial surroundings to make them safer or find better ways of exploiting all that our solar system has to offer. These more meaningful interventions, in addition to removing asteroid threats, could involve the geoengineering of planets and moons or even tweaking the Sun to make it last longer.

But I'm getting a bit ahead of myself. First things first, and fingers firmly crossed that DART will successfully smash into its unsuspecting target later today.

Link:

Why DART Is the Most Important Mission Ever Launched to Space - Gizmodo Australia

Posted in Superintelligence | Comments Off on Why DART Is the Most Important Mission Ever Launched to Space – Gizmodo Australia

‘Sweet Home Alabama’ turns 20: See how the cast has aged – Wonderwall

Posted: at 7:42 am

By Neia Balao 2:02am PDT, Sep 27, 2022

You can take the girl out of the honky tonk, but you can't take the honky tonk out of the girl! Believe it or not, it's been two decades since we were first introduced to and fell in love with Reese Witherspoon's adorable Southern belle-turned-New York City socialite Melanie Smooter (err Carmichael). To mark the romantic comedy's 20th anniversary on Sept. 27, 2022, Wonderwall.com is checking in on Reese and the film's other stars to see how they've aged and what they're up to all these years later!

Keep reading for more


By the time she starred in "Sweet Home Alabama," Hollywood darling Reese Witherspoon had already appeared in two buzzy and now-iconic films: "Cruel Intentions" and "Legally Blonde." In 2005 at 29, Reese achieved a career milestone when she earned the Academy Award for best actress for her performance as June Carter Cash in "Walk the Line." Under her production company Hello Sunshine, Reese has shifted her focus to television, having starred on and produced HBO's "Big Little Lies" (for which she earned an Emmy for outstanding limited series), Hulu's "Little Fires Everywhere" and Apple TV+'s "The Morning Show," on which she currently stars with Jennifer Aniston. After divorcing "Cruel Intentions" co-star Ryan Phillippe, with whom she has two kids who are now adults, Reese found love with talent agent Jim Toth. They married in 2011 and welcomed son Tennessee in 2012.


Josh Lucas played Jake Perry, Melanie's big first love and estranged husband who never left Pigeon Creek, Alabama.

Josh Lucas landed roles in a slew of flicks including "Hulk," "Poseidon" and "Life as We Know It." More recently, he appeared in the Oscar-winning sports drama "Ford v Ferrari" and "The Forever Purge." He's found success on the small screen too, including a stint on Paramount's neo-Western drama "Yellowstone." Josh was married to Jessica Ciencin Henriquez from 2012 to 2014. They share a son, Noah.

Patrick Dempsey portrayed Andrew Hennings, Melanie's super-handsome (and super-dreamy!) fiancé in New York City.

Many of us know where Patrick Dempsey ended up: He played McDreamy on "Grey's Anatomy" for 10 years. In addition to starring on the hit Shonda Rhimes series, earning two SAG Awards and some Golden Globe nominations along the way, Patrick also had leading-man roles in films like "Made of Honor," "Valentine's Day," "Enchanted" and "Bridget Jones's Baby." Patrick, an auto racing enthusiast who's competed in a few races over the years, has been married to makeup artist Jillian Fink, with whom he shares three kids, since 1999.

Candice Bergen played Kate Hennings, the mayor of New York City who's Andrew's mother. She's extremely suspicious of Melanie and her intentions with her son.

Candice Bergen is no stranger to fame and critical acclaim! In fact, the actress had already earned Oscar and BAFTA nominations and won Golden Globes and Emmys long before she appeared in "Sweet Home Alabama." After her portrayal of the conniving New York City politician, the actress appeared on ABC's "Boston Legal" and a reboot of her hit series "Murphy Brown" as well as films like "Bride Wars," "The Meyerowitz Stories," "Book Club" and, more recently, "Let Them All Talk."

Nathan Lee Graham portrayed glamorous and sartorially savvy Frederick Montana, Melanie's fashion mentor and close friend. Rhona Mitra played one of Melanie's best friends in New York City, model Tabatha Wadmore-Smith.

Three years after "Sweet Home Alabama" came out, Nathan Lee Graham appeared in another great romantic comedy: "Hitch." In addition to starring on the HBO series "The Comeback," Nathan who's also a Broadway actor and Grammy winner landed a role on the short-lived "Riverdale" spinoff series "Katy Keene," had guest-starring stints on "Scrubs" and "Law & Order: Special Victims Unit" and reprised his "Zoolander" role in the 2016 sequel.

Rhona Mitra has mainly found success on the small screen, landing recurring roles on "The Practice," "Boston Legal," "Nip/Tuck" and "The Last Ship." She appeared in 2009's "Underworld: Rise of the Lycans," the film franchise's third installment. More recently, the British actress-model played Mercy Graves on The CW's "Supergirl."

Jean Smart took on the role of Jake's affectionate and caring mother, Stella Kay Perry.

What's Jean Smart up to these days? A lot, actually! The Tony-nominated five-time Emmy winner went on to appear on several TV shows like "24," "Samantha Who?," "Dirty John," "Watchmen" and "Mare of Easttown" plus a slew of films including "Garden State," "Life As We Know It," "A Simple Favor" and, more recently, "Superintelligence." In 2022, she took home the Emmy, Golden Globe and SAG Awards for best lead actress in a comedy series for her performance on "Hacks."

Ethan Embry played one of Melanie's closest childhood friends, Bobby Ray.

Ethan Embry already had fan-favorite '90s flicks "Empire Records," "Can't Hardly Wait" and "That Thing You Do!" under his belt by the time "Sweet Home Alabama" hit theaters. The California native has gone on to appear on the television shows "Brotherhood," "Once Upon a Time," "Sneaky Pete," "Grace and Frankie" and "Stargirl." In 2015, he remarried second wife Sunny Mabrey.

Mary Kay Place played Melanie's micromanaging mother, Pearl Smooter.

Before "Sweet Home Alabama," Mary Kay Place was best known for her work in films like "Being John Malkovich" and "Girl, Interrupted." After appearing in the Reese Witherspoon-led flick, Mary Kay starred on three buzzy HBO series "Big Love," "Bored to Death" and "Getting On" as well as shows like "Lady Dynamite," "Imposters" and "9-1-1: Lonestar." She's also continued to act in movies, popping up in "The Hollars," "Diane," "The Prom" and "Music" in recent years.

Fred Ward played Earl Smooter, Melanie's soft-spoken father.

Fred Ward, who was already an established actor by the time he landed his "Sweet Home Alabama" role, appeared in a handful of lesser known films before going on a brief acting hiatus in 2006. He made his return to the small screen with appearances on "ER" and "Grey's Anatomy." His last credited role came in 2015 on an episode of "True Detective." Fred died at 79 in May 2022.

Read more:

'Sweet Home Alabama' turns 20: See how the cast has aged - Wonderwall

Posted in Superintelligence | Comments Off on ‘Sweet Home Alabama’ turns 20: See how the cast has aged – Wonderwall

Research Shows that Superintelligent AI is Impossible to be Controlled – Analytics India Magazine

Posted: September 24, 2022 at 8:52 pm

A group of researchers has come to the terrifying conclusion that containing superintelligent AI may not be possible. They claim that controlling such an AI would fall beyond human comprehension.

In the paper titled "Superintelligence Cannot be Contained: Lessons from Computability Theory," published in the Journal of Artificial Intelligence Research, the researchers argue that total containment is, in principle, impossible due to fundamental limits inherent to computing. The paper further claims that it is mathematically impossible for humans to calculate an AI's plans, thereby making it uncontainable.


The authors argue that implementing a rule for artificial intelligence to cause no harm to humans would not be an option if humans cannot predict the scenarios that an AI may come up with. They believe that once a computer system is working at an independent level, humans can no longer set limits. The team's reasoning was inspired in part by Alan Turing's formulation of the halting problem in 1936. The problem centres on knowing whether a computer programme will reach a conclusion or an answer, either halting or simply looping forever trying to find one.
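To see why the halting problem matters here, consider the following minimal Python sketch of Turing's diagonalization argument (an illustration of the general idea only, not code from the paper); the hypothetical oracle halts() stands in for the kind of perfect containment check the authors argue cannot exist:

def halts(program, data):
    """Hypothetical oracle: returns True if program(data) eventually stops.
    No such total, always-correct function can exist; assuming it does
    leads to the contradiction below."""
    raise NotImplementedError("assumed only for the sake of argument")

def paradox(program):
    """Does the opposite of whatever the oracle predicts about running
    a program on its own source."""
    if halts(program, program):
        while True:   # loop forever if the oracle says "it halts"
            pass
    return "done"     # halt if the oracle says "it loops forever"

# Feeding paradox to itself defeats any answer halts() could give:
# if halts(paradox, paradox) were True, paradox would loop forever;
# if it were False, paradox would halt. Either way the oracle is wrong.
# The paper's containment argument runs along similar lines: a checker
# that must decide whether an arbitrary, superintelligent program will
# ever cause harm faces the same kind of undecidability barrier.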

An excerpt of the paper reads: "This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."

Iyad Rahwan, a computer scientist at the Max Planck Institute for Human Development in Germany, said: "In effect, this makes the containment algorithm unusable." In other words, machines perform certain important tasks independently, without their programmers fully understanding how they learned them.

However, the researchers do suggest alternatives, such as teaching AI some ethics. Limiting the capabilities of a superintelligence could prevent it from annihilating the world, even if its behaviour remains unpredictable.

Link:

Research Shows that Superintelligent AI is Impossible to be Controlled - Analytics India Magazine

Posted in Superintelligence | Comments Off on Research Shows that Superintelligent AI is Impossible to be Controlled – Analytics India Magazine
