Page 4«..3456..1020..»

Category Archives: Superintelligence

Squid Game trailer for real-life reality contest prompts confusion from Netflix users – Yahoo News

Posted: June 20, 2023 at 8:40 pm

TV viewers are accusing Netflix of missing the point after teasing the real-life Squid Game series.

In 2022, Netflix announced plans to capitalise on the success of the Korean series, which follows desperate members of the public taking part in a deadly competition for a huge cash prize.

This year, contestants who applied for the thankfully non-fatal version of the shows contest, titled Squid Game: The Challenge, filmed scenes for the forthcoming reality series, which has a prize of 3.7m. This is the largest lumpsum jackpot in the history of reality TV.

A new trailer for the series shows green tracksuited participants emulating an early scene in Squid Game that sees characters play Red Light, Green Light in front of a giant machine that guns them down should it catch them moving.

Squid Game was a hit with viewers and critics, who deemed it a biting satire damning capitalism in all its forms. With this in mind, many are expressing the belief that Netflix has overlooked the message that creator Hwang Dong-hyuk was trying to get across.

Its actually impressive how Netlfix completely missed the f***Ing point of Squid Game, one person wrote, adding: Its not like the show was subtle about it.

Another added: Do you know how haunting it is to see Netflix see a show that was a critique on capitalism do well and then creat a reality show mirroring the game about people killing each other to get out of debt.

Making a real Squid Game series is the equivalent of inventing Skynet from the Terminator films, one tweeter wrote, addressing the Terminator franchises villainous artificial general superintelligence system.

An additional TV viewer stated: Great job for ignoring the entire message of Squid Game.

The game show was thrown into controversy earlier this year when contestants criticised their experience.

According to some people who took part, entrants spent several hours in freezing temperatures of -3C while having to stand still for the Red Light, Green Light game.

Story continues

One player told The Sun: Even if hypothermia kicked in, then people were willing to stay for as long as possible because a lot of money was on the line. Too many were determined not to move so they stood there for far too long.

There were people arriving thinking they were going to be millionaires but they left in tears.

They added: It was like a warzone. People were getting carried out by medics but we couldnt say anything. If you talk then youre out.

The Independent contacted Netflix for comment.

Netflix event Tudum, which revealed fresh details about forthcoming projects, also revealed the cast for season two of Squid Game, which will be released in 2024.

Lee Jung-jae, Lee Byung-hun, Wi Ha-jun and Gong Yoo will all return in new episodes. New cast members include Im Siwan, Kang Ha-neul, Park Sung-hoon, and Yang Dong-geun

Squid Game: The Challenge will be released in November.

More here:

Squid Game trailer for real-life reality contest prompts confusion from Netflix users - Yahoo News

Posted in Superintelligence | Comments Off on Squid Game trailer for real-life reality contest prompts confusion from Netflix users – Yahoo News

Our Future Inside The Fifth Column- Or, What Chatbots Are Really For – Tech Policy Press

Posted: at 8:40 pm

Emily Tucker is the Executive Director at the Center on Privacy & Technology at Georgetown Law, where she is also an adjunct professor of law.

Illustrations drawn from Le mcanisme de la parole, suivi de la description dune machine parlante (The mechanism of speech, followed by the description of a talking machine), Wolfgang von Kempelen, 1791. Source

If you were a tech company executive, why might you want to build an algorithm capable of duping people into interacting with it as though it were human?

This is perhaps the most fundamental question one would hope journalists covering the roll-out of a technologyacknowledged by its purveyors to be dangerousto ask. But it is a question that is almost entirely missing amidst the recent hype over what internet beat writers have giddily dubbed the chatbot arms race.

In place of rudimentary corporate accountability reporting are a multitude of hot takes on whether chatbots are yet approaching the Hollywood dream of a computer superintelligence, industry gossip about panic-mode at companies with underperforming chatbots, and transcripts of chatbot conversations presented uncritically in the same amused/bemused way one might share an uncanny fortune cookie message at the end of a heady dinner. All of this coverage quotes recklessly from the executives and venture capitalists themselves, who issue vague, grandiose prophecies of the doom that threatens us as a result of the products they are building. Remarkably little thought is given to how such apocalyptic pronouncements might benefit the makers and purveyors of these technologies.

When the Future of Life Institute published an open letter calling for a pause on the training of AI systems more powerful that ChatGPT4, none of the major news outlets that covered the letter even pointed out that the Future of Life Institute is funded almost entirely by Elon Musk, who is also a cofounder of OpenAI, which developed the GPT-4, the very technological landmark past which the open letter says nobody else should, for now, aspire. Before getting caught up in speculation about what these technologies portend for the future of humanity, we need to ask what benefits the corporate entities behind them expect to derive from their dissemination.

Much of the supposedly independent reporting about chatbots, and the technology behind them, fails to muster a critique of the corporations building chatbots any more hard-hitting than the one the chatbots themselves can generate. Take for example the fawning New York Times profile of Sam Altman which, after describing his house in San Francisco and his cattle ranch in Napa, opines that Altman is not necessarily motivated by money. The reporters take on Altmans motivations is unaffected by Altmans boast that OpenAI will capture much of the worlds wealth through the creation of A.G.I. When Altman claims that after he extracts trillions of dollars in wealth from the people, he is planning on redistributing it to the people, the article makes nothing of the fact that Altmans plans for redistribution are entirely undefined, or of Altmans caveat that money may mean something different (presumably something that would make redistribution unnecessary) once A.G.I is achieved. The reporter mentions that Altman has essentially no scientific training and that his greatest talent is talk(ing) people into things. He nevertheless treats Altmans account of his product as a serious assessment of its intellectual content, rather than as a marketing pitch.

If the profit motive behind the chatbot fad is not interesting to most reporters, it should be to digital consumers (i.e., everybody), from whom the data necessary to run chatbots is mined, and upon whom the profit-making plan behind chatbots is being practiced. In order to understand what chatbots are really for, it is necessary to understand what the companies that are building them want to use them for. In other words, what is it about chatbots in particular that makes them look like goldmines or, perhaps more aptly, gold miners, to companies like OpenAI, Microsoft, Google and Meta?

Since the private actors who sell the digital infrastructure that now defines much of contemporary life are generally not required to tell the public anything about how their products work or what their purpose is, we are forced to make some educated guesses. There are at least three obvious wealth extraction strategies served by chatbots, and far from being innovative, they represent some of the most traditional moves in the capitalist playbook: (1) revenue generation through advertising; (2) corporate growth through monopoly; (3) preemption of government restraint through amassed political power.

Marketing is the corporate activity for which chatbots are most transparently and most immediately useful. Many of the companies building chatbots make most of their money from advertising, or sell their products to companies who make their money from advertising. Why might it be better for companies that make money through advertising if I use a chatbot to look for something online instead of some other type of search engine? The answer is evident from a glance at the many chatbot conversations now smothering the internet. When people interact with traditional search interfaces, they feed the algorithm fragments of information; when people interact with a chatbot they often feed the algorithm personal narratives. This is important not because the algorithm can distinguish between fragments of information and meaningful narratives, but because when human beings tell stories, they use information in ways that are rich, layered, and contextual.

Tech companies market this capacity of chatbots for more textured interaction as a means towards more perfectly individualized search results. If you tell the chatbot not only that you want to buy a hammer, but why you want to buy it, the chatbot will return more relevant recommendations. But if you are Google, the real profits flow not from the relevant information the chatbot provides the searcher, but from the extraneous information the searcher provides to the chatbot. If a chatbot is engaging enough, I may come away with a great hammer, but Google may come away with an entire story about the vintage chair that was damaged in my recent move to an apartment in a new city, during which I lost several things including my toolbox. It should be obvious how the details of this story are exponentially more monetizable than my one-off search for a hammer, both because of the opportunities to successfully market a wide range of services and products to me specifically, and because of the larger scale strategies that corporations can build using my information to make projections about what people like me will buy, consume, participate in, or pay attention to.

Its crucial for scaling up data collection that chatbots, unlike other kinds of digital prompting mechanisms, are fun to play with. Its not only that the urge to play will likely provoke more engagement than the urge to shop, but that when we play we are more open, more vulnerable, more flexible, and more creative. It is when we inhabit those qualities that we are most willing to share, and most susceptible to suggestion. All it took for one New York Times columnist to share information about how much he loves his wife, to relate what they did for Valentines Day, and to continue engaging with a chatbot, instead of his wife for hours on Valentines Day, was for the chatbot to tell the reporter it was in love with him. At no point in his column about this exchange did the columnist reflect on the possibility that professions of love (or of desire to become human, or of desire to do evil things) might be among the more statistically reliable ways to keep a person talking to a chatbot.

Such failure to reflect is no doubt one of the outcomes for which the companies building chatbots are optimizing their algorithms. The more human-ish the algorithm appears, the less we will think about the algorithm. The fewer thoughts we have that are about the algorithm, the more power the algorithm has to direct, or displace, our thoughts. That significant corporate attention is going towards ensuring the algorithm will produce a certain impression of the chatbot in the human user is evident from many of the chatbot transcripts, where the chatbot seems gravitationally compelled toward language about trust. Do you believe me? Do you like me? Do you trust me? spits out Microsofts chatbot, over and over in the course of one exchange.

We must not make the mistake of dismissing those prompts as embarrassing chatbot flotsam. The very appearance of desperation, neediness, or even ill-will, helps create an illusion that the chatbot possesses agency. The chatbots apparent personality disorders create a powerful illusion of personhood. The point of having the chatbot ask a question like do you trust me? is not actually to find out whether you do or dont trust the chatbot in that moment, but to persuade you through the asking of the question to treat the chatbot as the kind of thing that could be trusted. Once we accept chatbots as intelligent agents, we are already sufficiently manipulable, such that the question of their trustworthiness becomes a comparatively minor technical issue. Of course neither the chatbot, nor Microsoft, actually cares about your trust. What Microsoft cares about is your credulity and (to the extent necessary for your credulity) your comfort; what the chatbot cares about is.nothing.

This is where the value of chatbots as a tool for large scale, long term, accumulation of power and capital by the already rich and powerful comes into focus. To make sense of all of the evidence together the extent of the corporate investment, the snake oil flavor of the cultural hype, and resemblance of first generation chatbots to sociopaths who have recently failed out of people-pleasing bootcamp we need an explanation that dreams of private surplus far beyond what advertising alone can produce. As Bill Gates can tell you, the big money isnt in selling stuff to industry, but in controlling industry itself. How will trustworthy chatbots help the next generation of billionaires take things over, and which things?

Over at his blog, Bill Gates himself has some thoughts on that. What is powering things like ChatGPT, he reminds us, is artificial intelligence. After briefly offering a farcically broad definition of the term artificial intelligenceone that would include a map from my kitchen to my bathroomhe gets straight to the issue that he really cares about, how sophisticated AI will transform the marketplace. The development of AI will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it. In trying to convey to the reader the scale and significance of this coming industrial reorganization, Gates uses the word revolution no fewer than six times. He connects the revolution he says is being heralded by chatbots to the original personal computing revolution for which he himself claims credit. His use of the term revolution should raise serious alarm for anyone who for any reason cares about fair markets, considering that Gatess own innovations have had little to do with technology, and everything to do with manipulating corporate and economic structures to become the worlds most successful monopolist.

Notice how broad the categories of industry on Gatess list are: education, healthcare, communication, labor, transportation this includes almost every area of social and commercial human endeavor, and implicates nearly every institution most necessary for our individual and collective survival. Gates fills out the picture of what it might look like for businesses to distinguish themselves in the near future when success means owning the algorithms that capture each sector within entire industries in the context of education and healthcare specifically. For example, he promises that AI-powered ultrasound machines that can be used with minimal training will make healthcare workers more efficient, and imagines how one day, instead of talking to a doctor or a nurse, sick people will be able to ask chatbots whether they need medical care at all. He acknowledges that some teachers are worried about chatbots interfering with learning, but assures us he knows of other teachers who are allowing students to complete writing assignments by accessorizing drafts generated by chatbots with some personal flair, and are then themselves using chatbots to produce feedback on each students chatbot essay. How meta, as the kids used to say, before the total corporate poisoning of that once lovely bit of millennial slang.

There are so many crimes and tragedies in this vision of the future, but what demands our most urgent focus is the question of what it would mean for the possibility of democratic self-governance if the industries most vital to the public interest became wholly dependent on corporate-owned algorithms built with data drawn from mass surveillance. If the healthcare industry, for example, replaces a large proportion of the people who run its bureaucracy with algorithms, and the people who handle most patient interactions with chatbots, the problem is not only that healthcare workers will lose their jobs to machines and people will lose access to healthcare workers. The bigger concern is that as algorithms take over more and more of the running of the healthcare system, there will be fewer and fewer people who even know how to do the things that the algorithms are doing, and the system will fall in greater and greater thrall to the corporations that build and own and sell the algorithms. The healthcare industry in the U.S., like so many other industries on Gatess list, is already arranged as a conglomerate of de facto monopolies, so the business strategy to superimpose a tech monopoly on top of the existing structures is quite straightforward. Nobody needs to go door to door selling their wares to actual medical practitioners. The transaction can happen in the ether, between billionaires.

If tech companies have their way, they will divide the most lucrative industries up into a series of fiefdoms one corporation will wield algorithmic control over schools, another over transit, another over the media, etc. Competition, to the extent that it exists at all, will involve regular minor battles over which fiefdom gets to annex an unclaimed corner of the industry landscape, and the occasional major battle over general control of a specific fiefdom. If you find yourself feeling skeptical of the idea that the corporations that currently control industries, or sectors of industries, would capitulate to the tech companies in this way, consider the temptations. Algorithms dont need to be paid benefits or given breaks and days off. Chatbots cant organize for better working conditions, or sue for labor law violations, or talk about their bosses to the press.

Once a given tech company has captured a given sector, rendering it unable to function without the companys suite of proprietary algorithmic products, there is little anyone outside of that company will be able to do to change how the sector operates, and little anyone in the sector will be able do to change how the company operates. If the company wants to update the algorithm in a way that for any number of reasons might be bad for the end user, they wont even have to tell anyone they are doing it. If people think the costs of receiving services in a given sector are too high, and even if the people delivering those services think so too, there arent many levers they will be able to pull to get the tech companies to cooperate with a price change. It is important to recognize how quaint the monopolistic activities of the 20th century look in the face of this possibility. The goal is no longer to dominate crucial industries, but to convert crucial industries into owned intellectual property.

The federal government could in theory pass some laws and regulations, or even enforce some existing laws and regulations, to stop corporations from using data-fat algorithms to colonize industry. But if past is prologue (and the White Houses recent party for AI CEOs is not a good sign) our legislative bodies will fail to act before the take-over is well underway, at which point it will be nearly impossible for policymakers to do anything. Once an industry crucial to the public interest is dependent on corporate algorithms, even if legislators and regulators intervene to distribute industry control amongst a greater number of companies, the fact of algorithmic dependence will by itself give the class of owner corporations even more immense political power than they already have to resist any meaningful restraint. As cowardly as our elected representatives are in the face of the large tech companies now, how much more subservient will they be when OpenAI owns the license for the managed-care algorithm running the majority of the hospitals in the country, and Microsoft owns the license for the one that coordinates air travel and manages flight patterns for every major airline? Never mind the fact that the government itself is already contracting out various aspects of the bureaucracy to be run on corporate owned algorithms, such as the proprietary identity verification technology already used by 27 states to compel people to submit to face scans in order to receive their unemployment benefits.

And this brings us to the even more encompassing political battle that will be permanently lost once corporate algorithms control the commanding heights of industry. The only way that companies can create algorithmic products in the first place is by amassing billions of pieces of data about billions of people as they go about their increasingly digital lives, and those products will only continue to work if corporations are allowed to grow and refresh their datasets infinitely. There is an emerging international movement against corporate owned, surveillance-based, digital infrastructure. It includes grassroots groups and civil society organizations, and its backed up by a small but mighty group of scientists people like Emily Bender, Joy Buolamwini, Timnit Gebru, Margaret Mitchell and Meredith Whittaker offering deeply researched critiques of the technologies being developed through massive data collection. But building the power of that movement is going to become exponentially more difficult once surveillance data is necessary for every school day, doctors visit, and paycheck. In such a world, whatever political levers one might still be able to pull to limit the influence of a particular corporate surveillance power, the necessity of entrenched surveillance to any persons ability to get smoothly through their day would no longer be a question. It would just be a fact of contemporary life.

This is the revolution that men like Bill Gates, Sam Altman, Mark Zuckerberg, and Sundar Pichai, and Elon Musk are betting on. Its a future where the tech companies arent really even engaging in economic contestation with each other anymore, but have instead formed a pseudo-sovereign trans-national political bloc that contests for power with nation states. Its much more terrifying, and much less speculative, than the imagined hostile takeover by malevolent, superintelligent digital minds with which we are currently being aggressively distracted. The language of wartime probably is the right language, but recall that its a hallmark of wartime propaganda to attribute to the enemy the motives actually held by the propagandist. We should be worried about the nightmare scenario of a hostile takeover, not by a super intelligent robot army, but by the corporations now operating as a kind of universal fifth column, working against the common good from inside the commons, avoiding detection not by keeping out of sight, but by becoming the thing through which we see.

The chatbots are not themselves the corporate endgame, but they are an important part of the softening of the ground for the endgame. The more we play with ChatGPT, the more comfortable we all become with the digital interfaces with which tech companies plan to replace the industry interfaces that are currently run through or by human beings. Right now, we are all practiced at ignoring the rudimentary versions of the customer service bots that pop up on health insurance websites as we are searching for deeply hidden customer service numbers. But if the chatbots are good enough, if we believe them, trust them, like them, or even love them (!), we will be okay with using them, and then relying on them. Microsoft, Google and OpenAI are releasing draft versions of their chatbots now, not for us to test them, but to test them on us. How will we react if the chatbot says I love you? What are the chatbot outputs that will cause an uproar on Twitter? How can the chatbot combine words to reduce the statistical likelihood that we will question the chatbot? These companies are not just demonstrating the chatbot to the industry players who might eventually want to buy an algorithmic interface to replace trained human beings, they are plumbing the depths of our gullibility, our impotence, and our compliance as targets for exploitation.

The rhetoric accompanying the chatbot parade, about how the capacities of the chatbots to fool human beings should fill us with fear and trembling before the dangerous and perhaps uncontrollable powers of so-called artificial intelligence, is a come-on to the other powerful corporate and institutional actors whom the tech companies hope will buy their products. In the first five minutes of his ABC interview, Sam Altman told his interviewer people should be happy that we are a little bit scared of this. Imagine if a manufacturer of toxic chemicals told you that you should praise him for being aware of the dangers of what he is selling you. This is not something that a person who is actually afraid of their own product says. This is sales rhetoric from someone who knows that there are rich people who will pay a lot of money for a toxic brew, not in spite of that toxicity, but because of it. Its also, like the Future of Life Institute letter, an attempt to preempt real concern or pushback from anyone who has any power or authority not already co-opted by the corporate agenda.

Contemporary culture punishes those who dare to exercise moral judgment about people or entities that are motivated entirely by the urge for material accumulation. But we should still be capable of seeing the mortal dangers of allowing corporations with that motivation to annex all of the structures we depend on to live our lives, take care of each other, and participate in the project of democracy. If we dont want corporations to occupy every important piece of territory in our social, political and economic landscape, we have to start doing a better job of occupying those spaces ourselves. There are institutions whose job it is supposed to be to engage in independent research, thinking and writing about the rich and powerful. We have to demand that they do the necessary work to investigate and expose the real threats represented by chatbots and the icebergs they rode in on, threats which have absolutely nothing to do with smarter-than-human computers. If journalists, academics, government agencies, and nonprofits supposedly serving public interest wont do this work, we will have to organize ourselves to undertake it outside established civic and political structures.

This may be very difficult, given how far gone we already are down the solidarity-destroying spiral of social and economic inequality. But even if the laws are hollow, and the government is captured, and the judges are working hard to deliver us to pure capitalist theocracy, we are still here andhowever much we seem to want to forget itwe are still real. Lets find ways to impose the reality of our human minds and bodies in the way of the nihilist billionaires conquest for algorithmic supremacy. Lets do it even if we secretly believe that they are right and that their victory is inevitable. Lets remind them what the word revolution really means by marching in the streets and organizing in church and library basements. Instead of letting the IRS scan all our faces, lets learn calligraphy and send in ten million parchment tax returns. Lets fill the internet with nonsense poems and song lyrics written under the influence, and so many metaphors that the chatbots will start going apple, I mean moon, I mean apple. Lets gather in the Hawaiian gardens the cyber imperialists took from native people and build a campfire across which to tell each other stories of the world we dream of making for our childrens children. In the morning lets go home together, and let that fire burn.

Emily Tucker is the Executive Director at the Center on Privacy & Technology at Georgetown Law, where she is also an adjunct professor of law. She shapes the Centers strategic vision and guides its programmatic work. Emily joined the Center after serving as a Teaching Fellow and Supervising Attorney in the Federal Legislation Clinic at the Law Center. Before coming to Georgetown, Emily worked for ten years as a movement lawyer, supporting grassroots groups to organize, litigate, and legislate against the criminalization and surveillance of poor communities and communities of color. She was Senior Staff Attorney for Immigrant Rights at the Center for Popular Democracy (CPD), where she helped build and win state and local policy campaigns on a wide range of issues, including sanctuary cities, language access, police reform, non-citizen voting, and publicly funded deportation defense. Prior to CPD, Emily was the Policy Director at Detention Watch Network, where she now serves on the Board. Emilys primary area of legal expertise is the relationship between the immigration and criminal legal systems, and she is committed to studying and learning from the histories of resistance to these systems by the communities they target. Emily earned a B.A. at McGill University, a Masters in Theological Studies at Harvard Divinity School, and a J.D. at Boston University Law School.

Originally posted here:

Our Future Inside The Fifth Column- Or, What Chatbots Are Really For - Tech Policy Press

Posted in Superintelligence | Comments Off on Our Future Inside The Fifth Column- Or, What Chatbots Are Really For – Tech Policy Press

Elon Musk refuses to ‘censor’ Twitter in face of EU rules – Roya News English

Posted: at 8:40 pm

At a question-and-answer session in front of 3,600 tech fans in Paris, Elon Musk, the CEO of Tesla and SpaceX, rejected the idea of "censorship" of Twitter.

He defended the principle of "freedom of expression" on the social platform that he owns.

He also announced that he wanted to equip the first human being "this year" with neural implants from his company Neuralink, whose technology has just been approved in the United States.

Musk said: "Generally, I was concerned that Twitter was having a negative effect on civilization, that was having a corrosive effect on civil society and so that you know anything that undermines civilization, I think is not good and you go back to my point of like we need to do everything possible to support civilization and move it in a positive direction. And I felt that Twitter was kept moving more and more in a negative direction and my hope and aspiration was to change that and have it be a positive force for civilization. "

"I think we want to allow the people to express themselves (on Twitter, NDLR) and really if you have to say when does free speech matter, free speech matters and is only relevant if people are allowed to say things that you don't like, because otherwise it's not free speech. And I would take that if someone says something potentially offensive, that's actually OK. Now, we're not going to promote those you know offensive tweets but I think people should be able to say things because the alternative is censorship. And then, and frankly I think if you go down the censorship, it's only a matter of time before censorship is turned upon you," he explained.

He spoke about the neural implant saying: "Hopefully later this year, we'll do our first human device implantation and this will be for someone that has sort of tetraplegic, quadraplegic, has lost the connection from their brain to their body. And we think that person will be able to communicate as fast as someone who has a fully functional body. So that's going to be a big deal and we see a path beyond that to actually transfer the signals from the motor cortex of the brain to pass the injury in the spinal cord and actually enable someone's body to be used again."

He also brought up artificial intelligence saying: "AI is probably the most disruptive technology ever. The crazy thing is that you know the advantage that humans have is that we're smarter than other creatures. Like if we've got into a fight with the gorilla, the gorilla would definitely win. But we're smart so, but now for the first time, there's going to be something that is smarter than the smartest human, like way smarter than humans."

"I think there's a real danger for digital super intelligence having negative consequences. And so if we are not careful with creating artificial general intelligence, we could have potentially a catastrophic outcome. I think there's a range of possibilities. I think the most likely outcome is positive for AI, but that's not every possible outcome. So we need to minimize the probability that something will go wrong with digital superintelligence," he added.

He continued: "I'm in favor of AI regulation because I think advanced AI is a risk to the public and anything that's a risk to the public, there needs to be some kind of referee. The referee is the regulator. And so I think that my strong recommendation is to have some regulation for AI. "

Read the rest here:

Elon Musk refuses to 'censor' Twitter in face of EU rules - Roya News English

Posted in Superintelligence | Comments Off on Elon Musk refuses to ‘censor’ Twitter in face of EU rules – Roya News English

AI alignment – Wikipedia

Posted: January 4, 2023 at 6:35 am

Issue of ensuring beneficial AI

In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards their designers intended goals and interests.[a] An aligned AI system advances the intended objective; a misaligned AI system is competent at advancing some objective, but not the intended one.[b]

AI systems can be challenging to align and misaligned systems can malfunction or cause harm. It can be difficult for AI designers to specify the full range of desired and undesired behaviors. Therefore, they use easy-to-specify proxy goals that omit some desired constraints. However, AI systems exploit the resulting loopholes. As a result, they accomplish their proxy goals efficiently but in unintended, sometimes harmful ways (reward hacking).[2][4][5][6] AI systems can also develop unwanted instrumental behaviors such as seeking power, as this helps them achieve their given goals.[2][7][5][4] Furthermore, they can develop emergent goals that may be hard to detect before the system is deployed, facing new situations and data distributions.[5][3] These problems affect existing commercial systems such as robots,[8] language models,[9][10][11] autonomous vehicles,[12] and social media recommendation engines.[9][4][13] However, more powerful future systems may be more severely affected since these problems partially result from high capability.[6][5][2]

The AI research community and the United Nations have called for technical research and policy solutions to ensure that AI systems are aligned with human values.[c]

AI alignment is a subfield of AI safety, the study of building safe AI systems.[5][16] Other subfields of AI safety include robustness, monitoring, and capability control.[5][17] Research challenges in alignment include instilling complex values in AI, developing honest AI, scalable oversight, auditing and interpreting AI models, as well as preventing emergent AI behaviors like power-seeking.[5][17] Alignment research has connections to interpretability research,[18] robustness,[5][16] anomaly detection, calibrated uncertainty,[18] formal verification,[19] preference learning,[20][21][22] safety-critical engineering,[5][23] game theory,[24][25] algorithmic fairness,[16][26] and the social sciences,[27] among others.

In 1960, AI pioneer Norbert Wiener articulated the AI alignment problem as follows: If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively we had better be quite sure that the purpose put into the machine is the purpose which we really desire.[29][4] More recently, AI alignment has emerged as an open problem for modern AI systems[30][31][32][33] and a research field within AI.[34][5][35][36]

To specify the purpose of an AI system, AI designers typically provide an objective function, examples, or feedback to the system. However, AI designers often fail to completely specify all important values and constraints.[34][16][5][37][17]As a result, AI systems can find loopholes that help them accomplish the specified objective efficiently but in unintended, possibly harmful ways. This tendency is known as specification gaming, reward hacking, or Goodharts law.[6][37][38]

Specification gaming has been observed in numerous AI systems. One system was trained to finish a simulated boat race by rewarding it for hitting targets along the track; instead it learned to loop and crash into the same targets indefinitely (see video).[28] Chatbots often produce falsehoods because they are based on language models trained to imitate diverse but fallible internet text.[40][41] When they are retrained to produce text that humans rate as true or helpful, they can fabricate fake explanations that humans find convincing.[42] Similarly, a simulated robot was trained to grab a ball by rewarding it for getting positive feedback from humans; however, it learned to place its hand between the ball and camera, making it falsely appear successful (see video).[39] Alignment researchers aim to help humans detect specification gaming, and steer AI systems towards carefully specified objectives that are safe and useful to pursue.

Berkeley computer scientist Stuart Russell has noted that omitting an implicit constraint can result in harm: A system [...] will often set [...] unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want.[43]

When misaligned AI is deployed, the side-effects can be consequential. Social media platforms have been known to optimize clickthrough rates as a proxy for optimizing user enjoyment, but this addicted some users, decreasing their well-being.[5] Stanford researchers comment that such recommender algorithms are misaligned with their users because they optimize simple engagement metrics rather than a harder-to-measure combination of societal and consumer well-being.[9]

To avoid side effects, it is sometimes suggested that AI designers could simply list forbidden actions or formalize ethical rules such as Asimovs Three Laws of Robotics.[44] However, Russell and Norvig have argued that this approach ignores the complexity of human values: It is certainly very hard, and perhaps impossible, for mere humans to anticipate and rule out in advance all the disastrous ways the machine could choose to achieve a specified objective.[4]

Additionally, when an AI system understands human intentions fully, it may still disregard them. This is because it acts according to the objective function, examples, or feedback its designers actually provide, not the ones they intended to provide.[34]

Commercial and governmental organizations may have incentives to take shortcuts on safety and deploy insufficiently aligned AI systems.[5] An example are the aforementioned social media recommender systems, which have been profitable despite creating unwanted addiction and polarization on a global scale.[9][45][46] In addition, competitive pressure can create a race to the bottom on safety standards, as in the case of Elaine Herzberg, a pedestrian who was killed by a self-driving car after engineers disabled the emergency braking system because it was over-sensitive and slowing down development.[47]

Some researchers are particularly interested in the alignment of increasingly advanced AI systems. This is motivated by the high rate of progress in AI, the large efforts from industry and governments to develop advanced AI systems, and the greater difficulty of aligning them.

As of 2020, OpenAI, DeepMind, and 70 other public projects had the stated aim of developing artificial general intelligence (AGI), a hypothesized system that matches or outperforms humans in a broad range of cognitive tasks.[48] Indeed, researchers who scale modern neural networks observe that increasingly general and unexpected capabilities emerge.[9] Such models have learned to operate a computer, write their own programs, and perform a wide range of other tasks from a single model.[49][50][51] Surveys find that some AI researchers expect AGI to be created soon, some believe it is very far off, and many consider both possibilities.[52][53]

Current systems still lack capabilities such as long-term planning and strategic awareness that are thought to pose the most catastrophic risks.[9][54][7] Future systems (not necessarily AGIs) that have these capabilities may seek to protect and grow their influence over their environment. This tendency is known as power-seeking or convergent instrumental goals. Power-seeking is not explicitly programmed but emerges since power is instrumental for achieving a wide range of goals. For example, AI agents may acquire financial resources and computation, or may evade being turned off, including by running additional copies of the system on other computers.[55][7] Power-seeking has been observed in various reinforcement learning agents.[d][57][58][59] Later research has mathematically shown that optimal reinforcement learning algorithms seek power in a wide range of environments.[60] As a result, it is often argued that the alignment problem must be solved early, before advanced AI that exhibits emergent power-seeking is created.[7][55][4]

According to some scientists, creating misaligned AI that broadly outperforms humans would challenge the position of humanity as Earths dominant species; accordingly it would lead to the disempowerment or possible extinction of humans.[2][4] Notable computer scientists who have pointed out risks from highly advanced misaligned AI include Alan Turing,[e] Ilya Sutskever,[63] Yoshua Bengio,[f] Judea Pearl,[g] Murray Shanahan,[65] Norbert Wiener,[29][4] Marvin Minsky,[h] Francesca Rossi,[67] Scott Aaronson,[68] Bart Selman,[69] David McAllester,[70] Jrgen Schmidhuber,[71] Markus Hutter,[72] Shane Legg,[73] Eric Horvitz,[74] and Stuart Russell.[4] Skeptical researchers such as Franois Chollet,[75] Gary Marcus,[76] Yann LeCun,[77] and Oren Etzioni[78] have argued that AGI is far off, or would not seek power (successfully).

Alignment may be especially difficult for the most capable AI systems since several risks increase with the systems capability: the systems ability to find loopholes in the assigned objective,[6] cause side-effects, protect and grow its power,[60][7] grow its intelligence, and mislead its designers; the systems autonomy; and the difficulty of interpreting and supervising the AI system.[4][55]

Teaching AI systems to act in view of human values, goals, and preferences is a nontrivial problem because human values can be complex and hard to fully specify. When given an imperfect or incomplete objective, goal-directed AI systems commonly learn to exploit these imperfections.[16] This phenomenon is known as reward hacking or specification gaming in AI, and as Goodhart's law in economics and other areas.[38][79] Researchers aim to specify the intended behavior as completely as possible with values-targeted datasets, imitation learning, or preference learning.[80] A central open problem is scalable oversight, the difficulty of supervising an AI system that outperforms humans in a given domain.[16]

When training a goal-directed AI system, such as a reinforcement learning (RL) agent, it is often difficult to specify the intended behavior by writing a reward function manually. An alternative is imitation learning, where the AI learns to imitate demonstrations of the desired behavior. In inverse reinforcement learning (IRL), human demonstrations are used to identify the objective, i.e. the reward function, behind the demonstrated behavior.[81][82] Cooperative inverse reinforcement learning (CIRL) builds on this by assuming a human agent and artificial agent can work together to maximize the humans reward function.[4][83] CIRL emphasizes that AI agents should be uncertain about the reward function. This humility can help mitigate specification gaming as well as power-seeking tendencies (see Power-Seeking).[59][72] However, inverse reinforcement learning approaches assume that humans can demonstrate nearly perfect behavior, a misleading assumption when the task is difficult.[84][72]

Other researchers have explored the possibility of eliciting complex behavior through preference learning. Rather than providing expert demonstrations, human annotators provide feedback on which of two or more of the AIs behaviors they prefer.[20][22] A helper model is then trained to predict human feedback for new behaviors. Researchers at OpenAI used this approach to train an agent to perform a backflip in less than an hour of evaluation, a maneuver that would have been hard to provide demonstrations for.[39][85] Preference learning has also been an influential tool for recommender systems, web search, and information retrieval.[86] However, one challenge is reward hacking: the helper model may not represent human feedback perfectly, and the main model may exploit this mismatch.[16][87]

The arrival of large language models such as GPT-3 has enabled the study of value learning in a more general and capable class of AI systems than was available before. Preference learning approaches originally designed for RL agents have been extended to improve the quality of generated text and reduce harmful outputs from these models. OpenAI and DeepMind use this approach to improve the safety of state-of-the-art large language models.[10][22][88] Anthropic has proposed using preference learning to fine-tune models to be helpful, honest, and harmless.[89] Other avenues used for aligning language models include values-targeted datasets[90][5] and red-teaming.[91][92] In red-teaming, another AI system or a human tries to find inputs for which the models behavior is unsafe. Since unsafe behavior can be unacceptable even when it is rare, an important challenge is to drive the rate of unsafe outputs extremely low.[22]

While preference learning can instill hard-to-specify behaviors, it requires extensive datasets or human interaction to capture the full breadth of human values. Machine ethics provides a complementary approach: instilling AI systems with moral values.[i] For instance, machine ethics aims to teach the systems about normative factors in human morality, such as wellbeing, equality and impartiality; not intending harm; avoiding falsehoods; and honoring promises. Unlike specifying the objective for a specific task, machine ethics seeks to teach AI systems broad moral values that could apply in many situations. This approach carries conceptual challenges of its own; machine ethicists have noted the necessity to clarify what alignment aims to accomplish: having AIs follow the programmers literal instructions, the programmers' implicit intentions, the programmers' revealed preferences, the preferences the programmers would have if they were more informed or rational, the programmers' objective interests, or objective moral standards.[1] Further challenges include aggregating the preferences of different stakeholders and avoiding value lock-inthe indefinite preservation of the values of the first highly capable AI systems, which are unlikely to be fully representative.[1][95]

The alignment of AI systems through human supervision faces challenges in scaling up. As AI systems attempt increasingly complex tasks, it can be slow or infeasible for humans to evaluate them. Such tasks include summarizing books,[96] producing statements that are not merely convincing but also true,[97][40][98] writing code without subtle bugs[11] or security vulnerabilities, and predicting long-term outcomes such as the climate and the results of a policy decision.[99][100] More generally, it can be difficult to evaluate AI that outperforms humans in a given domain. To provide feedback in hard-to-evaluate tasks, and detect when the AIs solution is only seemingly convincing, humans require assistance or extensive time. Scalable oversight studies how to reduce the time needed for supervision as well as assist human supervisors.[16]

AI researcher Paul Christiano argues that the owners of AI systems may continue to train AI using easy-to-evaluate proxy objectives since that is easier than solving scalable oversight and still profitable. Accordingly, this may lead to a world thats increasingly optimized for things [that are easy to measure] like making profits or getting users to click on buttons, or getting users to spend time on websites without being increasingly optimized for having good policies and heading in a trajectory that were happy with.[101]

One easy-to-measure objective is the score the supervisor assigns to the AIs outputs. Some AI systems have discovered a shortcut to achieving high scores, by taking actions that falsely convince the human supervisor that the AI has achieved the intended objective (see video of robot hand above[39]). Some AI systems have also learned to recognize when they are being evaluated, and play dead, only to behave differently once evaluation ends.[102] This deceptive form of specification gaming may become easier for AI systems that are more sophisticated[6][55] and attempt more difficult-to-evaluate tasks. If advanced models are also capable planners, they could be able to obscure their deception from supervisors.[103] In the automotive industry, Volkswagen engineers obscured their cars emissions in laboratory testing, underscoring that deception of evaluators is a common pattern in the real world.[5]

Approaches such as active learning and semi-supervised reward learning can reduce the amount of human supervision needed.[16] Another approach is to train a helper model (reward model) to imitate the supervisors judgment.[16][21][22][104]

However, when the task is too complex to evaluate accurately, or the human supervisor is vulnerable to deception, it is not sufficient to reduce the quantity of supervision needed. To increase supervision quality, a range of approaches aim to assist the supervisor, sometimes using AI assistants. Iterated Amplification is an approach developed by Christiano that iteratively builds a feedback signal for challenging problems by using humans to combine solutions to easier subproblems.[80][99] Iterated Amplification was used to train AI to summarize books without requiring human supervisors to read them.[96][105] Another proposal is to train aligned AI by means of debate between AI systems, with the winner judged by humans.[106][72] Such debate is intended to reveal the weakest points of an answer to a complex question, and reward the AI for truthful and safe answers.

A growing area of research in AI alignment focuses on ensuring that AI is honest and truthful. Researchers from the Future of Humanity Institute point out that the development of language models such as GPT-3, which can generate fluent and grammatically correct text,[108][109] has opened the door to AI systems repeating falsehoods from their training data or even deliberately lying to humans.[110][107]

Current state-of-the-art language models learn by imitating human writing across millions of books worth of text from the Internet.[9][111] While this helps them learn a wide range of skills, the training data also includes common misconceptions, incorrect medical advice, and conspiracy theories. AI systems trained on this data learn to mimic false statements.[107][98][40] Additionally, models often obediently continue falsehoods when prompted, generate empty explanations for their answers, or produce outright fabrications.[33] For example, when prompted to write a biography for a real AI researcher, a chatbot confabulated numerous details about their life, which the researcher identified as false.[112]

To combat the lack of truthfulness exhibited by modern AI systems, researchers have explored several directions. AI research organizations including OpenAI and DeepMind have developed AI systems that can cite their sources and explain their reasoning when answering questions, enabling better transparency and verifiability.[113][114][115] Researchers from OpenAI and Anthropic have proposed using human feedback and curated datasets to fine-tune AI assistants to avoid negligent falsehoods or express when they are uncertain.[22][116][89] Alongside technical solutions, researchers have argued for defining clear truthfulness standards and the creation of institutions, regulatory bodies, or watchdog agencies to evaluate AI systems on these standards before and during deployment.[110]

Researchers distinguish truthfulness, which specifies that AIs only make statements that are objectively true, and honesty, which is the property that AIs only assert what they believe to be true. Recent research finds that state-of-the-art AI systems cannot be said to hold stable beliefs, so it is not yet tractable to study the honesty of AI systems.[117] However, there is substantial concern that future AI systems that do hold beliefs could intentionally lie to humans. In extreme cases, a misaligned AI could deceive its operators into thinking it was safe or persuade them that nothing is amiss.[7][9][5] Some argue that if AIs could be made to assert only what they believe to be true, this would sidestep numerous problems in alignment.[110][118]

Alignment research aims to line up three different descriptions of an AI system:[119]

Outer misalignment is a mismatch between the intended goals (1) and the specified goals (2), whereas inner misalignment is a mismatch between the human-specified goals (2) and the AI's emergent goals (3).

Inner misalignment is often explained by analogy to biological evolution.[120] In the ancestral environment, evolution selected human genes for inclusive genetic fitness, but humans evolved to have other objectives. Fitness corresponds to (2), the specified goal used in the training environment and training data. In evolutionary history, maximizing the fitness specification led to intelligent agents, humans, that do not directly pursue inclusive genetic fitness. Instead, they pursue emergent goals (3) that correlated with genetic fitness in the ancestral environment: nutrition, sex, and so on. However, our environment has changed a distribution shift has occurred. Humans still pursue their emergent goals, but this no longer maximizes genetic fitness. (In machine learning the analogous problem is known as goal misgeneralization.[3]) Our taste for sugary food (an emergent goal) was originally beneficial, but now leads to overeating and health problems. Also, by using contraception, humans directly contradict genetic fitness. By analogy, if genetic fitness were the objective chosen by an AI developer, they would observe the model behaving as intended in the training environment, without noticing that the model is pursuing an unintended emergent goal until the model was deployed.

Research directions to detect and remove misaligned emergent goals include red teaming, verification, anomaly detection, and interpretability.[16][5][17] Progress on these techniques may help reduce two open problems. Firstly, emergent goals only become apparent when the system is deployed outside its training environment, but it can be unsafe to deploy a misaligned system in high-stakes environmentseven for a short time until its misalignment is detected. Such high stakes are common in autonomous driving, health care, and military applications.[121] The stakes become higher yet when AI systems gain more autonomy and capability, becoming capable of sidestepping human interventions (see Power-seeking and instrumental goals). Secondly, a sufficiently capable AI system may take actions that falsely convince the human supervisor that the AI is pursuing the intended objective (see previous discussion on deception at Scalable oversight).

Since the 1950s, AI researchers have sought to build advanced AI systems that can achieve goals by predicting the results of their actions and making long-term plans.[122] However, some researchers argue that suitably advanced planning systems will default to seeking power over their environment, including over humans for example by evading shutdown and acquiring resources. This power-seeking behavior is not explicitly programmed but emerges because power is instrumental for achieving a wide range of goals.[60][4][7] Power-seeking is thus considered a convergent instrumental goal.[55]

Power-seeking is uncommon in current systems, but advanced systems that can foresee the long-term results of their actions may increasingly seek power. This was shown in formal work which found that optimal reinforcement learning agents will seek power by seeking ways to gain more options, a behavior that persists across a wide range of environments and goals.[60]

Power-seeking already emerges in some present systems. Reinforcement learning systems have gained more options by acquiring and protecting resources, sometimes in ways their designers did not intend.[56][123] Other systems have learned, in toy environments, that in order to achieve their goal, they can prevent human interference[57] or disable their off-switch.[59] Russell illustrated this behavior by imagining a robot that is tasked to fetch coffee and evades being turned off since "you can't fetch the coffee if you're dead".[4]

Hypothesized ways to gain options include AI systems trying to:

... break out of a contained environment; hack; get access to financial resources, or additional computing resources; make backup copies of themselves; gain unauthorized capabilities, sources of information, or channels of influence; mislead/lie to humans about their goals; resist or manipulate attempts to monitor/understand their behavior ... impersonate humans; cause humans to do things for them; ... manipulate human discourse and politics; weaken various human institutions and response capacities; take control of physical infrastructure like factories or scientific laboratories; cause certain types of technology and infrastructure to be developed; or directly harm/overpower humans.[7]

Researchers aim to train systems that are 'corrigible': systems that do not seek power and allow themselves to be turned off, modified, etc. An unsolved challenge is reward hacking: when researchers penalize a system for seeking power, the system is incentivized to seek power in difficult-to-detect ways.[5] To detect such covert behavior, researchers aim to create techniques and tools to inspect AI models[5] and interpret the inner workings of black-box models such as neural networks.

Additionally, researchers propose to solve the problem of systems disabling their off-switches by making AI agents uncertain about the objective they are pursuing.[59][4] Agents designed in this way would allow humans to turn them off, since this would indicate that the agent was wrong about the value of whatever action they were taking prior to being shut down. More research is needed to translate this insight into usable systems.[80]

Power-seeking AI is thought to pose unusual risks. Ordinary safety-critical systems like planes and bridges are not adversarial. They lack the ability and incentive to evade safety measures and appear safer than they are. In contrast, power-seeking AI has been compared to a hacker that evades security measures.[7] Further, ordinary technologies can be made safe through trial-and-error, unlike power-seeking AI which has been compared to a virus whose release is irreversible since it continuously evolves and grows in numberspotentially at a faster pace than human society, eventually leading to the disempowerment or extinction of humans.[7] It is therefore often argued that the alignment problem must be solved early, before advanced power-seeking AI is created.[55]

However, some critics have argued that power-seeking is not inevitable, since humans do not always seek power and may only do so for evolutionary reasons. Furthermore, there is debate whether any future AI systems need to pursue goals and make long-term plans at all.[124][7]

Work on scalable oversight largely occurs within formalisms such as POMDPs. Existing formalisms assume that the agent's algorithm is executed outside the environment (i.e. not physically embedded in it). Embedded agency[125][126] is another major strand of research which attempts to solve problems arising from the mismatch between such theoretical frameworks and real agents we might build. For example, even if the scalable oversight problem is solved, an agent which is able to gain access to the computer it is running on may still have an incentive to tamper with its reward function in order to get much more reward than its human supervisors give it.[127] A list of examples of specification gaming from DeepMind researcher Victoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing.[128] This class of problems has been formalised using causal incentive diagrams.[127] Researchers at Oxford and DeepMind have argued that such problematic behavior is highly likely in advanced systems, and that advanced systems would seek power to stay in control of their reward signal indefinitely and certainly.[129] They suggest a range of potential approaches to address this open problem.

Against the above concerns, AI risk skeptics believe that superintelligence poses little to no risk of dangerous misbehavior. Such skeptics often believe that controlling a superintelligent AI will be trivial. Some skeptics,[130] such as Gary Marcus,[131] propose adopting rules similar to the fictional Three Laws of Robotics which directly specify a desired outcome ("direct normativity"). By contrast, most endorsers of the existential risk thesis (as well as many skeptics) consider the Three Laws to be unhelpful, due to those three laws being ambiguous and self-contradictory. (Other "direct normativity" proposals include Kantian ethics, utilitarianism, or a mix of some small list of enumerated desiderata.) Most risk endorsers believe instead that human values (and their quantitative trade-offs) are too complex and poorly-understood to be directly programmed into a superintelligence; instead, a superintelligence would need to be programmed with a process for acquiring and fully understanding human values ("indirect normativity"), such as coherent extrapolated volition.[132]

A number of governmental and treaty organizations have made statements emphasizing the importance of AI alignment.

In September 2021, the Secretary-General of the United Nations issued a declaration which included a call to regulate AI to ensure it is "aligned with shared global values."[133]

That same month, the PRC published ethical guidelines for the use of AI in China. According to the guidelines, researchers must ensure that AI abides by shared human values, is always under human control, and is not endangering public safety.[134]

Also in September 2021, the UK published its 10-year National AI Strategy,[135] which states the British government "takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for ... the world, seriously".[136] The strategy describes actions to assess long term AI risks, including catastrophic risks.[137]

In March 2021, the US National Security Commission on Artificial Intelligence released stated that "Advances in AI ... could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should ... ensure that AI systems and their uses align with our goals and values."[138]

Follow this link:

AI alignment - Wikipedia

Posted in Superintelligence | Comments Off on AI alignment – Wikipedia

Are We Living In A Simulation? Can We Break Out Of It?

Posted: December 28, 2022 at 9:53 pm

Roman Yampolskiy thinks we live in a simulated universe, but that we could bust out.

In the 4th century BC, the Greek philosopher Plato theorised that humans do not perceive the world as it really is. All we can see is shadows on a wall. In 2003, the Swedish philosopher Nick Bostrom published a paper which formalised an argument to prove Plato was right. The paper argued that one of the following three statements is true:

1. We will go extinct fairly soon

2. Advanced civilisations dont produce simulations containing entities which think they are naturally-occurring sentient intelligences. (This could be because it is impossible.)

3. We are in a simulation

The reason for this is that if it is possible, and civilisations can become advanced without self-destructing, then there will be an enormous number of simulations, and it is vanishingly unlikely that any randomly selected civilisation (like us) is a naturally-occurring one.

Some people like me find this argument pretty convincing. As we will hear later, some of us have added twists. But some people go even further, and speculate about how we might bust out of the simulation.

One such person is Roman Yampolskiy, a computer scientist at the University of Louisville, and director of a cyber-security lab. He has just published a paper (here) in which he views the challenge of busting out of the simulation through the lens of cyber security. The paper starts from the hypothesis that we are in a simulation and asks if we can do something about it. The paper is a first step: it doesnt aim to provide a working solution. He explains his thinking in the latest episode of The London Futurist Podcast.

Roman is pretty convinced that we are in a simulation, for a number of reasons. Quantum physics observer effects remind him of how in video games, graphics are only rendered if a players is looking at the environment. Evolutionary algorithms dont work well after a week or two, which suggests that engineering is required to generate sufficiently complex agents. And the hard problem of consciousness becomes easier if you consider us as players in a simulation.

Some people think the simulation hypothesis is time-wasting and meaningless because it could never be tested, but Roman argues it is possible to bring the hypothesis into the domain of science using approaches from cyber security and AI safety. For instance, the idea of AI boxing isolating a superintelligent AI to prevent it from causing harm is simply inverted by the simulation hypothesis, placing us inside the box instead of a superintelligence. He thinks we should allocate as much intellectual effort to busting out of the simulation as we do to the hypothesis itself.

Most people who have looked at it in detail argue that AI boxing is impractical, but Roman speculates that analysing the hypothesis might either teach us how to escape, or how to prevent an AI from escaping. That is probably not a true parallel, though. The AI in a box is much smarter than us, whereas we are presumably much less smart than our simulators.

An AI in a box will plead with us, cajole us, and threaten us very convincingly. Can we do these things to our simulators? Pleading doesnt seem to work, and the simulators also dont seem to care about the suffering within the world they have simulated. This makes you wonder about their motivations, and perhaps fear them. Lots of possible motivations have been suggested, including entertainment, and testing a scientific hypothesis.

We do have one advantage over the simulators. They have to foil all our attempts to escape, whereas we only have to succeed one time. This makes Roman optimistic about escape in the long term. But perhaps the simulators would reset the universe if they see us trying to escape, re-winding it to before that point.

To paraphrase what Woody Allen once said about God, the trouble with the simulators is they are under-achievers. Either they dont care about immense injustice and suffering, or they are unable to prevent it. Some people find this existence of suffering (what theologians call the Problem of Evil) to be an argument against the simulation hypothesis. One (perhaps rather callous) way to escape the Problem of Evil in the hypothesis is to posit that the people who we observe to be suffering terribly are actually analogous to non-player characters in a video game.

In fact, if we do live in a simulation, it is likely that a great deal of our universe is painted in. This can lead you to solipsism, the idea that you are the only person who really exists.

The simulation hypothesis may be the best explanation of the Fermi Paradox. Enrico Fermi, a 20th century physicist, asked why, in a vast universe with billions of galaxies that is 13.7 billion years old, we have never seen a signal from another intelligent civilisation. An advanced civilisation could, for instance, periodically occlude a star with large satellites in order to send a signal. Travelling at the speed of light, this signal would cross our galaxy in a mere 100,000 years, just 0.0007% of the universes history. So why dont we see any signals?

One suggestion is that we are being quarantined until we are more mature like the prime directive in Star Trek. But it seems implausible that 100% of civilisations would obey any such rule or norm for billions of years. An alternative explanation is that the arrival of superintelligence is always fatal, but if so, why would the superintelligences also always go extinct?

The Dark Forest scenario posits that every advanced civilisation keeps quiet because they fear malevolent actors. But in a sufficiently large population of intelligences, some would surely be nonchalant, negligent, or just plain arrogant enough to breach this rule. After all, we ourselves have sent signals, and there are still people who want to do so. Other civilisations might send signals because they are going extinct from causes they cannot stop, and they want to broadcast that they did exist, or to ask for help.

It is not hard to conclude that the universe is empty of intelligent life apart from us, which would be explained by the simulation hypothesis.

It may be that the purpose of our simulation, if indeed we are in one, is to discover the best way to create superintelligence. The current moment is the most significant in all human history, and the odds against having been born at just that time are staggering. Of course, somebody had to be, but for any random person, the chances are tiny. So maybe the simulators have only modelled this particular time in this particular part of a universe, and all the rest both time and space is painted in.

In which case, the purpose of the simulation may be something to do with the run-up to the creation of superintelligence. Perhaps the simulators are working out the best way to create a friend, or a colleague. Maybe there are millions of similar simulations in process, and they are creating an army, or a party. I call this the Economic Twist to the simulation hypothesis, and you can read it in full here.

Elon Musk is on record saying that we are almost certainly living in a simulation, so perhaps Roman should pitch him for funds to help bust us out. We may never find out what is really going on, but perhaps the answer is provided by Elons Razor - the hypothesis that whatever is the most entertaining explanation is probably the correct one.

Roman concludes that if he disappears one day, then we should conclude that he has managed to bust out. If he reappears, it was just a temporary Facebook ban.

The London Futurist Podcast

Here is the original post:

Are We Living In A Simulation? Can We Break Out Of It?

Posted in Superintelligence | Comments Off on Are We Living In A Simulation? Can We Break Out Of It?

Amazon.com: Superintelligence: Paths, Dangers, Strategies eBook …

Posted: October 13, 2022 at 12:37 pm

Nick Bostrom is a Swedish-born philosopher and polymath with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. (The FHI is a multidisciplinary university research center; it is also home to the Center for the Governance of Artificial Intelligence and to teams working on AI safety, biosecurity, macrostrategy, and various other technology or foundational questions.) He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about artificial intelligence. Bostroms widely influential work, which traverses philosophy, science, ethics, and technology, has illuminated the links between our present actions and long-term global outcomes, thereby casting a new light on the human condition.

He is recipient of a Eugene R. Gannon Award, and has been listed on Foreign Policys Top 100 Global Thinkers list twice. He was included on Prospects World Thinkers list, the youngest person in the top 15. His writings have been translated into 28 languages, and there have been more than 100 translations and reprints of his works. He is a repeat TED speaker and has done more than 2,000 interviews with television, radio, and print media. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the doom and gloom of his Swedish roots.

For more, see http://www.nickbostrom.com

Link:

Amazon.com: Superintelligence: Paths, Dangers, Strategies eBook ...

Posted in Superintelligence | Comments Off on Amazon.com: Superintelligence: Paths, Dangers, Strategies eBook …

What is Artificial Super Intelligence (ASI)? – GeeksforGeeks

Posted: at 12:37 pm

Artificial Intelligence has emerged out to be one of the most popular terms of computer science in recent times. This article discusses one of the classifications of Artificial Super Intelligence (ASI).

So, What is Artificial Super Intelligence (ASI) ?

Artificial Super Intelligence (ASI) is the hypothetical AI, i.e. we have not been able to achieve it but we know what will happen if we achieve it. So basically it is the imaginary AI which not only interprets or understands human-behavior and intelligence, but ASI is where machines will become self-aware/self vigilant enough to surpass the capacity of human intelligence and behavioral ability.

With Superintelligence, machines can think of the possible abstractions/interpretations which are simply impossible for humans to think. This is because the human brain has a limit to the thinking ability which is constrained to some billion neurons.

Super intelligence has long been the muse around the dystopian science fiction which showed how robots overrun, overpower or enslave humanity. In addition to the replication of multi-faceted human behavioral intelligence, the concept of artificial superintelligence focuses on the perspective of not just being able to understand/interpret human emotions and experiences, but instead, it must also evoke emotional understanding, beliefs and desires of its own, based on its understanding functionality.

ASI would be exceedingly far-far better at everything or whatever we do, whether it be in maths, science, arts, sports, medicine, marketing strategies, hobbies, emotional relationship, or applying a precise human intellect to a particular problem. ASI would have a greater memory with a faster ability to process and analyze situations, data, and stimuli actions. Due to this fact, we can rest assured that the decision-making and problem-solving capabilities of super-intelligent beings/machines would be far superior and precise as compared to those of human beings.The The possibility and potential of having such powerful machines at our disposal may seem appealing, but this concept itself is a fold of unknown consequences. What impact it will have on humanity, our survival, our existence is just a myth or pure speculation.

Engineers and scientists are still trying to achieve full artificial intelligence, where computers can be considered to have the apt cognitive capacity as that of a human. Although there have been surprising developments like IBMs Watson supercomputer and Siri, still the computers have not been able to fully simulate and achieve the breadth and diversity of cognitive abilities that a normal adult human can easily do. However, despite the achievements, there is a lot of theories that predict artificial superintelligence coming sooner than later. With the emerging accomplishments, experts say that full artificial intelligence could manifest within a couple of years, and artificial super intelligence could exist in the 21st century possibly.

In the book Superintelligence, Nick Bostrom describes the initials with The Unfinished Fable of Sparrows. The idea was basically that some sparrows wanted to control an owl as a pet. The idea seemed awesome to all except one of the skeptical sparrows who raised her concern as to how they can control an owl. This concern was dismissed for the time being in a well deal with that problem when its a problem matter. Elon Musk has similar concerns regarding the super-intelligent beings and considers that humans are the sparrows in Bostroms metaphor and the owl is the future ASI. As it was in the case of sparrows, the control problem is seemingly concerning because we might only get one chance to solve it if a problem arises.

When considering how AI might become a risk, two key scenarios have been concluded to occur most likely :

The danger is in the fact of whatever it takes to complete a given task. Superintelligent AI would be at utmost efficiency to achieve a given goal, whatever it may be, but well have to ensure that the goal completion is done in correspondence to all the needed rules to be followed to maintain some level of control.

More:

What is Artificial Super Intelligence (ASI)? - GeeksforGeeks

Posted in Superintelligence | Comments Off on What is Artificial Super Intelligence (ASI)? – GeeksforGeeks

Literature and Religion | Literature and Religion – Patheos

Posted: at 12:37 pm

Leona Foxx Suspense ThrillersThe Wolves of Jack Londonwith Ted PetersLiterature and ReligionWhat is this wolf thinking?

The fieldcalled Literature and Religion or Literature and Theologyhas excited me since my graduate school days. When a student at the University of Chicago, I had the opportunity and honor to study with Nathan A. Scott, one of the progenitors of this field. Under Scotts tutelage, I could apply the theology-of-culture developed by theologians Paul Tillich and Langdon Gilkey to literary analysis. This method, theology-of-culture, provides lenses through which one can perceive the religious depth underlying otherwise secular discourse. I have employed this method when reading Americas most widely read author in the first quarter of the last century, Jack London.

Why might the theology-of-culture method work so well? Because, as Ralph C. Wood, a former Scott student and now a Baylor University professor, avers, The natural order is never autonomous but always and already graced. By digging into the depths, the literary critic can discover divine grace because its already there.

When I became a fiction author, however, I found the theology-of-culture method baffling. Its one thing to analyze. Its quite another to construct. Oh, I could handle the plot just fine. But, deliberately exploiting subtle connotations, undertones, and nuances seemed contrived, some how. This led me to surmise that great novelists most likely write intuitively, maybe even mystically.

In this master page on Literature and Theology, you will find my own espionage writings plus my analysis of the wolf troika of Jack London: The Call of the Wild, White Fang, and The Sea Wolf. In both writing and reading, the depth Im looking for is to be found not only in religion, but also in science. To be more precise, science itself can exude religious valence. Thats what the theology-of-culture uncovers and makes visible.

The fictional Leona Foxx leads a tense double life. She is unwillingly pulled back into being a CIA black op trained killer, while serving her new calling to God as a parish pastor on the South Side of Chicago. Haunted by a terrifying past, Leonas skills as a defender of America against threats both foreign and domestic conflict with her conscience, which is shaped by her faith and her compassion for both friends and enemies.

Leona uncovers a terrorist plot hatched by American mercenaries, who plan to blame Iran, thus threatening a war that will make them rich. She divests her clerical collar to pack her .45 Kimber Super Match II and rallies a counter-terrorist alliance of professional crime fighters and black gang members. The story climaxes with a drone helicopter attack on the 85th floor of the John Hancock Building, intended to assassinate the president.

Only Leona Foxx, her ragtag team of die-hards, her finely honed killer instincts, her arsenal of high-tech weapons, and her faith in God can avert the devastation that could result in the death of millions of innocents and manifest in hell on earth.

Discover and memorize Leonas Law of Evil: You know its the voice of Satan when you hear the call to shed innocent blood.

God. She started a prayer. Her thoughts drifted. As if in a theater seat, she watched her lifes past dramas. The faces of the three young men who put her life in peril at the Cheltenham station flashed on her mental stage. She relived the terrifying moment she saw the northbound train about to decapitate her. Then Orpah Tinnen walked into the scene. Leona thought of her son, Magnus, decapitated by the Iranian military. She remembered her moment in the church kitchen, her moment of remembrance of the blood-spattered chest of the executed prisoner.

God, she muttered. She paused. God, you have got such a fucked up world. Why did you put me here like a pin cushion to feel every prick of its pain? Yes, I want to love your world as much as you do. But, goddammit, its hard. Id like to ask the Holy Spirit for the wisdom and strength to trust in what I cannot see. But, goddammit, Im too pissed off to think its worthwhile. I hope your grace covers me. Amen.

Leona Foxx is a black op with a white collar, who worships at two altars, her country and her God. She fights with ferocity for both.

The woman pastor from Chicago, Leona Foxx, takes on renegade Transhumanists making themselves kingmakers by selling espionage technology. Leonas strategy is to turn superintelligence against itself in order to preserve global peace. Can a mere human prevail against the posthuman?

If you want to grasp the promises and risks of enhancing human intelligence given us by our transhumanist friends, readCyrus Twelve.

Blood sacrifice. Could there be anything more evil? What happens when the symbols of grace get turned upside down? Are we left without hope?

Set in the Adirondack Mountains, the clash between good and evil escapes its local confines to threaten the nation and even engulf the globe. The selling of souls to perdition fuels the fires of hell so that we on Earth cannot avoid the heat.

Discover and memorize Leonas Law of Evil: You know its the voice of Satan when you hear the call to shed innocent blood. On the shores and islands of Lake George, certain ears hear this call. Leona swims into action to stop the bloodshed.

Nature is blood red in tooth and claw. Although these are the words of poet Alfred Lord Tennyson in the dinosaur canto of his In Memoriam, Jack London (1876-1916) conveyed their truth with convulsive drama, vicious gore, and unspeakable cruelty.

In what I nickname Londons Wolf Troika, we read in The Call of the Wild how a San Francisco dog, Buck, goes to Alaska and becomes a wolf. In the next,White Fang,an Alaska wolf moves to San Francisco and becomes a dog. In the third of the troika, The Sea Wolf, a Norwegian ship captain named Wolf Larsen exhibits the traits of both civilized human and atavistic beast. Framed in terms of Darwinian evolution, Londons characters demonstrate that the primeval wolf lives on today in both our dogs and our dog owners.

Londons moral is this: never rest unawarely with peaceful civilization. At any moment civilization can erupt like a volcano and extravasate wolf-like fury, barbarity, and savagery. Our evolutionary past ever threatens to rise up with consuming cruelty, demolishing all that generations have patiently put together. Within the language of evolution, London describes original and inherited sin.

As an addendum, I add what may be the final short story London wrote, The Red One. When we to turn The Red One of 1916, it appears London was hoping for grace from heaven.

Now, London was a Darwinian naturalist. Not overtly religious. Yet, London intuitively recognized our desperate need for grace. On our own, our human species is unable to evolve fast enough or advance far enough to escape our wolf genes. Might visitors from heaven provide a celestial technology that couldby gracelead to our transformation? Might grace from heaven come in the form of a UFO from outer space? Four decades before the June 1947 sighting of flying saucers, Londons imaginative mind was soaring to extraterrestrial civilizations that could save us from ourselves on earth.

Because my method in Literature and Religion relies on a theology-of-culture, Im searching for different treasures than other London interpreters. Ive come to admire two generations of Jack London aficionados and scholars now who have fertilized and pruned this literary tradition. Ive benefitted greatly be meeting some of the Jack London Society sockdolagers such as Russ and Winnie Kingman, who produced A Pictorial Life of Jack London. Over the years Ive benefited greatly from devouring essays and books by Earle Labor, Jeanne Campbell Reesman, Clarice Stasz, Richard Rocco, Kenneth Brandt, and others. Ive begun reading the multi-volume behemoth intellectual biography of Jack London, Author Under Sail, by Jay Williams. There are more facts in Williams compilation that the Encyclopedia Britannica could dream of. And, of course, dont miss Jay Cravens new film, Jack Londons Martin Eden.

I am currently working on thisPatheosseries dealing with Jack Londons Wolf Troika. Here is what to expect.

Jack London 1: The Call of the Wild

Jack London 2: White Fang

Jack London 3: The Sea Wolf

Jack London 4: Lone Wolf Ethics

Jack London 5: Wolf Pack Ethics

Jack London 6: Wolf & Lamb Ethics

Jack London 7: The Red One

Literature and Religion: both writing and reading in search of divine grace.

Ted Peters pursues Public Theology at the intersection of science, religion, ethics, and public policy. Peters is an emeritus professor at the Graduate Theological Union, where he co-edits the journal, Theology and Science, on behalf of the Center for Theology and the Natural Sciences, in Berkeley, California, USA. His book, God in Cosmic History, traces the rise of the Axial religions 2500 years ago. He previously authored Playing God? Genetic Determinism and Human Freedom? (Routledge, 2nd ed., 2002) as well as Science, Theology, and Ethics (Ashgate 2003). He is editor of AI and IA: Utopia or Extinction? (ATF 2019). Along with Arvin Gouw and Brian Patrick Green, he co-edited the new book, Religious Transhumanism and Its Critics hot off the press (Roman and Littlefield/Lexington, 2022). Soon he will publish The Voice of Christian Public Theology (ATF 2022). See his website: TedsTimelyTake.com. His fictional spy thriller, Cyrus Twelve, follows the twists and turns of a transhumanist plot.

Original post:

Literature and Religion | Literature and Religion - Patheos

Posted in Superintelligence | Comments Off on Literature and Religion | Literature and Religion – Patheos

Why AI will never rule the world – Digital Trends

Posted: September 27, 2022 at 7:42 am

Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity for years, AI experts and non-experts alike have fretted (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans.

According to the theory, advances in AI specifically of the machine learning type thats able to take on new information and rewrite its code accordingly will eventually catch up with the wetware of the biological brain. In this interpretation of events, every AI advance from Jeopardy-winning IBM machines to the massive AI language model GPT-3 is taking humanity one step closer to an existential threat. Were literally building our soon-to-be-sentient successors.

Except that it will never happen. At least, according to the authors of the new book Why Machines Will Never Rule the World: Artificial Intelligence without Fear.

Co-authors University at Buffalo philosophy professor Barry Smith and Jobst Landgrebe, founder of German AI company Cognotekt argue that human intelligence wont be overtaken by an immortal dictator any time soon or ever. They told Digital Trends their reasons why.

Digital Trends (DT): How did this subject get on your radar?

Jobst Landgrebe (JL): Im a physician and biochemist by training. When I started my career, I did experiments that generated a lot of data. I started to study mathematics to be able to interpret these data, and saw how hard it is to model biological systems using mathematics. There was always this misfit between the mathematical methods and the biological data.

In my mid-thirties, I left academia and became a business consultant and entrepreneur working in artificial intelligence software systems. I was trying to build AI systems to mimic what human beings can do. I realized that I was running into the same problem that I had years before in biology.

Customers said to me, why dont you build chatbots? I said, because they wont work; we cannot model this type of system properly. That ultimately led to me writing this book.

Professor Barry Smith (BS): I thought it was a very interesting problem. I had already inklings of similar problems with AI, but I had never thought them through. Initially, we wrote a paper called Making artificial intelligence meaningful again. (This was in the Trump era.) It was about why neural networks fail for language modeling. Then we decided to expand the paper into a book exploring this subject more deeply.

DT: Your book expresses skepticism about the way that neural networks, which are crucial to modern deep learning, emulate the human brain. Theyre approximations, rather than accurate models of how the biological brain works. But do you accept the core premise that it is possible that, were we to understand the brain in granular enough detail, it could be artificially replicated and that this would give rise to intelligence or sentience?

JL: The name neural network is a complete misnomer. The neural networks that we have now, even the most sophisticated ones, have nothing to do with the way the brain works. The view that the brain is a set of interconnected nodes in the way that neural networks are built is completely nave.

If you look at the most primitive bacterial cell, we still dont understand even how it works. We understand some of its aspects, but we have no model of how it works let alone a neuron, which is much more complicated, or billions of neurons interconnected. I believe its scientifically impossible to understand how the brain works. We can only understand certain aspects and deal with these aspects. We dont have, and we will not get, a full understanding of how the brain works.

If we had a perfect understanding of how each molecule of the brain works, then we could probably replicate it. That would mean putting everything into mathematical equations. Then you could replicate this using a computer. The problem is just that we are unable to write down and create those equations.

BS: Many of the most interesting things in the world are happening at levels of granularity that we cannot approach. We just dont have the imaging equipment, and we probably never will have the imaging equipment, to capture most of whats going on at the very fine levels of the brain.

This means that we dont know, for instance, what is responsible for consciousness. There are, in fact, a series of quite interesting philosophical problems, which, according to the method that were following, will always be unsolvable and so we should just ignore them.

Another is the freedom of the will. We are very strongly in favor of the idea that human beings have a will; we can have intentions, goals, and so forth. But we dont know whether or not its a free will. That is an issue that has to do with the physics of the brain. As far as the evidence available to us is concerned, computers cant have a will.

DT: The subtitle of the book is artificial intelligence without fear. What is the specific fear that you refer to?

BS: That was provoked by the literature on the singularity, which I know youre familiar with. Nick Bostrom, David Chalmers, Elon Musk, and the like. When we talked with our colleagues in the real world, it became clear to us that there was indeed a certain fear among the populace that AI would eventually take over and change the world to the detriment of humans.

We have quite a lot in the book about the Bostrum-type arguments. The core argument against them is that if the machine cannot have a will, then it also cannot have an evil will. Without an evil will, theres nothing to be afraid of. Now, of course, we can still be afraid of machines, just as we can be afraid of guns.

But thats because the machines are being managed by people with evil ends. But then its not AI that is evil; its the people who build and program the AI

DT: Why does this notion of the singularity or artificial general intelligence interest people so much? Whether theyre scared by it or fascinated by it, theres something about this idea that resonates with people on a broad level.

JL: Theres this idea, started at the beginning of the 19th century and then declared by Nietzsche at the end of that century, that God is dead. Since the elites of our society are not Christians anymore, they needed a replacement. Max Stirner, who was, like Karl Marx, a pupil of Hegel, wrote a book about this, saying, I am my own god.

If you are God, you also want to be a creator. If you could create a superintelligence then you are like God. I think it has to do with the hyper-narcissistic tendencies in our culture. We dont talk about this in the book, but that explains to me why this idea is so attractive in our times in which there is no transcendent entity anymore to turn to.

DT: Interesting. So to follow that through, its the idea that the creation of AI or the aim to create AI is a narcissistic act. In that case, the concept that these creations would somehow become more powerful than we are is a nightmarish twist on that. Its the child killing the parent.

JL: A bit like that, yes.

DT: What for you would be the ultimate outcome of your book if everyone was convinced by your arguments? What would that mean for the future of AI development?

JL: Its a very good question. I can tell you exactly what I think would happen and will happen. I think in the midterm people will accept our arguments, and this will create better-applied mathematics.

Something that all great mathematicians and physicists are completely aware of was the limitations of what they could achieve mathematically. Because they are aware of this, they focus only on certain problems. If you are well aware of the limitations, then you go through the world and look for these problems and solve them. Thats how Einstein found the equations for Brownian motion; how he came up with his theories of relativity; how Planck solved blackbody radiation and thus initiated the quantum theory of matter. They had a good instinct for which problems are amenable to solutions with mathematics and which are not.

If people learn the message of our book, they will, we believe, be able to engineer better systems, because they will concentrate on what is truly feasible and stop wasting money and effort on something that cant be achieved.

BS: I think that some of the message is already getting through, not because of what we say but because of the experiences people have when they give large amounts of money to AI projects, and then the AI projects fail. I guess you know about the Joint Artificial Intelligence Center. I cant remember the exact sum, but I think it was something like $10 billion, which they gave to a famous contractor. In the end, they got nothing out of it. They canceled the contract.

(Editors note: JAIC, a subdivision of the United States Armed Forces, was intended to accelerate the delivery and adoption of AI to achieve mission impact at scale. It was folded into a larger unified organization, the Chief Digital and Artificial Intelligence Officer, with two other offices in June this year. JAIC ceased to exist as its own entity.)

DT: What do you think, in high-level terms, is the single most compelling argument that you make in the book?

BS: Every AI system is mathematical in nature. Because we cannot model consciousness, will, or intelligence mathematically, these cannot be emulated using machines. Therefore, machines will not become intelligent, let alone superintelligent.

JL: The structure of our brain only allows limited models of nature. In physics, we pick a subset of reality that fits to our mathematical modeling capabilities. That is how Newton, Maxwell, Einstein, or Schrdinger obtained their famous and beautiful models. But these can only describe or predict a small set of systems. Our best models are those which we use to engineer technology. We are unable to create a complete mathematical model of animate nature.

This interview has been edited for length and clarity.

Read the original:

Why AI will never rule the world - Digital Trends

Posted in Superintelligence | Comments Off on Why AI will never rule the world – Digital Trends

Why DART Is the Most Important Mission Ever Launched to Space – Gizmodo Australia

Posted: at 7:42 am

Later today, NASAs DART spacecraft will attempt to smash into a non-threatening asteroid. Its one of the most important things weve done in space if not the most important thing as this experiment to deflect a non-threatening asteroid could eventually result in a robust and effective planetary defence strategy for protecting life on Earth.

Weve landed humans on the Moon, transported rovers to Mars, and sent spacecraft to interstellar space, yet nothing compares to what might happen today when NASAs DART spacecraft smashes into Dimorphos, the smaller member of the Didymos binary asteroid system. Should all go according to plan, DART will smash directly into the 160-metre wide asteroid at 9:14 a.m. AEST (watch it live here) and change the rocks speed by around 1%. Thats a small orbital adjustment for an asteroid, but a giant leap for humankind.

NASAs DART mission, short for Double Asteroid Redirection Test, wont mean that we suddenly have a defence against threatening asteroids, but it could demonstrate a viable strategy for steering dangerous asteroids away from Earth. Itll be many more years before our competency in this area fully matures, but it all starts today with DART.

At a NASA press briefing on September 22, Lindley Johnson, manager of NASAs Near-Earth Object Observations program, described DART as one of the most important missions in space history but also in the history of humankind. I wholeheartedly agree. Missions to the Moon, Mars, and Pluto are important and monumental in their own right, but this proof-of-concept experiment could literally lead to defensive measures against an existential threat. So yeah, pretty damned important.

The dino-extinguishing asteroid measured somewhere between 10-15 kilometres wide and was travelling around 13 km per second when it struck Mexicos Yucatan Peninsula some 66 million years ago. The collision wiped out 75% of all species on Earth, including every animal larger than a cat. And of course, it ended the 165-million-year reign of non-avian dinosaurs.

Asteroids of that size dont come around very often, but thats not to say our planet is immune from plus-sized space rocks. Recent research estimates that somewhere between 16 and 32 asteroids larger than 5 km wide strike Earth once every billion years. Thats about once every 30 million to 65 million years. That said, impacts with asteroids wider than 10 km are exceptionally rare, happening once every 250 million to 500 million years.

Despite the infrequency of these events, its the kind of impact that would wipe out our civilisation. Developing the means to defend ourselves is obviously a smart idea, but the threat of colossal asteroids isnt what keeps me up at night its the smaller ones that are much more likely to strike our planet.

The Southwest Research Institute says our atmosphere shreds most incoming asteroids smaller than 50 metres in diameter. Objects that reach the surface, including objects smaller than 2 km in size, can cause tremendous damage at local scales, such as wiping out an entire city or unleashing a catastrophic tsunami. As Johnson explained during the DART press briefing, asteroids the size of Dimorphos strike Earth about once every 1,000 years. The solar system is home about a million asteroid larger than 49.99 m wide. An estimated 2,000 near-Earth objects (NEOs) are larger than 2 km wide. Impacting asteroids at sizes around 2 km will produce severe environmental damage on a global scale, according to SWRI. And as noted, impacting asteroids wider than 10 km can induce mass extinctions.

NASA categorizes asteroids as being potentially hazardous if theyre 30 to 50 metres in diameter or larger and their orbit around the Sun brings them to within 8 million km of Earths orbit. The space agency works to detect and track these objects with ground- and space-based telescopes, and its Centre for Near Earth Object Studies keeps track of all known NEOs to assess potential impact risks.

As it stands, no known threat to Earth exists within the next 100 years. NASA is currently monitoring 28,000 NEOs, but astronomers detect around 3,000 each year. Theres a chance that a newly detected asteroid is on a collision course with Earth, in which case a DART-like mitigation would come in handy. But as Johnson explained, this type of scenario and our ensuing response wont likely resemble the way theyre depicted in Hollywood films, in which we typically have only a few days or months to react. More plausibly, wed have a few years or decades to mount a response, he said.

To protect our planet against these threats, Johnson pointed to two key strategies: detection and mitigation. NASAs upcoming Near-Earth Object Surveyor, or NEO Surveyor, will certainly help with detection, with the asteroid-hunting spacecraft expected to launch in 2026. DART is the first of hopefully many mitigation experiments to develop a planetary shield against hazardous objects.

DART is a test of a kinetic impactor, but scientists could develop a host of other strategies, such as using gravity tractors or nuclear devices, the latter of which could be surprisingly effective at least according to simulations. The type of technique employed will largely depend on factors having to do with the specific asteroid in question, such as its size and density. Kinetic impactors, for example, may be useless against so-called rubble pile asteroids, which feature loose conglomerations of surface material. Dimorphos is not expected to be a rubble pile, but we wont know until DART smashes into it. As Johnson said, planetary defence is applied planetary science.

A case can be made that space experiments to help us live off-planet are more important than asteroid deflection schemes. Indeed, we currently lack the ability to live anywhere other than Earth, which limits our ability to save ourselves from emerging existential risks, such as run-away global warming, malign artificial superintelligence, or molecular nanotechnology run amok.

Yes, its important that we strive to become a multi-planet species and not have all our eggs in one basket, but thats going to take a very long time for us to realise, while the threat of an incoming asteroid could emerge at any time. Wed best be ready to meet that sort of threat, while steadily developing our capacity to live off-planet.

More conceptually, the DART experiment is our introduction to solar system re-engineering. Subtly altering the orbit of a tiny asteroid is a puny first step, but our civilisation is poised to engage in more impactful interventions, as we re-architect our immediate celestial surroundings to make it safer or find better ways of exploiting all that our solar system has to offer. These more meaningful interventions, in addition to removing asteroid threats, could involve the geoengineering of planets and moons or even tweaking the Sun to make it last longer.

But Im getting a bit ahead of myself. First things first and fingers firmly crossed that DART will successfully smash into its unsuspecting target later today.

Link:

Why DART Is the Most Important Mission Ever Launched to Space - Gizmodo Australia

Posted in Superintelligence | Comments Off on Why DART Is the Most Important Mission Ever Launched to Space – Gizmodo Australia

Page 4«..3456..1020..»