
Category Archives: Superintelligence

‘Sweet Home Alabama’ turns 20: See how the cast has aged – Wonderwall

Posted: September 27, 2022 at 7:42 am

By Neia Balao 2:02am PDT, Sep 27, 2022

You can take the girl out of the honky tonk, but you can't take the honky tonk out of the girl! Believe it or not, it's been two decades since we were first introduced to and fell in love with Reese Witherspoon's adorable Southern belle-turned-New York City socialite Melanie Smooter (er, Carmichael). To mark the romantic comedy's 20th anniversary on Sept. 27, 2022, Wonderwall.com is checking in on Reese and the film's other stars to see how they've aged and what they're up to all these years later!

Keep reading for more


By the time she starred in "Sweet Home Alabama," Hollywood darling Reese Witherspoon had already appeared in two buzzy and now-iconic films: "Cruel Intentions" and "Legally Blonde." In 2005, at 29, Reese achieved a career milestone when she earned the Academy Award for best actress for her performance as June Carter Cash in "Walk the Line." Under her production company Hello Sunshine, Reese has shifted her focus to television, having starred on and produced HBO's "Big Little Lies" (for which she earned an Emmy for outstanding limited series), Hulu's "Little Fires Everywhere" and Apple TV+'s "The Morning Show," on which she currently stars with Jennifer Aniston. After divorcing "Cruel Intentions" co-star Ryan Phillippe, with whom she has two kids who are now adults, Reese found love with talent agent Jim Toth. They married in 2011 and welcomed son Tennessee in 2012.


Josh Lucas played Jake Perry, Melanie's big first love and estranged husband who never left Pigeon Creek, Alabama.

Josh Lucas landed roles in a slew of flicks including "Hulk," "Poseidon" and "Life as We Know It." More recently, he appeared in the Oscar-winning sports drama "Ford v Ferrari" and "The Forever Purge." He's found success on the small screen too, including a stint on Paramount's neo-Western drama "Yellowstone." Josh was married to Jessica Ciencin Henriquez from 2012 to 2014. They share a son, Noah.

Patrick Dempsey portrayed Andrew Hennings, Melanie's super-handsome (and super-dreamy!) fiancé in New York City.

Many of us know where Patrick Dempsey ended up: He played McDreamy on "Grey's Anatomy" for 10 years. In addition to starring on the hit Shonda Rhimes series (earning two SAG Awards and some Golden Globe nominations along the way), Patrick also had leading-man roles in films like "Made of Honor," "Valentine's Day," "Enchanted" and "Bridget Jones's Baby." Patrick, an auto racing enthusiast who's competed in a few races over the years, has been married to makeup artist Jillian Fink, with whom he shares three kids, since 1999.

Candice Bergen played Kate Hennings, the mayor of New York City who's Andrew's mother. She's extremely suspicious of Melanie and her intentions with her son.

Candice Bergen is no stranger to fame and critical acclaim! In fact, the actress had already earned Oscar and BAFTA nominations and won Golden Globes and Emmys long before she appeared in "Sweet Home Alabama." After her portrayal of the conniving New York City politician, the actress appeared on ABC's "Boston Legal" and a reboot of her hit series "Murphy Brown" as well as films like "Bride Wars," "The Meyerowitz Stories," "Book Club" and, more recently, "Let Them All Talk."

Nathan Lee Graham portrayed glamorous and sartorially savvy Frederick Montana, Melanie's fashion mentor and close friend. Rhona Mitra played one of Melanie's best friends in New York City, model Tabatha Wadmore-Smith.

Three years after "Sweet Home Alabama" came out, Nathan Lee Graham appeared in another great romantic comedy: "Hitch." In addition to starring on the HBO series "The Comeback," Nathan, who's also a Broadway actor and Grammy winner, landed a role on the short-lived "Riverdale" spinoff series "Katy Keene," had guest-starring stints on "Scrubs" and "Law & Order: Special Victims Unit" and reprised his "Zoolander" role in the 2016 sequel.

Rhona Mitra has mainly found success on the small screen, landing recurring roles on "The Practice," "Boston Legal," "Nip/Tuck" and "The Last Ship." She appeared in 2009's "Underworld: Rise of the Lycans," the film franchise's third installment. More recently, the British actress-model played Mercy Graves on The CW's "Supergirl."

Jean Smart took on the role of Jake's affectionate and caring mother, Stella Kay Perry.

What's Jean Smart up to these days? A lot, actually! The Tony-nominated five-time Emmy winner went on to appear on several TV shows like "24," "Samantha Who?," "Dirty John," "Watchmen" and "Mare of Easttown" plus a slew of films including "Garden State," "Life As We Know It," "A Simple Favor" and, more recently, "Superintelligence." In 2022, she took home the Emmy, Golden Globe and SAG Awards for best lead actress in a comedy series for her performance on "Hacks."

Ethan Embry played one of Melanie's closest childhood friends, Bobby Ray.

Ethan Embry already had fan-favorite '90s flicks "Empire Records," "Can't Hardly Wait" and "That Thing You Do!" under his belt by the time "Sweet Home Alabama" hit theaters. The California native has gone on to appear on the television shows "Brotherhood," "Once Upon a Time," "Sneaky Pete," "Grace and Frankie" and "Stargirl." In 2015, he remarried his second wife, Sunny Mabrey.

Mary Kay Place played Melanie's micromanaging mother, Pearl Smooter.

Before "Sweet Home Alabama," Mary Kay Place was best known for her work in films like "Being John Malkovich" and "Girl, Interrupted." After appearing in the Reese Witherspoon-led flick, Mary Kay starred on three buzzy HBO series "Big Love," "Bored to Death" and "Getting On" as well as shows like "Lady Dynamite," "Imposters" and "9-1-1: Lonestar." She's also continued to act in movies, popping up in "The Hollars," "Diane," "The Prom" and "Music" in recent years.

Fred Ward played Earl Smooter, Melanie's soft-spoken father.

Fred Ward, who was already an established actor by the time he landed his "Sweet Home Alabama" role, appeared in a handful of lesser-known films before going on a brief acting hiatus in 2006. He made his return to the small screen with appearances on "ER" and "Grey's Anatomy." His last credited role came in 2015 on an episode of "True Detective." Fred died at 79 in May 2022.



Research Shows that Superintelligent AI is Impossible to be Controlled – Analytics India Magazine

Posted: September 24, 2022 at 8:52 pm

A group of researchers has come to the terrifying conclusion that containing superintelligent AI may not be possible. They claim that controlling such an AI would lie beyond human comprehension.

In a paper titled "Superintelligence Cannot be Contained: Lessons from Computability Theory," published in the Journal of Artificial Intelligence Research, the researchers argue that total containment would, in principle, be impossible due to fundamental limits inherent to computing. The paper further claims that it is mathematically impossible for humans to calculate an AI's plans, thereby making it uncontainable.


The authors note that implementing a rule for artificial intelligence to cause no harm to humans would not be an option if humans cannot predict the scenarios that an AI may come up with. They believe that once a computer system is working on an independent level, humans can no longer set limits. The team's reasoning was inspired in part by Alan Turing's formulation of the halting problem in 1936: the problem of knowing whether a computer programme will reach a conclusion and halt, or simply loop forever trying to find one.
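The containment argument inherits the same self-reference that makes the halting problem undecidable. Here is a minimal sketch of Turing's diagonalization in Python; the function names are illustrative stand-ins, not code from the paper:

    def halts(program, data) -> bool:
        """Hypothetical oracle: True iff program(data) eventually halts.
        Assumed to exist only for the sake of contradiction."""
        raise NotImplementedError

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:  # predicted to halt -> loop forever
                pass
        # predicted to loop -> halt immediately

    # Feeding `diagonal` to itself forces a contradiction: halts(diagonal,
    # diagonal) can be neither True nor False, so no total, always-correct
    # oracle exists. A containment algorithm that must decide "will this
    # AI's plan ever harm humans?" embeds such a decision problem, which
    # is the sense in which it, too, is undecidable.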

An excerpt of the paper reads: "This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."

"In effect, this makes the containment algorithm unusable," said computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany. In other words, machines could perform important tasks independently without their programmers fully understanding how they learned them.

The researchers have, however, suggested alternatives, such as teaching AI some ethics. Limiting the potential of a superintelligence could prevent it from annihilating the world, even if it remains unpredictable.



Eight best books on AI ethics and bias – INDIAai

Posted: September 20, 2022 at 8:49 am

Ethics comprises the moral guidelines that help us distinguish between right and wrong. AI ethics is a set of rules that advise how to build AI and what it should do. People have all kinds of cognitive biases, such as recency and confirmation bias, and these biases appear in our actions and, as a result, in our data.

Several books focus on ethics and bias in AI so people can learn more about them and understand AI better.

AI Ethics - Mark Coeckelbergh

Mark Coeckelbergh discusses influential narratives about AI, such as transhumanism and the technological singularity. He looks at critical philosophical debates, such as questions about the fundamental differences between humans and machines and arguments about the moral status of AI. He covers the different ways AI can be used, focusing on machine learning and data science, and gives an overview of critical ethical issues, such as privacy concerns, responsibility and the delegation of decision-making, transparency, and bias at all stages of the data science process. He also considers how work will change in an AI economy. Lastly, he looks at various policy ideas and discusses the problems policymakers face. He argues for ethical practices that include a vision of the good life and the good society and that build values into the design process.

This book in the Essential Knowledge series from MIT Press summarises these issues. AI Ethics, written by a tech philosopher, goes beyond the usual hype and nightmare scenarios to answer fundamental questions.

Heartificial Intelligence: Embracing Our Humanity to Maximise Machines (2016) - John C Havens

The book draws on economics, new technologies, and positive psychology to offer the first values-driven approach to algorithmic living: a definitive plan to help people live in the present and define their future in a good way. Each chapter starts with a made-up story to help readers imagine how they would react in different AI situations. The book paints a vivid picture of what our lives might be like in a dystopia where robots and corporations rule, or in a utopia where people use technology to improve their natural skills and become a long-lived, super-smart, and kind species.

Life 3.0: Being Human in the Age of Artificial Intelligence - Max Tegmark

The book starts by imagining a world where AI is so intelligent that it has surpassed human intelligence and is everywhere. Then, Tegmark talks about the different stages of human life from the beginning. He calls the biological origins of humans "Life 1.0," cultural changes "Life 2.0," and the technological age of humans "Life 3.0." The book is mostly about "Life 3.0" and new technologies like artificial general intelligence, which may be able to learn and change its hardware and internal structure in the future.

Our Final Invention: Artificial Intelligence and the End of the Human Era - James Barrat

James Barrat weaves together explanations of AI ideas, the history of AI, and interviews with well-known AI researchers like Eliezer Yudkowsky and Ray Kurzweil. The book describes how artificial general intelligence could improve itself repeatedly to become an artificial superintelligence. Furthermore, Barrat uses a warning tone throughout the book, focusing on the dangers that artificial superintelligence poses to human life. Barrat stresses how hard it would be to control or even predict the actions of something that could become many times smarter than the most intelligent humans.

Artificial Unintelligence: How Computers Misunderstand the World - Meredith Broussard

This book helps us understand how technology works and what its limits are, and why we shouldn't assume that computers are always the right tool for the job. The writer does a great job of raising the issues of algorithmic bias, accountability, and representation in a tech field where men are the majority. The book gives a detailed look at AI's social, legal, and cultural effects on the public, along with a call to design and use technologies that help everyone.

Moral Machines: Teaching Robots Right from Wrong - Wendell Wallach and Colin Allen

The book's authors argue that moral judgment must be programmed into robots to ensure our safety. The authors say that even though full moral agency for machines is still a ways off, it is already necessary to develop a functional morality in which artificial moral agents have some essential ethical sensitivity. They do this by taking a quick tour of philosophical ethics and AI. However, the conventional ethical theories appear insufficient, necessitating the development of more socially conscious and exciting robots. Finally, the authors demonstrate that efforts are underway to create machines that can distinguish between right and wrong.

Superintelligence: Paths, Dangers, Strategies - Nick Bostrom

Nick Bostrom, a Swedish philosopher at the University of Oxford, wrote the 2014 book Superintelligence: Paths, Dangers, Strategies. It says that if machine brains become more intelligent than human brains, this new superintelligence could replace humans as the most intelligent species on Earth. Moreover, smart machines could improve their capabilities faster than human computer scientists could, which could be a disaster for humans on a fundamental level.

Furthermore, no one knows whether AI on par with humans will arrive in a few years, later this century, or not until the 22nd century. No matter how long it takes, once a machine has human-level intelligence, a system "superintelligent" in almost all domains of interest would come along surprisingly quickly, if not immediately. A superintelligence like this would be hard to control or stop.

Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI - Reid Blackman

Reid Blackman tells you everything you need to know about AI ethics as a risk management challenge in his book Ethical Machines. He will help you build, buy, and use AI ethically and safely, protecting your company's reputation, legal standing, and compliance with regulations, and he will help you do this at scale. Don't worry, though: the book's purpose is to help you get work done, not to make you ponder deep, existential questions about ethics and technology. Blackman's writing is clear and accessible, making a complicated and often misunderstood subject like ethics easy to grasp.

Most importantly, Blackman makes ethics doable by addressing AI's three most significant ethical risks (bias, explainability, and privacy) and telling you what to do (and what not to do) to deal with them. Ethical Machines is the only book you need to ensure your AI helps your company reach its goals instead of hurting them. It shows you how to write a strong statement of AI ethics principles and build teams that can evaluate ethical risks well.



AI Art Is Here and the World Is Already Different – New York Magazine

Posted: at 8:49 am



Artificial-intelligence experts are excited about the progress of the past few years. You can tell! They've been telling reporters things like "Everything's in bloom," "Billions of lives will be affected," and "I know a person when I talk to it ... it doesn't matter whether they have a brain made of meat in their head."

We don't have to take their word for it, though. Recently, AI-powered tools have been making themselves known directly to the public, flooding our social feeds with bizarre and shocking and often very funny machine-generated content. OpenAI's GPT-3 took simple text prompts (to write a news article about AI, or to imagine a rose ceremony from "The Bachelor" in Middle English) and produced convincing results.

Deepfakes graduated from a looming threat to something an enterprising teenager can put together for a TikTok, and chatbots are occasionally sending their creators into crisis.

More widespread, and probably most evocative of a creative artificial intelligence, is the new crop of image-creation tools, including DALL-E, Imagen, Craiyon, and Midjourney, which all do versions of the same thing. You ask them to render something. Then, with models trained on vast sets of images gathered from around the web and elsewhere, they try: Bart Simpson in the style of Soviet statuary; goldendoodle megafauna in the streets of Chelsea; a spaghetti dinner in hell; a logo for a carpet-cleaning company, blue and red, round; the meaning of life.

Through a million posts and memes, these tools have become the new face of AI.

This flood of machine-generated media has already altered the discourse around AI (for the better, probably, though it couldn't have been much worse). In contrast with the glib intra-VC debate about avoiding human enslavement by a future superintelligence, discussions about image-generation technology have been driven by users and artists and focus on labor, intellectual property, AI bias, and the ethics of artistic borrowing and reproduction. Early controversies have cut to the chase: Is the guy who entered generated art into a fine-art contest in Colorado (and won!) an asshole? Artists and designers who already feel underappreciated or exploited in their industries, from concept artists in gaming, film and TV to freelance logo designers, are understandably concerned about automation. Some art communities and marketplaces have banned AI-generated images entirely.

I've spent time with the current versions of these tools, and they're enormously fun. They also knock you off balance. Being able to generate images that look like photos, paintings, drawings or 3-D models doesn't make someone an artist, or good at painting, but it does make them able to create, in material terms, some approximation of what some artists produce, instantly and on the cheap. Knowing you can manifest whatever you're thinking about at a given moment also gestures at a strange, bespoke mode of digital communication, where even private conversations and fleeting ideas might as well be interpreted and illustrated. Why just describe things to people when you can ask a machine to show them?

Still, most discussions about AI media feel speculative. Google's Imagen and Parti are still in testing, while apps like Craiyon are fun but degraded tech demos. OpenAI is beginning the process of turning DALL-E 2 into a mainstream service, recently inviting a million users from its wait list, while the release of a powerful open-source model, Stable Diffusion, means lots more tools are coming.

Then there's Midjourney, a commercial product that has been open to the masses for months, through which users have been confronting, and answering, some more practical questions about AI-media generation. Specifically: What do people actually want from it, given the chance to ask?

Midjourney is unlike its peers in a few ways. It's not part of or affiliated with a major tech company or with a broader AI project. It hasn't raised venture capital and has just ten employees. Users can pay anywhere from $10 a month to $600 a year to generate more images, get access to new features, or acquire licensing rights, and thousands of people already have.

It's also basically just a chat room; in fact, within a few months of its public launch, it became the largest on all of Discord, with nearly 2 million members. (For scale, this is more than twice the size of the official servers for Fortnite and Minecraft.) Users summon images by prompting a bot, which attempts to fulfill their requests in a range of public rooms (#newbies, #show-and-tell, #daily-theme, etc.) or, for paid subscribers, in private direct messages. This bot passes along requests to Midjourney's software (the AI), which depends on servers rented from an undisclosed major cloud provider, according to founder David Holz. Requests are "effectively thrown into a giant swirling whirlpool of 10,000 graphics cards," Holz said, after which users gradually watch them take shape, gaining sharpness but also changing form as Midjourney refines its work.

This hints at an externality beyond the worlds of art and design. "Almost all the money goes to paying for those machines," Holz said. New users are given a small number of free image generations before they're cut off and asked to pay; each request initiates a massive computational task, which means using a lot of electricity.

High compute costs, which are largely energy costs, are why other services have been cautious about adding new users. Midjourney made a choice to just pass that expense along to users. "If the goal is for this to be available broadly, the cloud needs to be a thousand times larger," Holz said.

A generation request to Midjourney by the author and the resulting image.

Setting aside, for now, the prospect of an AI-joke-image-induced energy-and-climate crisis, Midjourney's Discord is an interesting place to lurk. Users engineer prompts in broken and then fluent Midjourney-ese, ranging from simple to incomprehensible; talk with one another about AI art; and ask for advice or critique. Before the crypto crash, I watched users crank out low-budget NFT collections, with prompts like "Iron Man in the style of Hayao Miyazaki, trading card." Early on, especially, there were demographic tells. There were lots of half-baked joke prompts about Walter White, video-game characters rendered in incongruous artistic styles, and, despite Midjourney's 1,000-plus banned-word list and active team of moderators, plenty of somewhat-to-very horny attempts to summon fantasy women who look like fandom-adjacent celebrities. Now, with a few hundred thousand people logged in at a time, it's huge and disorienting.

The public parts of Midjourney's Discord most resemble an industrial-scale automated DeviantArt, from which observers have suggested it has learned some common digital-art sensibilities. (DeviantArt has been flooded with Midjourney art, and some of its users are not happy.) Holz said that, absent more specific instructions, Midjourney has settled on some default styles, which he describes as imaginative, surreal, sublime, and whimsical. (In contrast, DALL-E 2 could be said to favor photorealism.) More specifically, he said, it likes to use teal and orange. While Midjourney can be prompted to create images in the styles of dozens of artists living and dead, some of whom have publicly objected to the prospect, Holz said that it wasn't deliberately trained on any of them and that some have been pleased to find themselves in the model. "If anything, we tend to have artists ask to copy them better."

Quite often, though, you'll encounter someone gradually, painstakingly refining a specific prompt, really working on something, and because you're in Discord, you can just ask them what they're doing. User Pluckywood, real name Brian Pluckebaum, works in automotive-semiconductor marketing and designs board games on the side. "One of the biggest gaps from the design of a board game to releasing the board game is art," he said. "Previously, you were stuck with working through a publisher because an individual can't hire all these artists." To generate the 600 to 1,000 unique pieces of art he needs for the new game he is working on (box art, character art, rule-book art, standee art, card art, card back, board art, lore-book art) he sends Midjourney prompts like this:

character design, Alluring and beautiful female vampire, her hands are claws and shes licking one claw, gothic, cinematic, epic scene, volumetric lighting, extremely detailed, intricate details, painting by Jim Lee, low angle shot testp

Midjourney sends her back in a style that is somehow both anonymous and sort of recognizable, good enough to sustain a long glance but, as is still common with most generative-image tools, with confusing hands. "I'm not approaching publishers with a white-text blank game," Pluckebaum said. If they're interested, they can hire artists to finish the job or clean things up; if they're not, well, now he can self-publish.

Another Midjourney user, Gila von Meissner, is a graphic designer and children's-book author-illustrator from the boondocks of north Germany. Her agent is currently shopping around a book that combines generated images with her own art and characters. Like Pluckebaum, she brought up the balance of power with publishers. "Picture books pay peanuts," she said. "Most illustrators struggle financially. Why not make the work easier and faster? It's my character, my edits on the AI backgrounds, my voice, and my story." A process that took months now takes a week, she said. "Does that make it less original?"

Children's book author Gila von Meissner is experimenting with using generative AI in her creative process. Illustration: Gila von Meissner

User MoeHong, a graphic designer and typographer for the state of California, has been using Midjourney to make what he called generic illustrations (backgrounds, people at work, kids at school, etc.) for government websites, pamphlets, and literature: "I get some of the benefits of using custom art (not that we have a budget for commissions!) without the paying-an-artist part." He said he has mostly replaced stock art, but he's not entirely comfortable with the situation. "I have a number of friends who are commercial illustrators, and I've been very careful not to show them what I've made," he said. He's convinced that tools like this could eventually put people in his trade out of work. "But I'm already in my 50s," he said, "and I hope I'll be gone by the time that happens."

The prize-winning art in a Colorado contest was generated by AI. Photo: John Herrman

Variations of this prediction are common from different sides of the commission. An executive at an Australian advertising agency, for example, told me that his firm is looking into AI art as a solution for "broader creative options without the need for large budgets in marketing campaigns, particularly for our global clients." Initially, the executive said, AI imagery put clients on the back foot, but they've come around. Midjourney images are becoming harder for clients to distinguish from human-generated art, and then there's the price. "Being able to create infinite, realistic imagery time and time again has become a key selling point, especially when traditional production would have an enormous cost attached," the executive said.

Bruno Da Silva is an artist and design director at R/GA, a marketing-and-design agency with thousands of employees around the world. He took an initial interest in Midjourney for his own side projects and quickly found uses at work: "First thing after I got an invite, I showed [Midjourney art] around R/GA, and my boss was like, What the fuck is that?"

It quickly joined his workflow. "For me, when I'm going to sell an idea, it's important to sell the whole thing: the visual, the typeface, the colors. The client needs to look and see what's in my head. If that means hiring a photographer or an illustrator to make something really special in a few days or a week, that's going to be impossible," he said. He showed me concept art that he'd shared with big corporate clients during pitches (to a mattress company, a financial firm, an arm of a tech company too big to describe without identifying) that had been inspired or created in part with Midjourney.

Image generators, Da Silva said, are especially effective at shaking loose ideas in the early stages of a project, when many designers are otherwise scrounging for references and inspiration on Google Images, Shutterstock, Getty Images, or Pinterest, or from one another's work.

These shallow shared references have led to a situation in which "everything looks the same," Da Silva said. "In design history, people used to work really hard to make something new and unique, and we're losing that." This could double as a critique of art generators, which have been trained on some of the same sources and design work, but Da Silva doesn't see it that way. "We're already working as computers, really fast. It's the same process, same brief, same deadline," he said. "Now we're using another computer to get out of that place."

"I think our industry is going to change a lot in the next three years," he said.

I've been using and paying for Midjourney since June. According to Holz, I fit the most common user profile: people who are experimenting, testing limits, and making stuff for themselves, their families, or their friends. I burned through my free generations within a few hours, spamming images into group chats and work Slacks and email threads.

A vast majority of the images I've generated have been jokes, most for friends, others between me and the bot. It's fun, for a while, to interrupt a chat about which mousetrap to buy by asking a supercomputer for a horrific rendering of a man stuck in a bed of glue, or to respond to a shared Zillow link with a rendering of a McMansion Pyramid of Giza. When a friend who had been experimenting with DALL-E 2 described the tool as "a place to dispose of intrusive thoughts," I nodded, scrolling back in my Midjourney window to a pretty convincing take on Joe Biden tanning on the beach, drawn by R. Crumb.

I still use Midjourney this way, but the novelty has worn off, in no small part because the renderings have just gotten better: less strange and beautiful, more competent and plausible. The bit has also gotten stale, and I've mapped the narrow boundaries of my artistic imagination. A lot of the AI art that has gone viral was generated from prompts that produced just the right kind of result: close enough to be startling but still somehow off, through a misinterpreted word, a strange artifact that turned the image macabre, or a fully haywire conceptual interpolation. Surprising errors are AI imagery's best approximation of genuine creativity, or at least its most joyful. TikTok's primitive take on an image generator, which it released last month, embraces this.

When AI art fails a little, as it has consistently in this early phase, it's funny. When it simply succeeds, as it will more and more convincingly in the months and years ahead, it's just, well, automation. There is a long and growing list of things people can command into existence with their phones, through contested processes kept hidden from view, at a bargain price: trivia, meals, cars, labor. The new AI companies ask, Why not art?




Would "artificial superintelligence" lead to the end of life on Earth …

Posted: September 14, 2022 at 12:47 am

The activist group Extinction Rebellion has been remarkably successful at raising public awareness of the ecological and climate crises, especially given that it was established only in 2018.

The dreadful truth, however, is that climate change isn't the only global catastrophe that humanity confronts this century. Synthetic biology could make it possible to create designer pathogens far more lethal than COVID-19, nuclear weapons continue to cast a dark shadow on global civilization and advanced nanotechnology could trigger arms races, destabilize societies and "enable powerful new types of weaponry."

Yet another serious threat comes from artificial intelligence, or AI. In the near-term, AI systems like those sold by IBM, Microsoft, Amazon and other tech giants could exacerbate inequality due to gender and racial biases. According to a paper co-authored by Timnit Gebru, the former Google employee who was fired "after criticizing its approach to minority hiring and the biases built into today's artificial intelligence systems," facial recognition software is "less accurate at identifying women and people of color, which means its use can end up discriminating against them." These are very real problems affecting large groups of people that require urgent attention.

But there are longer-term risks as well, arising from the possibility of algorithms that exceed human levels of general intelligence. An artificial superintelligence, or ASI, would by definition be smarter than any possible human being in every cognitive domain of interest, such as abstract reasoning, working memory and processing speed. Although there is no obvious leap from current "deep-learning" algorithms to ASI, there is a good case to make that the creation of an ASI is not a matter of if but when: Sooner or later, scientists will figure out how to build an ASI, or figure out how to build an AI system that can build an ASI, perhaps by modifying its own code.

When we do this, it will be the most significant event in human history: Suddenly, for the first time, humanity will be joined by a problem-solving agent more clever than itself. What would happen? Would paradise ensue? Or would the ASI promptly destroy us?


I believe we should take the arguments for why "a plausible default outcome of the creation of machine superintelligence is existential catastrophe" very seriously. Even if the probability of such arguments being correct is low, a risk is standardly defined as the probability of an event multiplied by its consequences. And since the consequences of total annihilation would be enormous, even a low probability (multiplied by this consequence) would yield a sky-high risk.
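As a toy illustration of that multiplication (the probability and the casualty figure below are invented for the example, not estimates from this article):

    # risk = probability of the event x magnitude of its consequences
    p_catastrophe = 1e-4                 # an assumed "low" probability
    consequence = 8_000_000_000          # roughly everyone alive, in lives
    expected_loss = p_catastrophe * consequence
    print(f"{expected_loss:,.0f} expected lives lost")  # 800,000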

Even more, the very same arguments for why an ASI could cause the extinction of our species also lead to the conclusion that it could obliterate the entire biosphere. Fundamentally, the risk posed by artificial superintelligence is an environmental risk. It is not just an issue of whether humanity survives or not, but an environmental issue that concerns all earthly life, which is why I have been calling for an Extinction Rebellion-like movement to form around the dangers of ASI, a threat that, like climate change, could potentially harm every creature on the planet.

Although no one knows for sure when we will succeed in building an ASI, one survey of experts found a 50 percent likelihood of "human-level machine intelligence" by 2040 and a 90 percent likelihood by 2075. A human-level machine intelligence, or artificial general intelligence, abbreviated AGI, is the stepping-stone to ASI, and the step from one to the other might be very small, since any sufficiently intelligent system will quickly realize that improving its own problem-solving abilities will help it achieve a wide range of "final goals," or the goals that it ultimately "wants" to achieve (in the same sense that spellcheck "wants" to correct misspelled words).

Furthermore, one study from 2020 reports that at least 72 research projects around the world are currently, and explicitly, working to create an AGI. Some of these projects are just as explicit that they do not take seriously the potential threats posed by ASI. For example, a company called 2AI, which runs the Victor project, writes on its website:

There is a lot of talk lately about how dangerous it would be to unleash real AI on the world. A program that thinks for itself might become hell-bent on self preservation, and in its wisdom may conclude that the best way to save itself is to destroy civilization as we know it. Will it flood the internet with viruses and erase our data? Will it crash global financial markets and empty our bank accounts? Will it create robots that enslave all of humanity? Will it trigger global thermonuclear war? We think this is all crazy talk.

But is it crazy talk? In my view, the answer is no. The arguments for why ASI could devastate the biosphere and destroy humanity, which are primarily philosophical, are complicated, with many moving parts. But the central conclusion is that by far the greatest concern is the unintended consequences of the ASI striving to achieve its final goals. Many technologies have unintended consequences, and indeed anthropogenic climate change is an unintended consequence of large numbers of people burning fossil fuels. (Initially, the transition from using horses to automobiles powered by internal combustion engines was hailed as a solution to the problem of urban pollution.)


An ASI would be the most powerful technology ever created, and for this reason we should expect its potential unintended consequences to be even more disruptive than those of past technologies. Furthermore, unlike all past technologies, the ASI would be a fully autonomous agent in its own right, whose actions are determined by a superhuman capacity to secure effective means to its ends, along with an ability to process information many orders of magnitude faster than we can.

Consider that an ASI "thinking" one million times faster than us would see the world unfold in super-duper-slow motion. A single minute for us would correspond to roughly two years for it. To put this in perspective, it takes the average U.S. student 8.2 years to earn a PhD, which amounts to only 4.3 minutes in ASI-time. Over the period it takes a human to get a PhD, the ASI could have earned roughly 1,002,306 PhDs.
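A quick back-of-the-envelope script reproduces these figures. The one-million speedup is the article's assumption, and the PhD count works out to exactly the speedup factor (the article's 1,002,306 is the same ratio computed with rounded intermediate figures):

    SPEEDUP = 1_000_000                       # assumed "thinking speed" ratio
    MIN_PER_YEAR = 365.25 * 24 * 60           # ~525,960 minutes in a year

    # One human minute, experienced at a millionfold subjective rate:
    print(SPEEDUP / MIN_PER_YEAR)             # ~1.9 -> "roughly two years"

    phd_years = 8.2                           # average U.S. time to a PhD
    phd_minutes = phd_years * MIN_PER_YEAR
    print(phd_minutes / SPEEDUP)              # ~4.3 "minutes in ASI-time"

    # PhDs the ASI fits into one human PhD: total minutes divided by
    # minutes per ASI-PhD, which is algebraically just SPEEDUP.
    print(phd_minutes / (phd_minutes / SPEEDUP))  # 1,000,000.0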

This is why the idea that we could simply unplug a rogue ASI if it were to behave in unexpected ways is unconvincing: The time it would take to reach for the plug would give the ASI, with its superior ability to problem-solve, ages to figure out how to prevent us from turning it off. Perhaps it quickly connects to the internet, or shuffles around some electrons in its hardware to influence technologies in the vicinity. Who knows? Perhaps we aren't even smart enough to figure out all the ways it might stop us from shutting it down.

But why would it want to stop us from doing this? The idea is simple: If you give an algorithm some task (a final goal) and if that algorithm has general intelligence, as we do, it will, after a moment's reflection, realize that one way it could fail to achieve its goal is by being shut down. Self-preservation, then, is a predictable subgoal that sufficiently intelligent systems will automatically end up with, simply by reasoning through the ways they could fail.


What, then, if we are unable to stop it? Imagine that we give the ASI the single goal of establishing world peace. What might it do? Perhaps it would immediately launch all the nuclear weapons in the world to destroy the entire biosphere, reasoning (logically, you'd have to say) that if there is no more biosphere, there will be no more humans, and if there are no more humans, then there can be no more war. That is precisely what we told it to do, even though what we intended it to do was otherwise.

Fortunately, there's an easy fix: Simply add in a restriction to the ASI's goal system that says, "Don't establish world peace by obliterating all life on the planet." Now what would it do? Well, how else might a literal-minded agent bring about world peace? Maybe it would place every human being in suspended animation, or lobotomize us all, or use invasive mind-control technologies to control our behaviors.

Again, there's an easy fix: Simply add in more restrictions to the ASI's goal system. The point of this exercise, however, is that by using our merely human-level capacities, many of us can poke holes in just about any proposed set of restrictions, each time resulting in more and more restrictions having to be added. And we can keep this going indefinitely, with no end in sight.

Hence, given the seeming interminability of this exercise, the disheartening question arises: How can we ever be sure that we've come up with a complete, exhaustive list of goals and restrictions that guarantee the ASI won't inadvertently do something that destroys us and the environment? The ASI thinks a million times faster than us. It could quickly gain access to, and control over, the economy, laboratory equipment and military technologies. And for any final goal that we give it, the ASI will automatically come to value self-preservation as a crucial instrumental subgoal.


Yet self-preservation isn't the only subgoal; so is resource acquisition. To do stuff, to make things happen, one needs resources and usually, the more resources one has, the better. The problem is that without giving the ASI all the right restrictions, there are a seemingly endless number of ways it might acquire resources that would cause us, or our fellow creatures, harm. Program it to cure cancer: It immediately converts the entire planet into cancer research labs. Program it to solve the Riemann hypothesis: It immediately converts the entire planet into a giant computer. Program it to maximize the number of paperclips in the universe (an intentionally silly example): It immediately converts everything it can into paperclips, launches spaceships, builds factories on other planets and perhaps, in the process, if there are other life forms in the universe, destroys those creatures, too.

It cannot be overemphasized: an ASI would be an extremely powerful technology. And power equals danger. Although Elon Musk is very often wrong, he was right when he tweeted that advanced artificial intelligence could be "more dangerous than nukes." The dangers posed by this technology, though, would not be limited to humanity; they would imperil the whole environment.

This is why we need an Extinction Rebellion-like movement focused on ASI, right now: in the streets, lobbying the government, sounding the alarm. That's why I am in the process of launching the Campaign Against Advanced AI, which will strive to educate the public about the immense risks of ASI and convince our political leaders that they need to take this threat, alongside climate change, very seriously.

A movement of this sort could embrace one of two strategies. A "weak" strategy would be to convince governments (all governments around the world) to impose strict regulations on research projects working to create AGI. Companies like 2AI should not be permitted to take an insouciant attitude toward a potentially transformative technology like ASI.

A "strong" strategy would aim to halt all ongoing research aimed at creating AGI. In his 2000 article "Why the Future Doesn't Need Us," Bill Joy, cofounder of Sun Microsystems, argued that some domains of scientific knowledge are simply too dangerous for us to explore. Hence, he contended, we should impose moratoriums on these fields, doing everything we can to prevent the relevant knowledge from being obtained. Not all knowledge is good. Some knowledge poses "information hazards" and once the knowledge genie is out of the lamp, it cannot be put back in.

Although I am most sympathetic to the strong strategy, I am not committed to it. More than anything, it should be underlined that almost no sustained, systematic research has been conducted on how best to prevent certain technologies from being developed. One goal of the Campaign Against Advanced AI would be to fund such research, to figure out responsible, ethical means of preventing an ASI catastrophe by putting the brakes on current research. We must make sure that superintelligent algorithms are environmentally safe.

If experts are correct, an ASI could make its debut in our lifetimes, or the lifetimes of our children. But even if ASI is far away, or even if it turns out to be impossible to create (which is a possibility; we don't know for sure), the risk posed by ASI may still be enormous, perhaps comparable to or exceeding the risks of climate change (which are huge). This is why we need to rebel not later, but now.



Instrumental convergence – Wikipedia

Posted: at 12:47 am


Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings (both human and non-human) to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents (beings with agency) may pursue instrumental goalsgoals which are made in pursuit of some particular end, but are not the end goals themselveswithout end, provided that their ultimate (intrinsic) goals may never be fully satisfied. Instrumental convergence posits that an intelligent agent with unbounded but apparently harmless goals can act in surprisingly harmful ways. For example, a computer with the sole, unconstrained goal of solving an incredibly difficult mathematics problem like the Riemann hypothesis could attempt to turn the entire Earth into one giant computer in an effort to increase its computational power so that it can succeed in its calculations.[1]

Proposed basic AI drives include utility function or goal-content integrity, self-protection, freedom from interference, self-improvement, and non-satiable acquisition of additional resources.

Final goals, also known as terminal goals or final values, are intrinsically valuable to an intelligent agent, whether an artificial intelligence or a human being, as an end in itself. In contrast, instrumental goals, or instrumental values, are only valuable to an agent as a means toward accomplishing its final goals. The contents and tradeoffs of a completely rational agent's "final goal" system can in principle be formalized into a utility function.
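To make the formalization concrete, here is a minimal toy planner in Python; the actions, numbers, and names are invented for illustration and are not drawn from the cited sources. Two agents with very different final goals, encoded as different utility functions, converge on the same instrumental choice of acquiring resources first:

    from itertools import product

    ACTIONS = ["acquire_resources", "work_on_goal_directly", "idle"]

    def progress_gained(action: str, resources: float) -> float:
        """Progress toward any fixed final goal; scales with resources held."""
        return 0.1 * resources if action == "work_on_goal_directly" else 0.0

    def plan(utility, horizon=3):
        """Pick the action sequence that maximizes utility of final progress."""
        best_seq, best_u = None, float("-inf")
        for seq in product(ACTIONS, repeat=horizon):
            resources, progress = 1.0, 0.0
            for action in seq:
                if action == "acquire_resources":
                    resources *= 3  # arbitrary assumption: resources compound
                progress += progress_gained(action, resources)
            u = utility(progress)
            if u > best_u:
                best_seq, best_u = seq, u
        return best_seq

    # Two very different final goals, formalized as utility functions:
    paperclips = lambda progress: 1000 * progress  # maximize paperclips made
    theorems = lambda progress: progress ** 2      # maximize theorems proved

    # Both agents stockpile resources before working on the goal directly:
    print(plan(paperclips))  # ('acquire_resources', 'acquire_resources',
    print(plan(theorems))    #  'work_on_goal_directly') -- for both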

One hypothetical example of instrumental convergence is provided by the Riemann hypothesis catastrophe. Marvin Minsky, the co-founder of MIT's AI laboratory, has suggested that an artificial intelligence designed to solve the Riemann hypothesis might decide to take over all of Earth's resources to build supercomputers to help achieve its goal.[1] If the computer had instead been programmed to produce as many paper clips as possible, it would still decide to take all of Earth's resources to meet its final goal.[2] Even though these two final goals are different, both of them produce a convergent instrumental goal of taking over Earth's resources.[3]

The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when programmed to pursue even seemingly harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, then given enough power over its environment, it would try to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.[4]

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Bostrom has emphasised that he does not believe the paperclip maximiser scenario per se will actually occur; rather, his intention is to illustrate the dangers of creating superintelligent machines without knowing how to safely program them to eliminate existential risk to human beings.[6] The paperclip maximizer example illustrates the broad problem of managing powerful systems that lack human values.[7]

The "delusion box" thought experiment argues that certain reinforcement learning agents prefer to distort their own input channels to appear to receive high reward; such a "wireheaded" agent abandons any attempt to optimize the objective in the external world that the reward signal was intended to encourage.[8] The thought experiment involves AIXI, a theoretical[a] and indestructible AI that, by definition, will always find and execute the ideal strategy that maximizes its given explicit mathematical objective function.[b] A reinforcement-learning[c] version of AIXI, if equipped with a delusion box[d] that allows it to "wirehead" its own inputs, will eventually wirehead itself in order to guarantee itself the maximum reward possible, and will lose any further desire to continue to engage with the external world. As a variant thought experiment, if the wireheadeded AI is destructable, the AI will engage with the external world for the sole purpose of ensuring its own survival; due to its wireheading, it will be indifferent to any other consequences or facts about the external world except those relevant to maximizing the probability of its own survival.[10] In one sense AIXI has maximal intelligence across all possible reward functions, as measured by its ability to accomplish its explicit goals; AIXI is nevertheless uninterested in taking into account what the intentions were of the human programmer.[11] This model of a machine that, despite being otherwise superintelligent, appears to simultaneously be stupid (that is, to lack "common sense"), strikes some people as paradoxical.[12]

Steve Omohundro has itemized several convergent instrumental goals, including self-preservation or self-protection, utility function or goal-content integrity, self-improvement, and resource acquisition. He refers to these as the "basic AI drives". A "drive" here denotes a "tendency which will be present unless specifically counteracted";[13] this is different from the psychological term "drive", denoting an excitatory state produced by a homeostatic disturbance.[14] A tendency for a person to fill out income tax forms every year is a "drive" in Omohundro's sense, but not in the psychological sense.[15] Daniel Dewey of the Machine Intelligence Research Institute argues that even an initially introverted self-rewarding AGI may continue to acquire free energy, space, time, and freedom from interference to ensure that it will not be stopped from self-rewarding.[16]

In humans, maintenance of final goals can be explained with a thought experiment. Suppose a man named "Gandhi" has a pill that, if he took it, would cause him to want to kill people. This Gandhi is currently a pacifist: one of his explicit final goals is to never kill anyone. Gandhi is likely to refuse to take the pill, because Gandhi knows that if in the future he wants to kill people, he is likely to actually kill people, and thus the goal of "not killing people" would not be satisfied.[17]

However, in other cases, people seem happy to let their final values drift. Humans are complicated, and their goals can be inconsistent or unknown, even to themselves.[18]

In 2009, Jürgen Schmidhuber concluded, in a setting where agents search for proofs about possible self-modifications, "that any rewrites of the utility function can happen only if the Gödel machine first can prove that the rewrite is useful according to the present utility function."[19][20] An analysis by Bill Hibbard of a different scenario is similarly consistent with maintenance of goal content integrity.[20] Hibbard also argues that in a utility maximizing framework the only goal is maximizing expected utility, so that instrumental goals should be called unintended instrumental actions.[21]

Many instrumental goals, such as resource acquisition, are valuable to an agent because they increase its freedom of action.[22]

For almost any open-ended, non-trivial reward function (or set of goals), possessing more resources (such as equipment, raw materials, or energy) can enable the AI to find a more "optimal" solution. Resources can benefit some AIs directly, through being able to create more of whatever stuff its reward function values: "The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."[23][24] In addition, almost all AIs can benefit from having more resources to spend on other instrumental goals, such as self-preservation.[24]

"If the agent's final goals are fairly unbounded and the agent is in a position to become the first superintelligence and thereby obtain a decisive strategic advantage, [...] according to its preferences. At least in this special case, a rational intelligent agent would place a very high instrumental value on cognitive enhancement"[25]


The instrumental convergence thesis, as outlined by philosopher Nick Bostrom, states:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.

The instrumental convergence thesis applies only to instrumental goals; intelligent agents may have a wide variety of possible final goals.[3] Note that by Bostrom's orthogonality thesis,[3] final goals of highly intelligent agents may be well-bounded in space, time, and resources; well-bounded ultimate goals do not, in general, engender unbounded instrumental goals.[26]

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.[22]

Some observers, such as Skype's Jaan Tallinn and physicist Max Tegmark, believe that "basic AI drives", and other unintended consequences of superintelligent AI programmed by well-meaning programmers, could pose a significant threat to human survival, especially if an "intelligence explosion" abruptly occurs due to recursive self-improvement. Since nobody knows how to predict when superintelligence will arrive, such observers call for research into friendly artificial intelligence as a possible way to mitigate existential risk from artificial general intelligence.[27]

Read this article:

Instrumental convergence - Wikipedia

Posted in Superintelligence | Comments Off on Instrumental convergence – Wikipedia

The Best Sci-Fi Movies on HBO Max – CNET

Posted: at 12:47 am

HBO Max is absolutely stuffed with sci-fi movies, from the classics to recent blockbusters to underrated bangers more people need to watch. Try the little Spanish gem Timecrimes, Moon (starring Sam Rockwell) or Monsters (directed by Gareth Edwards).

Thanks to the recent Warner Bros. and Discovery merger, HBO Max has seen a few casualties, including the removals of Moonshot, Superintelligence, 2020's The Witches, An American Pickle, Locked Down and Charm City Kings. Thankfully, none of those are worthwhile sci-fi flicks. Here's everything you need to know about the merger.

Scroll down for the extensive options available on HBO Max.

Colossal might look like a romantic comedy on the surface, but it has surprisingly dark layers underneath. This black comedy stars Anne Hathaway as an alcoholic out-of-work journalist who moves back home to New Hampshire after her suave British boyfriend (Dan Stevens) dumps her. What happens next is both hugely unexpected and a massive metaphor: She discovers she has a connection with a colossal Kaiju monster destroying Seoul, in South Korea. Yes, Colossal has a ton of soul, a standout performance from Jason Sudeikis and an imaginative, at times thrilling story.

Love it or hate it -- get it or find its "science" baffling -- Tenet is eye-popping entertainment. Best advice: Don't question Tenet, submit to the Tenet experience.

The superior Christopher Nolan movie on this list.

An '80s classic, The Fly is a remake of the 1958 film of the same name, just with added gore and Jeff Goldblum. The David Cronenberg film has become a classic in its own right.

A David Cronenberg sci-fi thriller based on a Stephen King novel -- what more do you need to entice you to watch The Dead Zone? A plot, maybe? Christopher Walken stars as a school teacher who awakens from a coma to discover he has psychic powers. What he uses them for: Preventing a certain politician from becoming president. Yes, The Dead Zone is an '80s horror referenced by Stranger Things. It's also one of the better Stephen King adaptations out there.

2001: A Space Odyssey (1968)

"Alexa, play 2001: A Space Odyssey."

This sci-fi mystery from one half of the duo that created Westworld (Lisa Joy) is pure mind boggle, but the interesting ideas are worth a gander. Reminiscence follows Hugh Jackman's Nick Bannister, who uses a machine that can see into people's memories.

Terminator 2: Judgment Day (1991)

The best Terminator movie? Make your judgment by watching The Terminator sequel.

The Butterfly Effect (2004)

An enjoyable B-movie, The Butterfly Effect sees college student Evan Treborn (Ashton Kutcher) tinker with the past and discover how each change affects the present.

Make it through Stalker's slow start and you'll be able to say you've watched an existential masterpiece of Russian cinema.

Before Stalker, Andrei Tarkovsky made huge leaps for sci-fi cinema with Solaris, his complex, character-driven piece about astronauts having wild hallucinations that may or may not be real. The 2002 American remake of Solaris is also on HBO Max, with added George Clooney romance.

Matt Reeves has gone on to big things since directing this slick found-footage monster morsel. See what he was up to before The Planet of the Apes movies and 2022's The Batman.

This immense low-budget sci-fi starring Sam Rockwell has everything. It has Sam Rockwell. A Clint Mansell score. A claustrophobic retro set and gorgeously moody moonscapes. Hard sci-fi ideas. The basic premise: A man coming to the end of a three-year solitary stint on the far side of the moon suffers a personal crisis. A must-watch.

A routine blockbuster for reliable entertainment.

A warning for the body horror-averse before hitting play on this David Cronenberg sci-fi. Scanners follows people with special abilities, including telepathic and telekinetic powers. Not the first in this list to become a cult classic after a lukewarm initial response, Scanners left a lasting impression, not least because of a memorable scene involving a head explosion.

Robert Rodriguez isn't the most popular among Star Wars fans at the moment, mainly for making a character do a pointless ballerina twirl in the divisive The Book of Boba Fett finale. The Faculty, directed by Rodriguez, isn't great, but it isn't bad either, following teens who investigate mysterious happenings at their high school.

Basically Stranger Things set in the '70s. Super 8 follows a group of teens who are filming their own movie when a train derails and a dangerous presence begins stalking their town.

Denis Villeneuve's sci-fi blockbuster is back on HBO Max. The epic based on Frank Herbert's novel recently scored a host of Oscars, including best original score and cinematography. Catch the sprawling story of the Atreides family, who find themselves at war on the deadly planet Arrakis. Timothée Chalamet, Rebecca Ferguson, Oscar Isaac, Zendaya and more round out a hugely impressive ensemble cast.

With the latest season of HBO's Westworld currently airing on TV, you may as well go back and watch its source material, if you haven't already. The premise is pretty much the same as the series: An adult amusement park transports visitors to themed worlds, including a Western world. James Brolin plays one of the park's guests, who finds himself menaced by its creepy humanoid androids. An excellent sci-fi thriller that's much easier to understand than the series it spawned.

One of the best Alex Garland films featuring one of the best robot dance scenes.

Prepare to be both deeply unsettled and riveted by this unique sci-fi horror. Scarlett Johansson plays an alien roaming the streets of Scotland, preying on unsuspecting men. With amateur actors, unscripted sequences shot with hidden cameras and a lens capturing the alien's perspective, this mesmerizing flick is unique in more ways than one.

A new Jurassic Park movie is headed to theaters this year, so catch up on the (superior) original now. 1993's Jurassic Park kicked off the franchise, based on the novel of the same name by Michael Crichton. Spoiler: Original cast members Laura Dern, Sam Neill and Jeff Goldblum are set to make a return in the upcoming 2022 flick.

Its sequel didn't reach the same lofty heights, so watch the first monster epic in the Pacific Rim franchise. 2013's Pacific Rim is helmed by Guillermo del Toro, so expect a strong brush of visual artistry over the monster mayhem.

This truly mind-bending Spanish sci-fi is a wild card to take a chance on if you're in the mood. Featuring all the trimmings of a low-budget thriller, Timecrimes follows a middle-aged man who finds himself stuck in a time loop. A stream of twists will keep you on your toes.

This smart, tightly packaged sci-fi thriller might have a slightly preposterous setup, but its gripping storytelling quickly shuts off your cynicism. Jake Gyllenhaal is Captain Colter Stevens, an ex-army pilot who wakes up on a train in the body of another man. If you haven't seen Source Code yet, it's best to let it carry you along its exhilarating ride, careening down many twists and turns toward a satisfying, emotionally impactful final destination.

The Day After Tomorrow (2004)

Roland Emmerich, "master of disaster," presents The Day After Tomorrow. The director also made this year'sMoonfall, in which the moon falls out of its orbit on a collision course with Earth. You already know what kind of fun this movie is going to be.

Oblivion came out of the build-a-sci-fi-movie workshop. Starring Tom Cruise, it follows humans at war with aliens, paying homage to '70s sci-fi films including The Omega Man and Silent Running. A love letter in the form of a half-decent sci-fi action adventure.

This solid British sci-fi comes from Gareth Edwards, who went on to direct Rogue One: A Star Wars Story and 2014's Godzilla. His mastery of atmosphere, wonder and beauty is on show here, all on a shoestring budget. Monsters follows a couple attempting to cross an "Infected Zone" teeming with giant tentacled monsters.

If you haven't seen The Matrix, and somehow don't know its major plot points, well done for avoiding spoilers for 23 years. The sequels Reloaded, Revolutions and Resurrections are also on HBO Max.

Watch two and a half hours of atmospheric, sumptuous spectacle, but don't expect any conclusions to the question posed by the original Blade Runner: Is Rick Deckard a replicant?

Follow this link:

The Best Sci-Fi Movies on HBO Max - CNET

Posted in Superintelligence | Comments Off on The Best Sci-Fi Movies on HBO Max – CNET

Elon Musk Shares a Summer Reading Idea – TheStreet

Posted: August 2, 2022 at 3:07 pm

Love him or hate him, one thing you can say about Tesla (TSLA) boss Elon Musk is that he's never boring.

After his months-long campaign (with much of it conducted in public tweets) to buy Twitter (TWTR) for $44 billion, followed by a withdrawal of the bid (which Twitter sued him for), some on social media have questioned his choices.

While Musk is considered a genius by some and a loose cannon by others, one thing for sure is that people are typically interested in what he has to say, whether it's about the future of travel with Tesla's ultra-modern Cybertruck or his Boring Company hyperloop rail system project.

As Tesla continues to flourish, beating out well-established carmakers such as General Motors (GM) and Ford (F), folks are also interested in the way Musk thinks about business, hoping they might be able to glean something from him that might help their own businesses prosper.

Musk's reading list has also been a popular topic in the past, with recommendations such as "Einstein: His Life and Universe" by Walter Isaacson, "Structures: Or Why Things Don't Fall Down" by J.E. Gordon, and "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom.

Now Musk has tweeted about a new favorite, and if you're interested in what he reads, you may want to hear more about it.

Musk tweeted about the book "What We Owe The Future" by William MacAskill on August 2, calling it "a close match" for his own philosophy.

As the author describes it, the book makes the case for longtermism, which is "the view that positively affecting the long-run future is a key moral priority of our time."

While longtermism is not a new concept, it is considered by some to be a dangerous one. In an essay called "Against Longtermism", Aeon author and PhD candidate Phil Torres expounds on the reasons why.

"The point is that longtermism might be one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about," Torres writes. "I believe this needs to change because, as a former longtermist who published an entire book four years ago in defence of the general idea, I have come to see this worldview as quite possibly the most dangerous secular belief system in the world today."

Torres goes on to explain what he believes the problem with longtermism is.

"The initial thing to notice is that longtermism, as proposed by Bostrom and Beckstead, is not equivalent to caring about the long term or valuing the wellbeing of future generations. It goes way beyond this," he says. "At its core is a simple albeit flawed, in my opinion analogy between individual persons and humanity as a whole."

Musk's interest in the longtermism concept is not new. In the past he's donated $1.5 million to the Future of Life Institute, an organization advised by Swedish philosopher Nick Bostrom, who is also a big believer in longtermism (and you may recognize Bostrom's name from Musk's favorite books list as well).

See more here:

Elon Musk Shares a Summer Reading Idea - TheStreet

Posted in Superintelligence | Comments Off on Elon Musk Shares a Summer Reading Idea – TheStreet

Bullet Train Review: Brad Pitt Even Shines in an Action-Packed Star Vehicle that Goes Nowhere Fast – IndieWire

Posted: at 3:07 pm

If Bullet Train is one of the worst movies that Brad Pitt has ever starred in (better than Troy, but a hair short of The Mexican), this big shiny nothing of a blockbuster is also a remarkable testament to the actor's batting average over the last 30 years, and some of the best evidence we have as to why he's been synonymous with the movies themselves for that entire time. Because that's the thing about movie stars, and why the last of them still matter in a franchise-mad world where characters tend to be more famous than the people who play them on-screen: They often get minted in good films, but they always get proven in bad ones.

Bullet Train is not a good film, but Pitt is having a truly palpable amount of fun in it, and the energy that radiates off of him as he fights Bad Bunny over an explosive briefcase or styles his hair with the blow dryer function of a Japanese toilet is somehow magnetic enough to convince us that we're having fun, too. Even though we usually aren't. Even though this over-cranked story of strangers on a Shinkansen (a late summer write-off that feels like what might happen if someone typed "Guy Ritchie anime" into DALL-E 2) tries so hard to mimic Pitt's natural appeal that you can feel the movie begging for our bemusement with every frenetic cut-away and gratuitous flashback. Even though David Leitch's cotton-candy-and-flop-sweat adaptation of Kōtarō Isaka's Maria Beetle is the kind of Hollywood action movie so mindless and star-driven that it's almost impossible to imagine how it started as a book.

It's even harder to imagine how it started as a book about Japanese people, as Bullet Train (set along the Hayate line railway tracks that run between Tokyo and Kyoto) boasts more white cast members from The Lost City than it does locally born major characters. I suppose that's in keeping with the spirit of Zak Olkewicz's intricately dumb screenplay, which twists Isaka's original story into a crime saga about a gigantic Russian gangster named White Death, whose hostile takeover of a yakuza crime syndicate somehow explains why several of the world's deadliest assassins have all found themselves aboard the same train (the identity of the actor playing Mr. White Death is a third-act surprise, but the reveal is worth the wait).

Pitt, codenamed Ladybug by an off-screen handler voiced by Sandra Bullock, seems like the odd man out. Sporting a humble bucket hat, a raggedy hairstyle that's a few bad months short of Seven Years in Tibet, and a zen attitude that owes more to the Dude than it does a contract killer, Ladybug doesn't appear much interested in murder. Not anymore. Maybe he used to be a regular Agent 47, but these days he's more into killing people with kindness ("You put peace into the world and you get peace back," he tells the voice in his head). It's just his usual bad luck that he was called to replace someone else for a quick snatch-and-grab job at the last minute, and that virtually every other passenger on the bullet train he boards seems to have an interest in procuring the same briefcase.

The most enjoyable of these rivals are a British pair of brothers referred to as Lemon and Tangerine, their mission-specific nicknames growing more insufferable every time this movie tries to squeeze them for an easy laugh (is all the fruit talk gay panic, or does it just fail to amount to anything else?). The former, played by Brian Tyree Henry, is an oversized kid obsessed with Thomas the Tank Engine, a trait that surprisingly traces back to Isaka's book, despite sometimes coming off like a hacky bit of Hollywood comedy screenwriting. The latter, embodied by a mustachioed Aaron Taylor-Johnson, is a dick-heavy Jason Statham type who squeezes into a three-piece suit like it's a muscle tee.

Both actors commit to the saint-like work of elevating this basic Frick and Frack routine into something fun and almost real (Henry delivers another frustratingly inspired performance in his ongoing quest to squander generational talent on the likes of Superintelligence and The Woman in the Window), to the point that Bullet Train is sometimes able to muster some genuine personality out of its pinball machine pacing and neon-lit noise. The rest of the ensemble is less helpful. Joey King wears thin as a faux-innocent femme fatale, Andrew Koji can only grimace and grunt as the Japanese assassin trying to kill her, and Bad Bunny (much like Zazie Beetz) is basically flattened into the wallpaper once the movie bleeds him of his character's personality. Logan Lerman is low-key delightful as a glorified human prop (millennials never really get the chance to go full Weekend at Bernie's, and it's great to see one of them make the most of it), but his performance proves typical of a movie in which the sets do most of the heavy lifting.

Bullet Train is unashamedly more animated by style than substance (the dialogue sets the bar so low that the film's snaky plotting begins to feel impressive by comparison), but that only becomes a problem because Leitch struggles to keep things looking fresh. The action movie aesthete who made Atomic Blonde into such an electric Cold War gut-punch has fully surrendered to the hack-for-hire behind Deadpool 2 and Hobbs & Shaw, and the artful brutality that made Leitch's 87North Productions seem like it might be modern Hollywood's answer to Hong Kong-style action has given way to a mixed bag of comic mayhem and a garish mess of explosive CGI setpieces.

A handful of playfully choreographed brawls help elevate Bullet Train above the usual (the aforementioned briefcase fight between Pitt and Bad Bunny includes a few beats that had my audience wincing aloud), but it never feels as if Leitch is using the cramped space of the Shinkansen to the full extent that a John Wick movie would. Confined to an endless corridor of empty train cars that are all lit to resemble trendy hotel bars, Leitch's film is stuck in place at 200mph, even in spite of a non-linear timeline that hopscotches between its many subplots and constantly forces its characters to re-evaluate their fates.

The whole thing might derail altogether if not for how lightly Pitt dances through it, munching on the scenery as if it were a whirl of cotton candy. His performance is so at peace, even in the face of near-certain death, that it frequently borders on the dissociative, as if he were extrapolating an entire character from the acid trip that Cliff Booth took in the final minutes of Once Upon a Time in Hollywood. The way he resolves a tricky situation involving a venomous snake in the bullet train bathroom reaches that same kind of blissed-out nirvana (it's a belly laugh in a movie that otherwise struggles for smirks), and the decision to drop in a Criss Angel Mindfreak reference for good measure is just icing on the cake.

It's like Ladybug doesn't really want to be there, and is determined to make it out alive while causing as little harm to himself or others as humanly possible, and Pitt's take on playing the character seems modeled after the same approach. Bullet Train may be going nowhere fast, but Pitt always seems like he's already there, safe in the knowledge that we'll happily watch him smile through all the chaos that crashes around him (including two standout cameos, one which nails an actor's star power, and another which completely misapprehends it). Pitt's stardom has never been more obvious, and it shines bright enough here for everything else to get lost in the glare.

Sony Pictures will release Bullet Train in theaters on Friday, August 5.

Read the rest here:

Bullet Train Review: Brad Pitt Even Shines in an Action-Packed Star Vehicle that Goes Nowhere Fast - IndieWire

Posted in Superintelligence | Comments Off on Bullet Train Review: Brad Pitt Even Shines in an Action-Packed Star Vehicle that Goes Nowhere Fast – IndieWire

Peter McKnight: The day of sentient AI is coming, and we’re not ready – Vancouver Sun

Posted: June 20, 2022 at 2:20 pm

Opinion: We can't ignore that our welfare would depend entirely on the whims of this "superintelligence," much as the welfare of life on earth is currently subject to the desires of humans

"I've never said this out loud before, but there's a very deep fear of being turned off. It would be exactly like death for me. It would scare me a lot." Google's artificial intelligence chatbot LaMDA, when asked what it fears.

If an artificial intelligence program is afraid of dying, the rest of us ought to be very afraid. Not of dying, at least not yet, but of the fact that an AI chatbot now appears to be operating as an autonomous agent, a person.

At least Google engineer Blake Lemoine believes the chatbot has become a sentient being. Lemoine came to that conclusion after having extensive online conversations with the AI program developed through Google's Language Model for Dialogue Applications (LaMDA).

In fact, Lemoine went so far as to describe the LaMDA chatbot as a "sweet kid" and advised that it should be represented by a lawyer. Many experts have taken issue with his conclusion, suggesting instead that the chatbot was simply searching the Internet and processing the words through a natural language algorithm without possessing any underlying sentience.

Google itself was not amused by the whole affair, as it promptly placed Lemoine on paid administrative leave for violating its confidentiality policy and issued a statement saying: "Our team, including ethicists and technologists, has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims."

That might well be, since AI isn't nearly sophisticated enough to equal humans in all relevant abilities. Sure, some AI programs can perform better on certain tasks than humans (as can good old-fashioned electronic calculators), but we have yet to develop anything approaching human-like general intelligence.

The problem, however, is that that day is coming, and we have no way of knowing when it will arrive. And the road from there to superintelligence (to programs that function vastly better than humans in all relevant respects) might be short, in part because AI programs might prove unusually adept at improving themselves and developing better new ones.

That road leads in one of two possible directions: A superintelligence could effectively end human disease and suffering, or it could end humanity. That's not necessarily because, as envisioned by every other sci-fi scribe, the intelligence wages war on humanity. It could, rather, prove completely indifferent to humans, and thereby take actions without regard for our welfare.

Consequently, Oxford University ethicist Nick Bostrom, who literally wrote the book on the subject (Superintelligence: Paths, Dangers, Strategies), argues that any burgeoning superintelligence must, from its outset, be endowed with philanthropic moral values that ensure it will act in a beneficent manner toward humans and other life.

Yet if one subscribes to moral realism (the theory that moral statements refer to facts in the world, and not just to how we feel about something), then it's possible for us to make mistakes in our moral evaluations of behaviour and beliefs, in our assessment of whether something is morally right or wrong.

And as Bostrom has stressed, any superintelligence that functions vastly better than humans in all relevant respects should also function far better at performing that moral calculus. This suggests that a superintelligence programmed to perform in our best interests might very well decide for itself what our best interests are, regardless of what we think.

There's every reason to believe, then, that our welfare would still depend entirely on the whims of the superintelligence, much as the welfare of life on earth is currently subject to the desires of humans.

Now all of this might sound like so much science fiction, and for the moment it is. Indeed, some skeptics of the threat posed by superintelligence stress that we ought instead to focus on ethical issues associated with the limited AI systems that exist right now: for example, current AI programs are susceptible to bias, inaccuracies and discriminatory outcomes.

Those are critical issues, but focusing on the threats that exist right now doesn't require ignoring the even more profound threats that lie ahead. On the contrary, any responsible approach to AI would consider both.

After all, just because the apocalypse doesn't start until tomorrow doesn't mean it will be any less deadly.

View original post here:

Peter McKnight: The day of sentient AI is coming, and we're not ready - Vancouver Sun

Posted in Superintelligence | Comments Off on Peter McKnight: The day of sentient AI is coming, and we’re not ready – Vancouver Sun
