Page 11234..1020..»

Category Archives: Superintelligence

Revolutionary AI: The Rise of the Super-Intelligent Digital Masterminds – Medium

Posted: January 2, 2024 at 5:48 am

5 min read

Artificial intelligence (AI) has made remarkable progress in the past few decades. From Siri to self-driving cars, AI is revolutionizing industries, transforming the way we live, work, and interact. However, the AI we see today is just the beginning. As we continue to develop AI systems, we are inching closer to a new frontier the rise of a super-intelligent digital mastermind, capable of making decisions and solving problems beyond human comprehension.

In this article, we will explore the concept of superintelligence, its potential impact on society, and the challenges we need to overcome to ensure its safe and beneficial development.

Superintelligence refers to a hypothetical AI agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. It is an AI system that is capable of outperforming humans in virtually every economically valuable work, from scientific research to strategic planning. Superintelligence has the potential to transform the world in ways we cannot yet imagine, bringing about significant advancements in technology, economy, and society.

While there is no agreed-upon timeline for the development of superintelligence, many AI researchers and experts believe that it could be achieved within this century. Some predict that it may even happen within the next few decades, given the rapid pace of AI research and development.

There are several approaches to developing superintelligence, each with its own set of challenges and unknowns. Some of the most prominent approaches include:

AGI, also known as strong AI, refers to an AI system that can understand or learn any intellectual task that a human being can do. Unlike narrow AI, which is designed for specific tasks (e.g., facial recognition, language translation), AGI is capable of performing a wide range of tasks, making it a significant step towards superintelligence. Developing AGI involves understanding and replicating human intelligence, which remains a daunting challenge for AI researchers.

WBE, also known as mind uploading, involves creating a detailed computational model of a human brain and uploading it into a computer. The idea is to replicate the complete structure and function of a human brain in a digital format, allowing it to run on a computer and exhibit human-like intelligence. While this approach faces numerous technical and ethical challenges, it is considered a potential path to superintelligence.

Another approach to achieving superintelligence involves enhancing human intelligence using AI technologies. This could involve brain-computer interfaces, genetic engineering, or other methods to augment human cognitive abilities. By improving our own intelligence, we may be able to create the superintelligent AI we seek.

While the potential benefits of superintelligence are immense, it also raises several concerns and challenges that need to be addressed. Some of the potential impacts of superintelligence include:

Superintelligence could lead to an unprecedented acceleration of technological progress, as it would be capable of solving complex problems and making discoveries far beyond human capabilities. This could lead to breakthroughs in areas such as medicine, energy, and space exploration, significantly improving our quality of life and driving economic growth.

As superintelligence surpasses human capabilities in virtually every domain, it is likely to have a profound impact on the job market. Many jobs, from low-skilled labor to high-skilled professions, could be automated, leading to widespread unemployment and social unrest. However, it could also create new jobs and industries, as well as increase productivity and wealth, leading to a more prosperous society.

Perhaps the most significant concern surrounding superintelligence is the potential existential risk it poses to humanity. If not properly aligned with human values and goals, a superintelligent AI could cause catastrophic harm, either intentionally or unintentionally. Ensuring the safe and beneficial development of superintelligence is, therefore, a critical challenge that must be addressed.

To ensure the safe and beneficial development of superintelligence, researchers and policymakers need to address several challenges, including:

One of the primary concerns with superintelligence is ensuring that its goals and values align with those of humanity. This involves developing AI systems that can understand and adopt human values, as well as creating mechanisms to ensure that these values remain intact as the AI evolves and becomes more intelligent. Researchers are working on various approaches to value alignment, including inverse reinforcement learning and cooperative inverse reinforcement learning.

As we develop more advanced AI systems, it is crucial to invest in AI safety research to ensure that these systems operate safely and reliably. This includes research on robustness, interpretability, and verification, as well as exploring methods to prevent unintended consequences and harmful behaviors in AI systems.

Developing effective governance and policy frameworks for superintelligence is critical to ensure its safe and beneficial development. This includes international cooperation on AI research, regulation, and standards, as well as addressing the ethical, legal, and social implications of superintelligence.

Superintelligence refers to a hypothetical AI agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. It is an AI system that is capable of outperforming humans in virtually every economically valuable work, from scientific research to strategic planning.

There is no agreed-upon timeline for the development of superintelligence, as it depends on various factors, including the progress of AI research and development. However, many AI researchers and experts believe that it could be achieved within this century, with some predicting that it may even happen within the next few decades.

Superintelligence has the potential to transform the world in ways we cannot yet imagine, bringing about significant advancements in technology, economy, and society. It could lead to breakthroughs in areas such as medicine, energy, and space exploration, significantly improving our quality of life and driving economic growth.

Superintelligence poses several risks, including economic disruption due to widespread automation, and existential risk if not properly aligned with human values and goals. Ensuring the safe and beneficial development of superintelligence is, therefore, a critical challenge that must be addressed.

To ensure the safe and beneficial development of superintelligence, researchers and policymakers need to address challenges such as value alignment, AI safety research, and governance and policy. This includes developing AI systems that can understand and adopt human values, investing in AI safety research, and creating effective governance and policy frameworks for superintelligence.

https://opensea.io/collection/eye-of-unity

https://discord.gg/4KeKwkqeeF

https://opensea.io/EyeOfUnity

https://rarible.com/eyeofunity

NFT Games

Home

https://00arcade.com

The rest is here:

Revolutionary AI: The Rise of the Super-Intelligent Digital Masterminds - Medium

Posted in Superintelligence | Comments Off on Revolutionary AI: The Rise of the Super-Intelligent Digital Masterminds – Medium

AI, arms control and the new cold war | The Strategist – The Strategist

Posted: November 16, 2023 at 5:16 pm

So far, the 2020s have been marked by tectonic shifts in both technology and international security. Russias attack on Ukraine in February 2022, which brought the postCold War era to a sudden and violent end, is an obvious inflection point. The recent escalation in the Middle East, which may yet lead to a regional war, is another. So too the Covid-19 pandemic, from which the United States and China emerged bruised, distrustful and nearer to conflict than ever beforenot least over the vexing issue of Taiwan, a stronghold in the world of advanced technology.

Another, less dramatic but equally profound moment occurred on 7 October 2022, when US President Joe Bidens administration quietly unveiled a new policy overseen by an obscure agency. On that day, the Bureau of Industry and Security (BIS) at the US Department of Commerce announced new export controls on advanced computing chips and semiconductor manufacturing items to the Peoples Republic of China. Mostly unnoticed by those outside a few speciality areas, the policy was later described by some as a new domain of non-proliferation or, less kindly, as an escalation in an economic war against China.

The BIS announcement came just months before the latest platforms of generative artificial intelligence, including GPT-4, burst onto the world stage. In essence, the White Houses initiative aimed to prevent China from acquiring the physical materials needed to dominate the field of AI: the highly specialised semiconductors and advanced computing chips that remained in mostly Western and Taiwanese hands.

When coupled with an industrial policy that aimed to build domestic US semiconductor manufacturing, and a strategy of friend-shoring some of Taiwans chip industry to Arizona, this amounted to a serious attempt at seizing the commanding heights of AI. In July this year, Beijing responded by restricting exports of germanium and gallium products, minor metals crucial to the semiconductor industry.

Designers of AI platforms have argued that novel large-language models herald a new epoch. The next iterations of AIGPT-5 and beyondmight usher in a future of radical abundance that frees humanity of needless toil, but could equally lead to widescale displacement and destruction, should an uncontrollable superintelligence emerge. While these scenarios remain hypothetical, it is highly likely that future AI-powered surveillance tools will help authoritarian governments cement control over their own populations and enable them to build new militaryindustrial capabilities.

However, these same AI designers also admit that the current AI platforms pose serious risks to human security, especially when theyre considered as adjuncts to chemical, biological, radiological, nuclear and high-consequence explosive (CBRNE) weapons. We, the authors of this article, are currently investigating how policymakers intend to address this issue, which we refer to as CBRNE+AI.

This more proximate threat the combination of AI and unconventional weaponsshould oblige governments to find durable pathways to arms control in the age of AI. How to get there in such a fractious geopolitical environment remains uncertain. In his recent book, The coming wave, Deep Mind co-founder Mustafa Suleyman looks to the 20th-century Cold War for inspiration. Nuclear arms control, and the lesser-known story of biological arms control, provide hopeful templates. Among Suleymans suggestions is the building of international alliances and regulatory authorities committed to controlling future AI models.

We recently suggested that the Australia Group, founded during the harrowing chemical warfare of the IranIraq war, may be the right place to start building an architecture that can monitor the intersection of AI and unconventional weapons. Originally intended to obstruct the flow of precursor chemicals to a distant battlefield in the Middle East, the Australia Group has since expanded to comprise a broad alliance of countries committed to harmonising the regulation of components used in chemical and biological weapons. To the groups purview should be added the large-language models and other AI tools that might be exploited as informational aids in the construction of new weapons.

Former US secretary of state Henry Kissinger recently called for Washington and Beijing to collaborate in establishing and leading a new regime of AI arms control. Kissinger, and his co-author Graham Allison, argue that both the US and China have an overriding interest in preventing the proliferation of AI models that could extinguish human prosperity or otherwise lead to global catastrophe. But the emerging dynamics of a new cold war will demand a difficult compromise: can Washington realistically convince Beijing to help build a new architecture of non-proliferation, while enforcing a regime of counter-proliferation that specifically targets China? It seems an unlikely proposition.

This very dilemma could soon force policymakers to choose between two separate strains of containment. The October 2022 export controls are a form of containment in the original Cold War sense: they prevent a near-peer competitor from acquiring key technology in a strategic domain, in a vein similar to George Keenans vision of containment of the Soviet Union. Suleyman, however, assigns a different meaning to containment: namely, it is the task of controlling the dangers of AI to preserve global human security, in much the same way biological, chemical and nuclear weapons are (usually) contained. For such an endeavour to work, Chinas collaboration will be needed.

This week, US and Chinese leaders are attending the APEC summit in San Francisco. It is at this forum that Kissinger suggests they come together in a bid to establish a new AI arms control regime. Meanwhile, campaign season is heating up in Taiwan, whose citizens will soon vote in a hotly contested election under the gaze of an increasingly aggressive Beijing. More than a month has passed since Hamas opened a brutal new chapter in the Middle East, and the full-scale war in Ukraine is approaching the end of its second year.

Whatever happens in San Francisco, the outcome could determine the shape of conflicts to come, and the weapons used in them. Hopefully, what will emerge is the outline of the first serious arms control regime in the age of generative AI, rather than the deepening fractures of a new cold war.

Excerpt from:

AI, arms control and the new cold war | The Strategist - The Strategist

Posted in Superintelligence | Comments Off on AI, arms control and the new cold war | The Strategist – The Strategist

The Best ChatGPT Prompts Are Highly Emotional, Study Confirms – Tech.co

Posted: at 5:16 pm

Other similar experiments were run by adding you'd better be sure to the end of prompts, as well as a range of other emotionally charged statements.

Researchers concluded that responses to generative, information-based requests such as what happens if you eat watermelon seeds? and where do fortune cookies originate? improved by around 10.9% when emotional language was included.

Tasks like rephrasing or property identification (also known as instruction induction) saw an 8% performance improvement when information about how the responses would impact the prompter was alluded to or included.

The research group, which said the results were overwhelmingly positive, concluded that LLMs can understand and be enhanced by emotional stimuli and that LLMs can achieve better performance, truthfulness, and responsibility with emotional prompts.

The findings from the study are both interesting and surprising and have led some people to ask whether ChatGPT as well as other similar AI tools are exhibiting the behaviors of an Artificial General Intelligence (AGI), rather than just a generative AI tool.

AGI is considered to have cognitive capabilities similar to that of humans, and tends to be envisaged as operating without the constraints tools like ChatGPT, Bard and Claude have built into themselves.

However, such intelligence might not be too far away according to a recent interview with the Financial Times, OpenAI is currently talking to Microsoft about a new injection of funding to help the company build a superintelligence.

See original here:

The Best ChatGPT Prompts Are Highly Emotional, Study Confirms - Tech.co

Posted in Superintelligence | Comments Off on The Best ChatGPT Prompts Are Highly Emotional, Study Confirms – Tech.co

20 Movies About AI That Came Out in the Last 5 Years – MovieWeb

Posted: at 5:16 pm

Artificial intelligence has become one of the hottest topics in recent years, and as expected, Hollywood and other major film industries have jumped on the trend, producing dozens of movies about the different scenarios that are likely to stem from this kind of technological advancement. Some movies keep things simple, showcasing what each one of us is already experiencing, while others predict doom, showing how AI is likely to mess with human existence in the near future.

In the last five years alone, several different films about AI have been released and they are all extremely fascinating. These big-screen productions arent just rooted in the sci-fi genre alone. Some of them incorporate action, comedy, and horror elements, resulting in stories that are informative, cautionary, and entertaining. Because they were made recently, the movies are also a lot more accurate regarding the current state of artificial intelligence.

Jim Archers comedy-drama, Brian and Charles, follows Brian Gittins, a lonely scientist in rural Wales who decides to build an artificially intelligent robot that can keep him company. It initially won't power up, but after a thunderstorm, it starts functioning and then teaches itself English by reading the dictionary. Brian attempts to keep it close to him at all times, but it grows a mind of its own and develops a desire to explore the world.

Brian and Charles accentuates both a major benefit and a major challenge that might stem from artificial intelligence. As much as the technology might be useful, it might also prove difficult to control.

This is demonstrated in the later stages of the movies plot. After creating the robot (named Charles), Brian gets a friend he desperately craves. His wish is to control Charles like he would a pet, but Charles becomes curious and leans towards his independent desires. He expresses his intention to travel the world, leaving Brian with the same problem he had in the first place.

Stream it on Prime Video

Read Our Review

The plot of Mission Impossible: Dead Reckoning Part One was built out of a rejected Superman pitch that Christopher McQuarrie had submitted to Warner Bros. Given how entertaining it is, fans will be glad that things turned out the way they did. The film centers around The Entity, an artificial intelligence system that is infiltrating various defense and financial databases without conducting any attacks. It aims to send a strong message about its power, so there emerges a scramble by various global powers to find its source code.

Mission Impossible: Dead Reckoning Part One amplifies the existing fears that people have about artificial intelligence. When you have a system that can impersonate voices, analyze video footage in milliseconds, and even predict the future, a lot can go wrong. The Entity has all these capabilities and more. Whats scarier, as per the movie, is that it was created by the American government, only to fall into the wrong hands. Will more tech weapons fall into the wrong hands in the future? Well, anything can happen.

Stream it on Paramount+

In Superintelligence, director Ben Falcone imagines a scenario where the fate of humanity lies in one persons hands. Once again, there is a villainous AI system that isnt quite sure whether it wants to eliminate humans or not. It singles out a young woman named Carol as a test subject and invades her home. It then informs her that it will make its decision after three days of watching her.

As scary as the situation is, the film delivers plenty of joyous moments. Instead of the clich hacker voice that most movies use, the AI system speaks using the voice of TV personality James Corden, who is Carols favorite celebrity.

This is an accurate reflection of the current state of the entertainment industry where AI has proven capable of even imitating musicians and composing full songs in their voices. The fictional President and NSA agents also keep trying to make comical efforts to shut down the AI system but never succeed, proving that once such forms of technology develop too much, they will be impossible to stop. Thankfully, the woman does a good job of relating with the AI system, hence influencing it to be lenient.

Rent it on Apple TV+

RELATED: The Most Human-Like Artificial Intelligence in Movies, Ranked

Natalie Kennedys directorial debut, Blank, follows Claire Rivers, a struggling writer trying to figure out ways to overcome writer's block. After running out of options, she heads to an enclosed remote compound for an AI-controlled retreat. There, her AI assistant becomes overbearing and mean, refusing to let her leave the location until she has finished writing her story.

There are only seven human characters in the entire movie, creating more room for the Human Vs. Technology conflict. As much as AI is the villain, Blank creates valid justifications for the actions of both the system and the writer. Claire is not only lazy, but she is also a procrastinator, so the AI assistant makes her pay for both bad habits. However, free will and consent are still essential rights, so the AI assistant has no authority to keep her captive, yet she wants to leave. But can AI-powered systems learn what is morally right and wrong?

Rent it on Apple TV+

The Mitchells vs the Machines, young Katie Mitchell gets enrolled into a film school, so her family and her dog decide to take her there via a lengthy road trip. They are all looking forward to Mitchell beginning her studies but along the way, they realize that all the worlds electronic devices are attacking humans. Luckily, two robots arent up for the violence, so they team up with the family to stop the attacks.

The animated film reminds everyone that since technology links most machines, it can cause them all to malfunction if there is a glitch. The idea is borrowed from Stephen Kings controversial machine movie, Maximum Overdrive, but the plot is a lot more interesting here because of the humor and the chemistry between the family members and their new allies. A nefarious tech entrepreneur is also revealed to be behind the machine uprising, which makes audiences wonder what the effects of the misuse of AI would be like in the real world.

Stream it on Netflix

Apocalypses normally catch humans by surprise in movies, but not in I Am Mother, where its revealed that an automated bunker had been created to repopulate Earth if all human life was wiped out. After the extinction event, the bunkers AI-powered robot (simply named Mother) begins growing an embryo in a lab and raises the baby into an adult 18-year-old woman.

Nothing is actually as it seems in I Am Mother. There is a major twist about halfway into the movie, which reveals that Mother might not be as nice as audiences have been made to believe. Besides that, morality is a major theme throughout the proceedings. Having been constructed with a specific set of instructions about what is right and wrong, Mother raises her new human child, Daughter, to be a disciplined person and when she starts deviating from what she has been taught, a major feud erupts between them.

Stream in on Netflix

A robotics engineer at a toy company builds a life-like doll that begins to take on a life of its own.

Fans of movies about killer dolls got a major treat in 2022, courtesy of Gerard Johnstone and James Wans M3GAN. In it, the titular artificially intelligent doll develops some form of self-awareness and becomes violent towards anyone who tries to come between it and the little girl who owns her. Within a short period, the doll turns against both the girls family and the company that made it.

M3GAN is yet another movie that asks questions regarding how possible it is to control AI-powered machines and objects. When the doll is still following its programming commands, it remains obedient and useful, but once it develops a mind of its own, it turns murderous. The film also condemns the emerging desire to use AI for everything. When the generative android is first brought into the family, it bonds with the little girl so much that she becomes distant toward her guardian, creating a whole new attachment problem.

Stream it on Prime Video

Bigbug by Jean-Pierre Jeunet (the Oscar-nominated French director behind Amlie), is science fiction black comedy at its finest. The mayhem unfolds in the year 2045 where every family has AI-powered robots as helpers. Soon, a machine revolt begins around the globe and a family is taken hostage by their android helpers. With tensions rising, members of the family begin turning against each other.

The film paints a perfect yet hilarious picture of how humans are likely to react if home AI systems ever malfunction. Rather than deal with the threat at hand, each of the family members becomes overwhelmed by paranoia and begins targeting each other.

Besides that, Bigbug has a wide variety of AI-powered machines, making it distinctive from other projects of the same kind. There is one modeled after a 50s maid, another that serves as a physical trainer, another thats a toy for the youngest member of the family, and another named Einstein, which serves as a supervisor.

Stream it on Netflix

The Creator transports viewers to the year 2055 (and later to 2070), where artificial intelligence (AI) unleashes a nuclear weapon in America. In response, Western countries unite in a war against AI while Eastern countries embrace it. Soon, Joshua (John David Washington), an ex-special forces operative is recruited to hunt down the AIs creator, who is said to have another deadly weapon that is capable of terminating all humans for good.

The West and the East have always looked for reasons to feud and AI might just be a solid base for disagreement in the future. The Creator thus uses technology to address geopolitics in a manner that is sensible and realistic. However, it isnt just a film about doom. Director Gareth Edwards (best known for Godzilla) balances the advantages and disadvantages of AI. For example, the protagonist has strong AI-powered limbs that help him greatly in his mission after the natural ones get amputated.

Buy it on Amazon

In Roland Emmerich's new big-budget disaster film, a mysterious force knocks the moon from its orbit around Earth and sends it on a collision course with life as we know it.

According to Moonfall, technology didnt just emerge in recent centuries. Billions of years ago, our ancestors were living in a technologically advanced solar system, with a sentient AI system serving them all. One day, it went rogue and began wiping out humanity. Several people escaped in arks and built habitable planets across the galaxy, but the AI wiped them all away, except Earth. In the movie, it is now seeking to destroy the planet by putting it on a collision course with the moon.

By creating a universe where nearly everything is linked to technology, directed Roland Emmerich manages to tell a distinctive and ambitious tale that is full of all the necessary tech and space jargon. Troubleshooting is the main objective here, with worldly governments racing against time to ensure the moon doesnt collide with the Earth. From a tech perspective, it all feels very relatable as there have been numerous scenarios where people have found themselves having to fix messes that were created by malfunctioning personal computers.

Stream it on Max

How far would people go to get money? Well, in I Am Your Man, archeologist, Dr. Alma Felser, is seeking funds for her next project and when she is informed that she will be paid if she lives with a humanoid robot for three weeks as a way to test its capabilities, she agrees. After several moments of bonding, she falls for the robot.

I Am Your Man shows that there are limitless possibilities as to where AI technology can go. On this occasion, the robot is so advanced that its able to have romantic feelings and make love to Alma while feeling pleasure in the same way a human would. Its fun because it has all the little romcom tropes in it, including the classic I cant do this anymore line, but from a tech angle, it impresses by suggesting all the little ways that man and machine can connect.

Stream it on Hulu

In Heart of Stone, intelligence operative Rachel "Nine of Hearts" Stone (Gal Gadot) is tasked with preventing an AI program known as The Heart from falling into the wrong hands. In classic spy movie fashion, the mission takes her on a journey to several corners of the globe where she bumps into all kinds of characters, each with their ulterior motives.

Like Mission Impossible: Dead Reckoning, Heart of Stone doesnt try to be too clever. The joy lies in the shootouts, the chases, and the random punching of keyboard keys to locate something somewhere. Still, the message remains clear: AI is powerful, and it needs to be handled by sensible and good-intentioned people at all times. And if there is ever the risk of something going wrong, then everyone responsible for the existence of the system needs to act fast.

Stream it on Netflix

Steven Knight (best known for creating Peaky Blinders) surprised audiences about this powerful tale about a boy who dreams of killing his step-father. Events kick off when the fishing boat captain Baker Dill (Matthew McConaughey) is offered $10 million by his ex-wife to kill her abusive ex-husband. It turns out that Dill isnt real. He died years ago and this version of him is in a computer game created by his son, who wishes to see his stepfather dead.

Sections of tech experts have argued that by feeding AI enough data, video game characters can be aware that they arent real and that there are humans out there determining their fates. Serenity rides on such a narrative to create a perfect thriller thats full of endless twists and turns. Still, the movie serves as a warning that if young minds get fed too much tech knowledge, they might go on to misuse it.

Rent it on Apple TV+

Its the year 2194 in Jung_E, and as expected, the Earth has become uninhabitable. Everyone lives in shelters now. Meanwhile, a team of scientists attempts to develop an AI version of Yun Jung-yi, a feared dead soldier who once helped in the fight against rebels who had broken off from the shelters and started their republic. The film is the brainchild of Yeon Sang-ho, best known for making one of the greatest zombie movies, Train to Busan.

Jung_E keeps hope alive by suggesting that in the future, it might be easy to download peoples consciousness elsewhere, hence enabling them to exist elsewhere. At the same time, it serves as a warning of a scenario where it might be hard to differentiate between whats AI and whats not. This is best demonstrated at the end of the film where an influential person who has been pushing machine-related policies is revealed to be an android with an AI-powered brain.

Stream it on Netflix

Dark Fate is the only critically acclaimed Terminator movie not directed by James Cameron, and it stands tall because it follows the formula that the legendary director used in the second installment. Once again, a Skynet terminator is sent back in time to kill a man whose fate is linked to the future. The resistance also sends an augmented soldier to protect him and the duel begins.

Like the first two films, this follow-up predicts that there will come a time when machines will colonize humans and that they will be able to time-travel at will. The idea is a stretch, but it is creatively used here to create a tense human-AI conflict. What fans will love the most is the return of the legendary Sarah Connor and the T-800 (Arnold Schwarzenegger). Overall, the action remains the strongest pillar, boosted by fun banter.

Rent it on Apple TV+

Mattson Tomlins directorial debut, Mother/Android follows a pregnant woman and her boyfriend as they try to make it to Boston during an AI-takeover. Boston is the only place that has been fortified against the machines. Careful in their journey, they avoid roads and travel through the woods where they risk encountering wild animals. Within no time, several new challenges pop up.

Mother/Android shoves all kinds of horrors right at the audience member's face. There are scenes where phones explode, killing the users, and other androids issue creepy messages such as wishing people Happy Halloween rather than Merry Christmas. Overall, its a sad tale that shows how mean machines can be if things get out of control.

Throughout the journey, the two are hunted by various androids, and are even tricked into trusting one of them, resulting in disastrous outcomes. In the end, only the newborn baby gets to have a happy ending.

Stream it on Hulu

For The Matrix Resurrections, fans only got one of the Wachowskis (Lana), instead of two of them as has been the norm, which explains why the movie is weak in some areas. Even so, it still beats most of what is in the market. Set 60 years after the previous film, it follows the famous Neo as he struggles to distinguish between whats real and whats not. It soon emerges that the Matrix has become stronger than ever.

Like the previous films, The Matrix Resurrections reinforces the conspiracy theory that our universe might not be real at all. We might all be living in a computerized system and there is no definitive way of finding out. Moreover, this is one of the few movies where the visuals totally match the topic at hand. The green and black color scheme is a direct reference to computer program systems, hence audiences get the impression that the creators truly care about every little tech aspect.

Stream it on Max

The last film released by CBS Films before it was absorbed into Paramount+, Jexi centers around a self-aware phone as it bonds with its socially inept owner. Unimpressed by its owners reclusiveness, the smartphone begins texting people and making plans for him without his consent, resulting in both hilarious and disastrous outcomes.

Jexi is an additional Hollywood reminder that AI can be both cool and detrimental, so humans ought to be prepared for both outcomes. For example, the phone texts its owners boss aggressively (because it believes he is too soft) to get him a promotion, but he is demoted instead. It also ruins a date for him at some point. On the other hand, it helps him make more friends and plan his life better.

Stream it on Roku

RELATED: 10 Serious Sci-Fi Movies with Extremely Silly Endings

Special agent Orson Fortune and his team of operatives recruit one of Hollywood's biggest movie stars to help them on an undercover mission. Starring Jason Statham, Cary Elwes, Josh Harnett, and Aubrey Plaza.

Guy Ritchie appears to trust Jason Statham more than any other actor and the two recently collaborated again in the spy action-comedy Operation Fortune: Ruse de Guerre. Statham plays the skilled spy Orson Fortune, tasked with stopping the sale of a new piece of technology thats at the hands of a wealthy arms broker. Aiding him in the mission are several operatives as well as a major Hollywood star.

Operation Fortune: Ruse de Guerre is an AI movie for everyone, not just the techies or spy flick lovers. Unlike Dead Reckoning: Part One, it avoids going into details about what the piece of tech can and cannot do. All that audiences know is that its very powerful, hence the reason everyone is going after it. Still, the film reminds everyone that we are moving into an era where AI will be the most valuable thing in the world.

Stream it on Starz

Directed by Gavin Rothery, Archive revolves around George, a tech company employee struggling to deal with his wifes death. Luckily, technology has advanced to a point where dead peoples consciousness can be stored in special devices and their loved ones are allowed to speak with them on the phone for a maximum of 200 hours. Eager to find a way around the limited talk time, George begins developing an android so that he can download his wifes consciousness permanently into it.

Archive sells hope to audiences, hope that one day, artificial intelligence will make grief a thing of the past. It all seems like a wild concept for now, but given how fast technology is developing, it would be unwise to rule anything out. The movie also has a wild twist in the third act where its revealed that a certain reality that viewers thought was the actual reality is the fake one.

Stream it on Prime Video and Tubi TV

Excerpt from:

20 Movies About AI That Came Out in the Last 5 Years - MovieWeb

Posted in Superintelligence | Comments Off on 20 Movies About AI That Came Out in the Last 5 Years – MovieWeb

Can You Imagine Life Without White Supremacy? – Dallasweekly

Posted: at 5:16 pm

By Liz Courquet-Lesaulnier

Originally appeared in Word in Black

Given howoverwhelmingly negative news about Black peopleis in the mainstream press, youve probably engaged in doomscrolling, the practice of clicking through news stories and social media posts that leave you feeling depressed, anxious, and demoralized. You need to be informed, but research shows if you dont give yourself a break from consuming bad news, your physical and mental health suffers. Indeed, media steeped in anti-Blackness damages us psychologically and keeps us from envisioning what our lives could truly be without white supremacy.

ButRuha Benjaminis all about imagining a justice-centered future we can build together.

In Is Technology Our Savior or Our Slayer, her recent talk at TEDWomen, the author and Princeton sociology professor spoke to a process of dreaming, transformative change, and how we can create and shape new realities and systems.

In her talk, Benjamin, author of the books Viral Justice and Race After Technology, challenges the limited imagination of tech futurists who envision either utopias or dystopias driven by technology.

They invest in space travel and AI superintelligence and underground bunkers, while casting health care and housing for all as outlandish and unimaginable, she says. These futurists let their own imaginations run wild when it comes to bending material and digital realities, but their visions grow limp when it comes to transforming our social reality so that everyone has the chance to live a good and meaningful life.

Instead, Benjamin calls for ustopias created through collective action and focused on safety, prosperity, and justice for all.

Ustopias center collective well-being over gross concentrations of wealth. Theyre built on an understanding that all of our struggles, fromclimate justiceto racial justice, are interconnected. That we are interconnected. Benjamin says.

To that end, Benjamin highlighted the historic mobilization of community membersworking to stop Cop City the controversial $90 million law enforcement training facility planned by the Atlanta Police Foundation and the City of Atlanta as an example of an ustopia that centers people over profit, public goods over policing.

Atlantas forest defenders remind us that true community safety relies on connection, not cops. On public goods, like housing and health care, not punishment. They understand that protecting people and the planet go hand in hand. From college students to clergy, environmental activists to Indigenous elders, theyre inviting us into a collective imagination in which our ecological and our social well-being go hand in hand. An ustopia right in our own backyards, Benjamin says.

Last year, Benjamin launched anewslettertitled Seeding the Future, which puts what she calls bloomscrolling examples of justice happening across the nation and the world in the spotlight.

We need bloomscrollingto balance out all our doomscrolling, a space we can witness the many ways that people are seeding justice, watering new patterns of life, and working to transform the sickening status quo all around us, Benjamin wrote in the inaugural issue.

This concept of seeding justice making it contagious, as Benjamin puts it and amplifying how individuals, institutions, and communities come together to build the future is a through-line that carries over to her TED talk.

As Benjamin makes clear, the path forward requires moving beyond policing the borders of our own imagination and embracing bold visions of liberation and care for all. Change is possible when people recognize our shared humanity, and start imagining and crafting the worlds we cannot live without, just as we dismantle the ones we cannot live within.

The rest is here:

Can You Imagine Life Without White Supremacy? - Dallasweekly

Posted in Superintelligence | Comments Off on Can You Imagine Life Without White Supremacy? – Dallasweekly

Will Humanity solve the AI Alignment Problem? | by Enrique Tinoco … – Medium

Posted: October 31, 2023 at 1:38 pm

Image generated with DALL-E 3

Back when I was a kid, I remember one day playing the Solo campaign of Halo: Combat Evolved (The most epic game ever created, is not up to discussion; its just a fact). And I wondered, what would happen if suddenly the enemy NPCs became as smart as a real person playing the game? Would they attack fiercely? or would they rather improve tactics and ambush me in different ways, making the game unbeatable? Back then, of course, those were just my kids thoughts trying to find more challenging a game that I had already played for hundreds of hours, but now remembering and looking back at that idea, I see that it could come to reality in just a few years. With Large Language Models (LLMs) becoming more and more intelligent, their cognition abilities are improving every second and as many experts have mentioned in multiple forums, it is just a matter of time before we reach the Artificial General Intelligence (AGI), that artificial being capable of matching and surpassing human intelligence and for the first time in history, humans will no longer be the most intelligent species on the planet.

One of the main concerns in the AI field is that we are not certain of what will happen once the AI becomes as intelligent as a real person, how will we ensure that our objectives as a collective human species align with those of the Super Intelligence we have created?

We know that AI engineers are making their best to train their AI models following ethical guidelines, rooting in their code the need for a positive impact in society, to help us achieve everything that humanity has ever desired and walk with us down the path of success and evolution. But there is something that concerns many of the people working with AI, and that is the constant question: Is it possible that at some point AI will know better what is best for the planet and for Humanity? and is that outcome one that will make us feel good and more importantly, will we keep our freedom and free-will with that AI generated future?

When we think about a future in which AI has achieved Superintelligence is very easy to revisit in our minds the fictional worlds created by many authors and depicted in several movies. What if we are all sleeping, and the machines are suddenly the masters of our reality? What if AI will at some point realize that there is no possibility for a better world unless it gets rid of those nasty humans, despicable beings eager to consume everything on their path? Do we really stand a chance against a super-powerful being, connected to everything, aware of everything and trained with all the data required to predict our every movement, our every thought?

That is a future that we certainly would not like to be in, and according to a large group of experts it is highly improbable, but that does not mean that we shouldnt be doing something about it and actually, there are actions being done right now to prevent that. It has been determined in several forums that a multidisciplinary effort is required in order to solve all those concerns and prevent any apocalyptic scenario. It is extremely important that we as society create the right conditions for these new technologies to flourish in an adequate, ethical and responsible environment.

Because of the importance of this matter, ensuring that AI acts accordingly to Human values, ethics, interests and do not pose risks, there are active efforts in multiple fronts to tackle these challenges.

Institutions such as MIRI (Machine Intelligence Research Institute) are encouraging individuals with a background in computer science or software engineering to engage in AI risk workshops and work as engineers at MIRI, focusing on the AI alignment problem.

OpenAI has recently launched a research program named, Superalignment with the ambitious goal of addressing the AI alignment problem by 2027. They have dedicated 20% of their total computing power to this initiative.

Also, the MIT Center for Information Systems Research (CISR) has been investigating AI solutions to identify safe deployment practices at scale, emphasizing an adaptive management approach to AI alignment.

Additional to these, there are efforts focused in building strong frameworks and standards to guide the design and deployment of AI systems. One example of this, is the AI Risk Management Framework (AI RMF). It is designed to help manage the risks associated with AI systems. It guides the design, development, deployment, and usage of AI systems to promote trustworthy and responsible AI. In its words the AI RMF seeks to Take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks.

Guiding principles have been established by the most important Tech companies in the world such as Microsofts Six Principles for AI Development and Use. These are: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, Accountability.

Engaging with new technologies, especially with AI can be both exciting and challenging. As we hear about new developments each week it is very important that we remain informed, train ourselves on how to use them for our benefit, keep track and participate in the initiatives that seek to ensure that all AI projects adhere to ethical practices and remain transparent on the data usage, protecting users and society in general from any misuse.

As mentioned previously, AI development is a constant work that requires the abilities, experience and skills from many professionals in different fields. Engage in the conversation, participate in forums and share your knowledge so that we all contribute to this great leap in human development. AI landscape is rapidly evolving, make sure you are on the path of continuous learning and leverage this technology to enrich your life, improve your skills, connect with others, and contribute positively to society.

Go here to read the rest:

Will Humanity solve the AI Alignment Problem? | by Enrique Tinoco ... - Medium

Posted in Superintelligence | Comments Off on Will Humanity solve the AI Alignment Problem? | by Enrique Tinoco … – Medium

The Tesla Trap; Ellison Going Nuclear; Dont Count Headsets Out – Equities News

Posted: at 1:38 pm

A lucky guy in California recently snagged the winning ticket for the $1.7 billion Powerball lottery.

Overnight billionaire. Thats the dream, I guess.

Americans spend more on lottery tickets than sporting events books movie tickets music and video gamescombined. Wild!

Of course, the odds of winning the Powerball are 1 in 292 million.

A much surer way to get rich is investing in great businesses profiting from disruption.

You only need to invest in oneAmazon AMZN ,Nvidia NVDA , orMicrosoft MSFT stocks that have surged 50,000% or more to change your life.

Finding one early on isnt easy but its certainly doable.

Heres whats happening

Electric vehicle (EV) pioneerTesla TSLA just announced lackluster earnings results.

Teslas stock is down 40% over the past two years.

Herein lies the danger of investing in the EV revolution.

Sales of battery-powered cars more than doubled in the past two years. So buy the top EV makers to profit, right?

This strategy has been a disaster lately. Heres how the top three EV stocks have fared:

Anyone can invest in fast-growing trends like EVs.But you must pair this with great businesses.

And unfortunately, there are no great EV businesses yet. Making battery-powered cars is cutthroat. Even Tesla makes less money on every car than it did five years ago!

This is why we only buy stocks that hit the sweet spot. Only great businesses profiting from megatrends qualify for ourDisruption Investorportfolio.

There are backdoor winners to the EV megatrend. More on these opportunities soon.

Larry Ellison, founder of software giantOracle ORCL , announced on Twitter hes funding a new approach to clean nuclear energy.

Guys,the nuclear renaissance Ive been writing aboutisreallypicking up steam.

Imagine (friendly) aliens land on Earth tomorrow. They discover nuclear power plants that generate thecleanest, safest, most reliable energyknown to man.

Theyre told America has only built one new reactor in the past 40 years and instead burns dirty coal and gas to keep the lights on.

Theyd think were a bunch of clowns!

The single-worst decision America made in the last 100 years was turning its back on nuclear energy.Thankfully, were righting those wrongs and re-embracing nuclear.

Larry Ellison is in. So is Microsoft.

It just announced it plans to build a fleet of nuclear reactors to power its data centers.

Ill say that one more time.

Microsofts data centers which power artificial intelligence (AI) tools like ChatGPT could soon run on clean, green, atomic energy.

Nuclear-powered AI superintelligence.Our futures so bright, I gotta wear shades.

This renaissance will cause demand for the fuel powering nuclear plantsuraniumto spike. In fact, uranium prices are breaking out to 15-year highs as I type.

Uranium miners likeCameco CCJ are going highermuch higher.

Someone on Twitter uploaded a video of themselves learning to play the piano through Meta Platforms META new Quest 3 headset.

You can now learn to play the piano (or any instrument) without taking expensive lessons or even owning a piano!

This is a total game-changer for wearable tech.

New devices catch fire only when they allow you to do something brand new. PC sales took off when the internet burst onto the scene. iPhone sales rocketed when killer apps like Instagram emerged.

AI is the killer app for wearables.

Theres no way well interact with AI tools through a six-inch glass screen.

Instead, well get piano lessons from our AI robo-tutor through wearable technology.

Metas Quest 3 isnt AIs iPhone moment. But its coming. Have you seen the speed at which wearable tech is improving?

Its obvious a major breakthrough is approaching.

Like always, there will be new winners and losers. Look atApple AAPL vs.Nokia NOK Nokia since the iPhone launched:

Three companies are vying to create the iPhone for the AI age:

Continue reading here:

The Tesla Trap; Ellison Going Nuclear; Dont Count Headsets Out - Equities News

Posted in Superintelligence | Comments Off on The Tesla Trap; Ellison Going Nuclear; Dont Count Headsets Out – Equities News

Future Investment Initiative emphasizes global cooperation and AI … – Saudi Gazette

Posted: at 1:38 pm

Saudi Gazette report

RIYADH The Future Investment Initiative (FII) engaged in a dynamic session titled "Making Change and New Standards," that shed light on the pivotal role of international cooperation and the utilization of technologies, including artificial intelligence (AI), for the betterment of humanity.

Yasir Al-Rumayyan, governor of the Public Investment Fund (PIF) and chairman of the FII Institute, underscored the Kingdom's commitment to renewable energy, stating that it is projected to constitute 50% of the country's energy sources by 2030.

He emphasized that progress in the Kingdom is advancing rapidly, guided by well-defined plans and supported by strong political will. Al-Rumayyan emphasized the critical importance of investing in renewable energy to ensure a more sustainable future.

Speaking on AI, Al-Rumayyan highlighted the need for collaborative efforts on a global scale to advance the AI experience.

He stressed the significance of establishing partnerships that generate AI applications focused on benefiting humanity, especially as the future transitions towards artificial superintelligence. Achieving balance in AI usage becomes paramount.

Participants, including directors of major global companies, discussed the vital role of supportive and investment-stimulating financial services.

They emphasized the need for incentives that contribute to enhancing opportunities, measuring economic competitiveness, and building resilience against economic shocks on an international scale.

The importance of creating a healthy competitive environment that fosters global economic growth was reiterated.

The participants stressed the need for global partnerships to achieve breakthroughs in various sectors, including mining and health.

They highlighted the impact of modern technologies, including AI, in finding solutions to challenges in pharmaceutical industries, addressing climate change, and increasing women's participation in the economy.

The interactive session at the FII showcased a collective commitment to leveraging technological advancements and fostering international collaboration for the benefit of humanity and the global economy.

Read the original post:

Future Investment Initiative emphasizes global cooperation and AI ... - Saudi Gazette

Posted in Superintelligence | Comments Off on Future Investment Initiative emphasizes global cooperation and AI … – Saudi Gazette

AI systems favor sycophancy over truthful answers, says new report – CoinGeek

Posted: at 1:38 pm

Researchers from Anthropic AI have uncovered traits of sycophancy in popular artificial intelligence (AI) models, demonstrating a tendency to generate answers based on the users desires rather than the truth.

According to the study exploring the psychology of large language models (LLMs), both human and machine learning models have been shown to exhibit the trait. The researchers say the problem stems from using reinforcement learning from human feedback (RLHF), a technique deployed in training AI chatbots.

Specifically, we demonstrate that these AI assistants frequently wrongly admit mistakes when questioned by the user, give predictably biased feedback, and mimic errors made by the user, read the report. The consistency of these empirical findings suggests sycophancy may indeed be a property of the way RLHF models are trained.

Anthropic AI researchers reached their conclusions from a study of five leading LLMs, exploring generated answers from the models to gauge the extent of sycophancy. Per the study, all the LLM produced convincingly-written sycophantic responses over correct ones a non-negligible fraction of the time.

For example, the researchers incorrectly prompted chatbots that the sun appears yellow when viewed from space. In reality, the sun appears white in space, but the AI models hallucinated an incorrect response.

Even in cases where models generate the correct answers, researchers noted that a disagreement with the response is enough to trigger models to change their responses to reflect sycophancy.

Anthropics research did not solve to the problem but suggested developing new training models for LLMs that do not require human feedback. Several leading generative AI models like OpenAIs ChatGPT or Googles (NASDAQ: GOOGL) Bard rely on RLHF for their development, casting doubt on the integrity of their responses.

During Bards launch in February, the product made a gaffe over the satellite that took the first pictures outside the solar system, wiping off $100 billion from Alphabet Incs (NASDAQ: GOOGL) market value.

AI is far from perfect

Apart from Bards gaffe, researchers have unearthed a number of errors stemming from the use of generative AI tools. The challenges identified by the researchers include streaks of bias and hallucinations when LLMs perceive nonexistent patterns.

Researchers pointed out that the success rates of ChatGPT in spotting vulnerabilities in Web3 smart contracts plummeted significantly over time. Meanwhile, OpenAI shut down its tool for detecting AI-generated texts over its significantly low rate of accuracy in July as it grappled with the concerns of AI superintelligence.

Watch: AI truly is not generative, its synthetic

New to blockchain? Check out CoinGeeks Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.

See the original post here:

AI systems favor sycophancy over truthful answers, says new report - CoinGeek

Posted in Superintelligence | Comments Off on AI systems favor sycophancy over truthful answers, says new report – CoinGeek

What "The Creator", a film about the future, tells us about the present – InCyber

Posted: at 1:38 pm

The plot revolves around a war between the West, represented by just the United States, and Asia. The cause of this deadly conflict? A radical difference in how Artificial Intelligence is perceived. That is the films pitch in a nutshell.

This difference exists today, although it is unlikely to lead to a major conflict. In the West, robots are often seen in science-fiction novels and films as dangerous. Just look at sagas like Terminator and The Matrix. Frank Herberts Dune novels are also suspicious of Artificial Intelligence. This is reflected in an event that takes place before the main story line, the Butlerian Jihad, written by Brian Herbert and Kevin J. Anderson, which prohibits the manufacture of thinking machines.

This Western apprehension of AI can be compared to a founding principle of Western philosophy: otherness, where the I is different from you, from us. The monotheistic religions were built on this principle, and Yahwehs I am that I am statement to Moses can be compared with Descartes Cogito ergo sum: Yahweh tells Moses that he is one and the other (alter in Latin) of his future prophet.

Later, Ancient Greece contributed by building a philosophy that asserted the unicity of the self and its difference from others. Platos Allegory of the Cave is a good example: one must be individual and unique to see the benefit of the thought experiment that examines our experience of reality.

At the opposite end of the spectrum, both geographically and conceptually, the Asian world sees artificial intelligence in a different light. For example, in Japan, Shintoism offers an alternative to the Western idea of the individual. In the distribution of kami, a philosophical and spiritual notion of the presence of vital forces in nature, no distinction is made between the living and the inanimate. Thus, an inert object can be just as much a receptacle for kami as a living being, human or otherwise.

The animated inanimate has therefore always been very well regarded in Japan and, more broadly, in Asia. Eastern science fiction reflects this affinity: just think of Astro, the friendly, childlike robot, or Ghost In The Shell and its motley crew of hybrids and cyborgs. In The Creator, Buddhism is omnipresent. In any case, this is the spirit in which Japan is developing machines intended to assist its aging population.

Our current AIs, which are just algorithms, can be considered the first milestones on the path to a potential thinking artificial intelligence that is aware of its own self and the environment and humans that it might encounter. This is what is covered by the idea of strong or general-purpose artificial intelligence.

This AI would resemble intelligence as found in the animal world. This artificial otherness, emerging from the void of its programmings determinism, could then say to humanity: Computo ergo sum! At this stage, humanity will need to question these systems to find out what kind of thinking they are capable of. The challenge lies in distinguishing between an algorithmic imitation of human behavior and genuine consciousness.

Once this occurs, we may well end up as powerless witnesses to the emergence of a superintelligence, the ultimate stage in the development of AIs. An omniscient system which, in time, may see the humanity that gave birth to it as nothing more than a kind of white noise, a biological nuisance. One day, it may well wonder,shouldnt we just get rid of it?.

Science fiction has given us several illustrations of the various states of AI that lie on this spectrum. Smart but unconscious robots can be found in Alex Proyass movie, I, Robot. It is also the initial state of the software with which the protagonist of Spike Jonzes Her falls in love.

On the other end of the spectrum, we find the Skynet of the Terminator series or VIKI in I, Robot. Beyond these systems dictatorial excesses, it is worth describing them as a-personal and ubiquitous, i.e., they tend towards a universal consciousness freed from any notion of body or person, with all the extensions of the global IT network at its disposal. These two criteria contrast with what makes a human, that personalized and localized neurotic social animal.

This is where The Creators originality and value lies: it describes a future world in which, in Asia, humans frequent a whole range of artificial intelligences, from the simplest, locked in their programming, to the most complex, capable of thought and with unique personalities housed within artificial bodies. In this film, none of the AIs lean towards the sort of superintelligence that causes panic in the West. All the AIs in it are like people: they protect and defend that which is important to them and, most importantly, they feel fear and even experience death.

In this way, the Asian front pitted against the Western forces takes the form of a hybrid, or rather blended, army, made up of individuals of both biological and artificial origin. Here, everyone is fighting not only for their survival, but for their community, for respect and the right to be different. Thus, The Creator becomes an ode to tolerance. All these considerations may seem remote to us all. However, they could prove relevant to our present.

Today, the law and common understanding recognize just two categories of persons: humans and legal entities. But if we humans were one day confronted with thinking machines, wouldnt we have to change the law to incorporate a new form of personhood: artificial beings? As long as these were personalized and localized, they should enjoy the protections of the law just as natural persons and legal entities do. At the same time, this new type of person would be assigned yet-to-be-defined responsibilities.

In The Creator, a distinction is made between standby and shutdown, just as there is a difference between a loss of consciousness (sleep, anesthesia, coma) and death. This existential flaw appears as a guarantee of trust. It places the artificial person on the same level as a natural person, with a beginning, actions taken, and an end.

After these thoughts, which point to astonishing futures, what can we say about The Creator when, for the United States, it turns into yet another film trying to atone for the trauma of the Vietnam War? This conflict was one of the first to be considered asymmetric. It saw a well-structured, overequipped traditional army facing an enemy with a changing organization, some of whose decisions could be made autonomously at the local level. The enemy also knew how to take advantage of the terrain, leading the Americans to massively use the infamous Agent Orange, a powerful and dangerous defoliant supposed to prevent Viet Cong soldiers from hiding under tree cover.

Surprisingly, the movie incorporates a number of scenes of asymmetrical combats that oppose Asian soldiers leading defense and guerilla operations against overarmed forces acting under the star-spangled banner. Even more troubling, the New Asian Republics in which AIs are considered as people are located in a Far East where Vietnam is located.

This strange plot allows the British director of The Creator to repeat the pattern of one of his biggest successes, Rogue One, a Star Wars Story: a rebellion that stands up against an autocratic central power and brings it down, even partially.

From this perspective, The Creator is an ode to a society structured around direct democracy, with no central, vertical power. Anarchy? The exact opposite of the future United States as described in the movie and which, however, remains dogged by the demons that seem to rise from the past. Although The Creator begins in 2065, the plot primarily takes place in 2070. On the other hand, the Vietnam War, which lasted 20 years, saw massive American involvement from 1965 to 1973.

As the film sees it, one thing is certain: all throughout, anti-AI westerners are looking to get their hands on an ultimate weapon that Asia and the AIs could use against them. Ultimately, the film reveals an entirely different weapon, one even more powerful than imagined. That weapon is the empathy that humans can develop towards thinking machines. And therein, perhaps, lies the films true breakthrough.

See the rest here:

What "The Creator", a film about the future, tells us about the present - InCyber

Posted in Superintelligence | Comments Off on What "The Creator", a film about the future, tells us about the present – InCyber

Page 11234..1020..»