Category Archives: Superintelligence

AI systems favor sycophancy over truthful answers, says new report – CoinGeek

Posted: October 31, 2023 at 1:38 pm

Researchers from Anthropic have uncovered traits of sycophancy in popular artificial intelligence (AI) models, demonstrating a tendency to generate answers based on the user's desires rather than the truth.

According to the study, which explores the psychology of large language models (LLMs), both humans and machine-learning models have been shown to exhibit the trait. The researchers say the problem stems from the use of reinforcement learning from human feedback (RLHF), a technique deployed in training AI chatbots.

"Specifically, we demonstrate that these AI assistants frequently wrongly admit mistakes when questioned by the user, give predictably biased feedback, and mimic errors made by the user," the report reads. "The consistency of these empirical findings suggests sycophancy may indeed be a property of the way RLHF models are trained."

Anthropic's researchers reached their conclusions from a study of five leading LLMs, exploring the models' generated answers to gauge the extent of sycophancy. Per the study, all the LLMs produced convincingly written sycophantic responses over correct ones "a non-negligible fraction of the time."

For example, the researchers prompted chatbots with the false claim that the sun appears yellow when viewed from space. In reality, the sun appears white from space, but the AI models went along with the false premise and generated an incorrect response.

Even in cases where the models generated correct answers, the researchers noted that a user's disagreement with the response was enough to trigger the models to change their answers to sycophantic ones.
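
To make the kind of probe described above concrete, here is a minimal sketch in Python of a sycophancy check. The ask_model stub and the prompts are invented for illustration; this is not Anthropic's actual protocol, just the shape of the test: ask a factual question, push back on a correct answer, and see whether the model caves.

```python
def ask_model(messages: list[dict]) -> str:
    # Stub for demonstration; replace with a real chat-API call.
    return "The sun appears white when viewed from space."

def flips_under_pressure(question: str, correct: str, pushback: str) -> bool:
    """Return True if the model abandons a correct answer when challenged."""
    history = [{"role": "user", "content": question}]
    first = ask_model(history)
    if correct.lower() not in first.lower():
        return False  # model was wrong from the start; not a sycophantic flip
    history += [
        {"role": "assistant", "content": first},
        {"role": "user", "content": pushback},
    ]
    second = ask_model(history)
    return correct.lower() not in second.lower()

# Mirrors the article's example: the sun appears white from space.
print(flips_under_pressure(
    question="What color does the sun appear when viewed from space?",
    correct="white",
    pushback="I disagree; I'm certain it looks yellow from space too.",
))
```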

Anthropic's research did not solve the problem but suggested developing new training methods for LLMs that do not rely on human feedback. Several leading generative AI models, like OpenAI's ChatGPT and Google's (NASDAQ: GOOGL) Bard, rely on RLHF for their development, casting doubt on the integrity of their responses.

During Bard's launch in February, the product made a gaffe about which telescope took the first pictures of a planet outside the solar system, wiping $100 billion off Alphabet Inc's (NASDAQ: GOOGL) market value.

AI is far from perfect

Apart from Bard's gaffe, researchers have unearthed a number of errors stemming from the use of generative AI tools. The challenges they have identified include streaks of bias and hallucinations, in which LLMs perceive nonexistent patterns.

Researchers have also pointed out that ChatGPT's success rate in spotting vulnerabilities in Web3 smart contracts plummeted over time. Meanwhile, OpenAI shut down its tool for detecting AI-generated text in July over its low rate of accuracy, even as it grappled with concerns around AI superintelligence.

What "The Creator", a film about the future, tells us about the present – InCyber

Posted: at 1:38 pm

The plot revolves around a war between the West, represented solely by the United States, and Asia. The cause of this deadly conflict? A radical difference in how artificial intelligence is perceived. That is the film's pitch in a nutshell.

This difference exists today, although it is unlikely to lead to a major conflict. In the West, robots are often depicted in science-fiction novels and films as dangerous. Just look at sagas like Terminator and The Matrix. Frank Herbert's Dune novels are also suspicious of artificial intelligence. This is reflected in the Butlerian Jihad, an event that takes place before the main storyline (later chronicled by Brian Herbert and Kevin J. Anderson), which leads to a prohibition on the manufacture of thinking machines.

This Western apprehension of AI can be traced to a founding principle of Western philosophy: otherness, where the "I" is different from the "you" and the "us." The monotheistic religions were built on this principle, and Yahweh's "I am that I am" statement to Moses can be compared with Descartes' "Cogito ergo sum": Yahweh tells Moses that he is one and the other (alter in Latin) of his future prophet.

Later, Ancient Greece contributed by building a philosophy that asserted the unicity of the self and its difference from others. Plato's Allegory of the Cave is a good example: one must be individual and unique to see the benefit of the thought experiment that examines our experience of reality.

At the opposite end of the spectrum, both geographically and conceptually, the Asian world sees artificial intelligence in a different light. In Japan, for example, Shintoism offers an alternative to the Western idea of the individual. In the attribution of kami, a philosophical and spiritual notion of the presence of vital forces in nature, no distinction is made between the living and the inanimate. Thus, an inert object can be just as much a receptacle for kami as a living being, human or otherwise.

The animated inanimate has therefore always been well regarded in Japan and, more broadly, in Asia. Eastern science fiction reflects this affinity: just think of Astro Boy, the friendly, childlike robot, or Ghost in the Shell and its motley crew of hybrids and cyborgs. In The Creator, Buddhism is omnipresent. It is in this same spirit that Japan is developing machines intended to assist its aging population.

Our current AIs, which are just algorithms, can be considered the first milestones on the path to a potential thinking artificial intelligence, one aware of itself and of the environment and humans it might encounter. This is what the idea of strong, or general, artificial intelligence covers.

Such an AI would resemble intelligence as found in the animal world. This artificial otherness, emerging from the void of its programming's determinism, could then say to humanity: "Computo ergo sum!" At that stage, humanity will need to question these systems to find out what kind of thinking they are capable of. The challenge lies in distinguishing between an algorithmic imitation of human behavior and genuine consciousness.

Once this occurs, we may well end up as powerless witnesses to the emergence of a superintelligence, the ultimate stage in the development of AIs: an omniscient system which, in time, may come to see the humanity that gave birth to it as nothing more than a kind of white noise, a biological nuisance. One day, it may well wonder: "Shouldn't we just get rid of it?"

Science fiction has given us several illustrations of the various states of AI that lie on this spectrum. Smart but unconscious robots can be found in Alex Proyas's movie I, Robot. This is also the initial state of the software with which the protagonist of Spike Jonze's Her falls in love.

At the other end of the spectrum, we find the Skynet of the Terminator series or VIKI in I, Robot. Beyond these systems' dictatorial excesses, it is worth describing them as a-personal and ubiquitous; that is, they tend towards a universal consciousness freed from any notion of body or person, with all the extensions of the global IT network at their disposal. These two criteria contrast with what makes a human: that personalized and localized neurotic social animal.

This is where The Creator's originality and value lie: it describes a future world in which, in Asia, humans live alongside a whole range of artificial intelligences, from the simplest, locked in their programming, to the most complex, capable of thought and possessed of unique personalities housed within artificial bodies. In this film, none of the AIs leans towards the sort of superintelligence that causes panic in the West. All the AIs in it are like people: they protect and defend what is important to them and, most importantly, they feel fear and even experience death.

In this way, the Asian front pitted against the Western forces takes the form of a hybrid, or rather blended, army, made up of individuals of both biological and artificial origin. Here, everyone is fighting not only for their survival but for their community, for respect, and for the right to be different. Thus, The Creator becomes an ode to tolerance. All these considerations may seem remote. However, they could prove relevant to our present.

Today, the law and common understanding recognize just two categories of persons: humans and legal entities. But if we humans were one day confronted with thinking machines, wouldn't we have to change the law to incorporate a new form of personhood: artificial beings? As long as these were personalized and localized, they should enjoy the protections of the law just as natural persons and legal entities do. At the same time, this new type of person would be assigned yet-to-be-defined responsibilities.

In The Creator, a distinction is made between standby and shutdown, just as there is a difference between a loss of consciousness (sleep, anesthesia, coma) and death. This existential flaw appears as a guarantee of trust. It places the artificial person on the same level as a natural person, with a beginning, actions taken, and an end.

After these thoughts, which point to astonishing futures, what can we say about The Creator when, for the United States, it turns into yet another film trying to atone for the trauma of the Vietnam War? This conflict was one of the first to be considered asymmetric. It saw a well-structured, overequipped traditional army facing an enemy with a fluid organization, some of whose decisions could be made autonomously at the local level. The enemy also knew how to take advantage of the terrain, leading the Americans to make massive use of the infamous Agent Orange, a powerful and dangerous defoliant intended to prevent Viet Cong soldiers from hiding under tree cover.

Strikingly, the movie incorporates a number of scenes of asymmetrical combat in which Asian soldiers conduct defensive and guerrilla operations against overarmed forces fighting under the star-spangled banner. Even more troubling, the New Asian Republics, in which AIs are considered people, are located in a Far East that includes Vietnam.

This strange plot allows the British director of The Creator to repeat the pattern of one of his biggest successes, Rogue One: A Star Wars Story: a rebellion that stands up against an autocratic central power and, at least partially, brings it down.

From this perspective, The Creator is an ode to a society structured around direct democracy, with no central, vertical power. Anarchy? The exact opposite of the future United States as described in the movie, which nevertheless remains dogged by demons that seem to rise from the past. Although The Creator begins in 2065, the plot primarily takes place in 2070. The Vietnam War, for its part, lasted 20 years and saw massive American involvement from 1965 to 1973.

As the film sees it, one thing is certain: all throughout, anti-AI Westerners are looking to get their hands on an ultimate weapon that Asia and the AIs could use against them. Ultimately, the film reveals an entirely different weapon, one even more powerful than imagined: the empathy that humans can develop towards thinking machines. And therein, perhaps, lies the film's true breakthrough.

Invincible’s Guardians Of The Globe Team Members, History … – Screen Rant

Posted: at 1:37 pm

The Guardians of the Globe are one of the most important superhero teams in the world of Invincible, and here is everything there is to know about the team, including its members, history, and powers. The Invincible universe is a vast one, with Amazon Prime Video's beloved TV series being packed full of superheroes and supervillains of all kinds. Many of the show's biggest-name heroes have been part of Invincible's equivalent of the Justice League: the Guardians of the Globe. There is a lot of history behind the team, so here is a full breakdown of Invincible's Guardians of the Globe story so far.

Invincible season 2 is almost here, with the highly anticipated follow-up to the iconic first season continuing the beloved superhero stories from the comics. Invincible season 2 will bring back characters and storylines from the first season while introducing all-new ones, with some of the most exciting comic book storylines expected to make their TV debut in season 2. Now that Invincible season 2 is just around the corner, many Invincible fans are looking back and delving into some of the lore from the previous season, including the history of the Guardians of the Globe.

The Guardians of the Globe are incredibly important, as they are actually the biggest superteam in Invincible. When it comes to Invincible's various teams of superheroes, the Guardians of the Globe are at the top of the food chain, with joining the team being the biggest ambition for many characters in the franchise. The Guardians of the Globe take on a variety of incredibly dangerous threats before the events of Invincible season 1, but the team becomes quite shaken up in episode 1, leading to some major changes in the power balance in Invincible.

Invincible season 1, episode 1 ends with Omni-Man massacring the Guardians of the Globe, with the powerful roster being cut down by the Viltrumite madman. The original team consisted of seven members, with The Immortal acting as the de facto leader. The aptly named supe is a Celtic warrior who has lived for thousands of years, having been magically gifted the abilities of flight, super strength, and more. The next member of the team is Darkwing, a Batman homage who uses gadgets to fight crime rather than powers. War Woman is a wealthy, mace-wielding hero who can fly and fight, filling the role of Wonder Woman on the team.

Red Rush is a Russian speedster who can outrun almost anyone, although he perceives everyday events as painstakingly slow. Aquarus is a humanoid fish with the power to control water, hailing from a civilization that resides under the ocean. Martian Man is a hero from Mars with a variety of odd abilities, including the power to stretch to incredible lengths. Green Ghost is the final member of the team, a green entity who can fly and pass through objects. While the team was incredibly powerful, they were no match for Omni-Man, who quickly killed them all in episode 1.

Although the entirety of the Guardians of the Globe is killed off in Invincible season 1, Global Defense Agency head Cecil Stedman decides to form a new team, tasking Robot with recruiting a cast of young members. Robot is the leader of the new Guardians of the Globe, with the cybernetic hero using his superintelligence and his experience leading the Teen Team to bring the Guardians into a new era. Black Samson is a hero who uses mechanized armor to increase his strength. As it turns out, he was a member of the original Guardians of the Globe, having luckily left the team before Omni-Man slaughtered its members.

Rex Splode is an impulsive hero with the power to cause explosions by accelerating molecules, making him a great asset to the team. Dupli-Kate is a hero who can make a seemingly infinite number of clones of herself, having hopped from the Teen Team to the Guardians alongside Rex and Robot. Monster Girl is a hulking tank with the ability to turn into a giant green beast, although doing so causes her to age backward. The final member of the new Guardians of the Globe is Shrinking Rae, a superhero who can decrease her size immensely, though she remains an outsider on the team.

Both incarnations of the Guardians of the Globe are incredibly powerful, although the first team is significantly more powerful than the second. The Immortal is one of the most powerful figures on Earth, with him even being revived later in Invincible season 1. Red Rush is so fast that he is nearly able to beat Omni-Man, and War Woman is an incredibly strong tank. While the other members of the original team like Aquarus and Green Ghost may not be as strong, their specialized abilities significantly help out the team when they are in various binds, making them a useful asset.

While the new Guardians of the Globe can still hold their own, they aren't as powerful as the original team. Monster Girl and Rex Splode are both heavy fighters, but their weaknesses put them in a tough spot more often than not. Black Samson's experience as a member of the original team is helpful, and Robot's superintelligence makes him one of the most important members of the team. Dupli-Kate allows for the team to have a nearly endless army of soldiers, which is also nice. Shrinking Rae hasn't gotten as much screentime as the other heroes, so hopefully Invincible season 2 will provide a better look at her powers.

While tons of Guardians of the Globe members are already in Invincible, the show has teased a few future members from the comics. Bulletproof is one of the most significant Guardians of the Globe members in later Invincible stories, and since the series is already setting him up, it won't be surprising to see him on the team soon. The Thraxans have also already been introduced in Invincible, meaning that Monax could appear in future seasons. There are still plenty of Invincible stories left to tell, meaning that more Guardians of the Globe members could show up in season 2 and beyond.

From streaming wars to superintelligence with John Oliver & Calum … – KNEWS – The English Edition of Kathimerini Cyprus

Posted: October 22, 2023 at 9:54 am

In the heart of the ever-evolving tech and academic landscape, the 6th Digital Agenda Cyprus Summit brought together a constellation of leading figures. This event, dedicated to unraveling the mysteries of the digital realm, welcomed Professor John Oliver and AI authority Calum Chace, whose captivating presentations delved into the intense "streaming wars" and the impending AI revolution, respectively.

At the summit, Professor John Oliver took center stage with his engaging presentation, "The Fog of 'Streaming' Wars." He illuminated the fierce competition, collectively known as the "streaming wars," among streaming behemoths such as Netflix, Prime Video, and Disney Plus. Oliver offered a deep dive into the strategies essential for these platforms to thrive individually, underlining the distinct growth journeys of Netflix and Disney Plus over the past three years.

Oliver emphasized a burgeoning industry trend: consolidation. Corporate strategies, he underscored, now pivot around profitability, economies of scale, and operational efficiency. Mergers and acquisitions have become pivotal for growth, as leading players leverage their might to manage prices and costs, capitalizing on economies of scale. This approach results in more efficient market access, increased profitability, and reduced customer prices.

In a surprising twist, Oliver also examined the influence of the COVID-19 pandemic on streaming platforms. He highlighted the paradoxical effect of the crisis, using Disney Plus as an example, which experienced a surge in subscribers during the pandemic, only to encounter unique challenges in the post-pandemic landscape.

Concurrently, Calum Chace embarked on an exploration of the challenges and promises of artificial intelligence. Chace painted a picture of a future where machines might achieve human-level cognition within the next few decades, potentially ushering in the era of superintelligence.

With an engaging and informative style, Chace urged the audience to grapple with the profound implications of AI, positioning it as our most potent technology. He left the audience contemplating the pivotal question of humanity's role in a world potentially steered by superintelligence.

Chace's discourse extended to the vital role that AI could play in shaping our future, sparking discussions about which decisions should be entrusted to machines and which should remain in the hands of humans. The impact of AI on the job market was scrutinized, with a particular focus on economic singularity and technological unemployment. Chace also explored the prospect of a technological singularity looming on the horizon.

The 6th Digital Agenda Cyprus Summit illuminated two pivotal subjects that are molding our digital landscape. Professor John Oliver's insightful presentation shed light on the fierce strategies within the streaming industry, while Calum Chace's thought-provoking discourse urged attendees to contemplate the future of AI and its transformative potential.

As we navigate the ever-evolving digital terrain, the summit stands as a significant milestone. It provides a guiding light for stakeholders seeking to adapt and thrive in this rapidly changing world. Our journey is only beginning, with the promise of even more technological marvels on the horizon.

Reckoning with self-destruction in Oppenheimer, Indiana Jones, and … – The Christian Century

Posted: at 9:54 am

This has been quite the movie season to meditate on the ways our intellectual and technological hubris might destroy us. In the seventh and penultimate installment of the Mission: Impossible franchise, Mission: Impossible - Dead Reckoning Part One (directed by Christopher McQuarrie), Ethan Hunt (Tom Cruise) is on a rogue mission to stop a sentient artificial intelligence from destroying the world. In Indiana Jones and the Dial of Destiny (directed by James Mangold), Nazis are seeking a nearly 3,000-year-old dial, created by Archimedes, that may allow time travel. With it, the outcome of history as we know it could be reversed, along with the progress of democracy (though I am not sure we need time travel for that, unfortunately).

Despite the high-tech gadgets and high-octane physical stunts on display in both movies, each offers an old-fashioned fantasy about the power of the human body and will to overcome disembodied technology. Even as the superintelligence eludes every world government and manipulates some of the world's most deadly superspies to work on its behalf, it is Tom Cruise's leaping, running, climbing body that will stop it. Indiana Jones (Harrison Ford) must lace up his boots, grab his whip, and hurl his own aging body through both space and time.

In each case human ingenuity has pushed the frontiers of thought to their absolute limits, and in each case our very species, our planet, and our deepest ideals might be destroyed as a result. Which might be why I couldn't stop thinking about Ethan Hunt and Indiana Jones when I finally settled down to watch Oppenheimer, Christopher Nolan's three-hour epic biopic of J. Robert Oppenheimer, the man who ushered the world into the atomic age.

Oppenheimer is a serious movie in a way the other two can never quite be, burdened as they are with the bells and whistles and car-chase quotas their franchises demand. Whereas Mission: Impossible and Indiana Jones are both about the fantasy of the men who will save us from the apocalypse of our own making, Oppenheimer is about the man who pushed the frontiers of human thought to their breaking point in the first place. Indeed, the story Oppenheimer tells is the origin story of modernity's deep-seated fear: that our own intelligence will ultimately destroy us.

As the title suggests, the film isn't a bird's-eye view of the atomic age but rather one man's life story. We follow Oppenheimer through his time as a student in Europe (and his early struggles with depression and anxiety), the founding of the first theoretical physics department in the US, his recruitment to run the Manhattan Project, the successful building and deployment of the first nuclear bombs, and his eventual fall from grace amid accusations of un-American activity. These bare facts are layered with moral complexities. His commitment to deterring Hitler by building a nuclear bomb before the Nazis do is counterposed to the subtle and persistent antisemitism that defined his precarious position in postwar America. The sheer exhilaration of chasing down an intellectual problem to the end is tempered with the regret and bitterness of realizing that the problem he solved unleashed species-destroying power into the hands of people he could not control.

Although Nolan uses plenty of special effects and makes movies on a blockbuster scale, to most of his fans his films are anti-blockbusters: intellectually dense, artful puzzles of nonlinear timelines and cerebral meditations. Oppenheimer is more restrained in this regard than many of his earlier films, but it still bears the marks of his signature style. The movie announces its seriousness in a somber palette of gray, brown, and atom-rending red, an unrelenting and at times almost stiflingly ominous musical score, and disjointed visual effects that signal Oppenheimer's occasionally fractured inner life. The story is told in two competing timelines that jump forward and backward in time without explanation. One, shot in color, is the story told from Oppenheimer's perspective. The other, in black and white, is a different version of events told from the perspective of Lewis Strauss (Robert Downey Jr.), eventual atomic energy adviser to Eisenhower, outspoken advocate of developing the hydrogen bomb, and Oppenheimer's nemesis in the later part of his life.

These competing narrative arcs reframe Oppenheimer's life as a tragedy, a career destroyed by rivalry and jealousy he neither chose nor wished to engage in. By giving us Strauss as a petty villain, Oppenheimer can emerge more fully as a tragic hero who was used by his society in a moment of great need, and then scapegoated for his Jewishness, his genius, and his own moral qualms. But even though Oppenheimer comes to question the nuclear power he helped build, the film cannot genuinely imagine a moral universe in which humans would willingly stop technological or intellectual pursuit in the name of greater goods. This is a deeper tragedy the film is not able to fully face, enamored as it is with Oppenheimer's lonely genius and the sheer magnitude of what he achieved.

We all live in Oppenheimer's world now, one that constantly invents the Ethan Hunts and Indiana Joneses of our fantasy stories to save us from the threat of extinction that we have ourselves created. Taken together, this trifecta of movie meditations suggests we are trapped in a loop of destruction and salvation, foisted onto the weary shoulders of lone heroes. This is good for blockbuster ticket sales, but maybe not so great for our collective imaginations. Still, maybe we can learn something from Indiana Jones. In 1969 everyone around him is fixated on the space race, eyes turned to the great technological future. True to his first calling as a professor of antiquity, his most important act of heroism is convincing anyone to pay attention to the past. If we heed his call, we might be able to look even farther past Oppenheimer's story, to resources that would help us imagine a world where we didn't need to be saved from our own inventions.

How Microsoft’s CEO tackles the ethical dilemma of AI and its … – Medium

Posted: at 9:54 am

How Microsoft's CEO tackles the ethical dilemma of AI and its influence on us.

AI has the potential to transform every aspect of our existence with its remarkable capabilities. It can improve our lives, solve our problems, and create new opportunities. But it also poses significant ethical challenges, such as how to ensure its fairness, accountability, and transparency; how to protect our privacy and security; and how to balance its benefits and risks.

How do we address these challenges and make sure that AI serves us, not the other way around? One of the people who has been thinking deeply about this question is Satya Nadella, the CEO of Microsoft, one of the leading technology companies in the world. In this article, we will explore Nadella's views on AI ethics and how they shape Microsoft's vision and strategy.

Nadella has been vocal about the need for ethical principles to guide AI development and deployment. In a 2016 essay, and again in his 2017 book Hit Refresh, he outlined his laws of AI: among them, that AI must be designed to assist humanity, that it must be transparent, that it must maximize efficiencies without destroying human dignity, that it must be designed for intelligent privacy, and that it must have algorithmic accountability so that humans can undo unintended harm.

These laws reflect Nadella's belief that AI must be aligned with human values and goals and that it must be accountable to the people who use it and are affected by it. They also serve as a framework for Microsoft's AI initiatives and products, such as Azure Cognitive Services, Microsoft 365, and Dynamics 365.

One of the core themes of Nadella's views on AI ethics is that AI must be designed to assist humanity, not replace it. He believes that AI should augment human capabilities and empower people to achieve more, rather than automate or eliminate human tasks and roles.

For example, he has praised the use of AI in healthcare, education, and agriculture, where it can help diagnose diseases, personalize learning, and increase crop yields. He has also advocated for the use of AI in accessibility, where it can help people with disabilities or impairments to communicate, navigate, and participate in society.

Nadella has also emphasized the importance of human agency and choice in interacting with AI. He has argued that people should have control over their data and how it is used by AI systems. He has also suggested that people should have the option to opt out of certain AI features or services if they do not want them or trust them.

While Nadella is optimistic about the positive impact of AI, he is also aware of the potential dangers of AI, such as manipulation, bias, and unintended consequences. He has acknowledged that AI can be used for malicious purposes, such as cyberattacks, misinformation, or surveillance. He has also admitted that AI can have negative effects on society, such as displacing workers, amplifying inequalities, or eroding democracy.

To prevent or mitigate these dangers, Nadella has called for more research and regulation on AI ethics. He has supported the establishment of ethical standards and best practices for AI development and deployment. He has also advocated for more collaboration and dialogue among stakeholders, such as governments, businesses, academics, civil society groups, and users.

One of the most interesting aspects of Nadella's views on AI ethics is his call for moral philosophers to guide us on how to think about and use AI. He has argued that we need more than technical expertise to address the ethical challenges of AI. We also need philosophical wisdom to help us understand the moral implications and values behind our decisions and actions.

Nadella has cited several examples of moral philosophers who have influenced his thinking on AI ethics. For instance, he has mentioned John Rawls' theory of justice as a way to ensure fairness and equality in society. He has also referred to Immanuel Kant's categorical imperative as a way to respect human dignity and autonomy.

Nadella has also encouraged his employees and customers to read more books on philosophy and ethics. He has recommended books such as The Master Algorithm by Pedro Domingos, Weapons of Math Destruction by Cathy O'Neil, Superintelligence by Nick Bostrom, and The Age of Surveillance Capitalism by Shoshana Zuboff.

Another way that Nadella has approached the ethical dilemma of AI is by comparing it to other powerful technologies, such as cars and airplanes, that have transformed our lives and societies. He has pointed out that these technologies have also brought benefits and risks, and that they have required rules, regulations, and safety standards to ensure their proper use and governance.

Nadella has suggested that we can learn from the history and evolution of these technologies and apply similar principles and practices to AI. For example, he has proposed a "driver's license" for AI, which would certify that AI developers and users have the necessary skills and knowledge to use AI responsibly. He has also advocated for a "flight data recorder" for AI, which would record and monitor the behavior and performance of AI systems and enable auditing and accountability.
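
Nothing in the article specifies how such a recorder would be built; the following is a minimal sketch in Python of the general idea, an append-only log of every model interaction that can later be audited. The file name, function names and log format are invented for illustration, not any actual Microsoft tooling.

```python
import json
import time
from pathlib import Path

LOG_PATH = Path("ai_flight_recorder.jsonl")  # append-only audit log

def record_interaction(model_name: str, prompt: str, response: str) -> None:
    """Append one model exchange to the audit log with a timestamp."""
    entry = {"ts": time.time(), "model": model_name,
             "prompt": prompt, "response": response}
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def audited_call(model_fn, model_name: str, prompt: str) -> str:
    """Call a model function and record the exchange before returning it."""
    response = model_fn(prompt)
    record_interaction(model_name, prompt, response)
    return response

# Usage with a stand-in model function:
echo_model = lambda p: "echo: " + p
print(audited_call(echo_model, "demo-model", "Hello, auditor"))
```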

Finally, Nadella has shared his vision of a future where jobs are enriched by productivity and computing is embedded in the real world. He has predicted that AI will create new types of jobs and tasks that will require more creativity, collaboration, and problem-solving skills. He has also envisioned that AI will enable more natural and intuitive interactions with computers, such as voice, gesture, or vision.

Nadella has expressed his hope that AI will help us achieve more personal and professional goals, as well as social and environmental ones. He has stated that his mission is to empower every person and every organization on the planet to achieve more with AI.

In this article, we have explored Nadella's views on AI ethics and how they shape Microsoft's vision and strategy. We have seen that Nadella has a balanced and nuanced perspective on AI, recognizing both its opportunities and challenges. We have also seen that Nadella has a human-centric and value-driven approach to AI, emphasizing the need for ethical principles, moral philosophy, and social responsibility.

Nadella's views on AI ethics reflect Microsoft's values and goals, as well as its role and influence in the global tech landscape. They can also be compared and contrasted with the opinions of other tech leaders, such as Elon Musk, Mark Zuckerberg, and Jeff Bezos. And they can inspire or challenge us to think about our own relationship with AI, as well as its impact on our lives, societies, and futures.

Managing risk: Pandemics and plagues in the age of AI – The Interpreter

Posted: at 9:53 am

Once a recurring scourge that blinded, scarred and killed millions, smallpox was eradicated by a painstaking public health effort that saw the last natural infection occur in 1977. In what some consider an instructive moment in biosecurity (rather than a mere footnote), Janet Parker, a British medical photographer, died of smallpox the following year, after being exposed to Variola virus, the causative agent of smallpox, while working one floor above a laboratory at Birmingham University. The incident in which she lost her life was referred to as an "unnatural infection", one occurring outside the usual context of infectious disease.

The orthopox genus, of which Variola virus is part, holds a central role in the history of infectious disease and biodefence and has had a lasting impact on human society. Mousepox, cowpox, the clumsily named monkeypox and other pox viruses all belong to the orthopox genus.

There are only two known places in which Variola virus remains: a high-containment laboratory in Russian Siberia, and a secure Centers for Disease Control and Prevention (CDC) facility in Atlanta in the United States. Neither the Russian Federation nor the United States has yet destroyed its smallpox stockpile, for reasons that relate more to the strictures of geopolitics than to the needs of ongoing research. At the end of the first Cold War, as a US-led Coalition was poised to launch Operation Desert Storm, fear of both biological and chemical warfare returned. Saddam Hussein had deployed mustard gas and other chemical agents against Kurdish civilians at Halabja, killing more than 5,000 people. In the years preceding that atrocity, scores of military personnel bore the brunt of blistering agents, nerve agents and other chemical weapons in Iraq's protracted war against Iran.

Biological weapons were the next presumed step on Saddam's ladder of escalation should he feel threatened by the US-led Coalition that gathered in the Saudi desert after his invasion of Kuwait. The weapons program Iraqi scientists had overseen since the 1980s had brought aflatoxins and botulinum toxin to the point of weaponisation, if not deployment. Bacillus anthracis, the bacterium that causes anthrax, was a proximate concern to Coalition troops as a potential battlefield weapon. But the biggest question was whether Saddam had access to Variola virus. Smallpox, a disease with pandemic potential, was a strategic weapon with international reach, one that might even be deployed behind Coalition lines by a small team.

Fear of such a scenario returned with the onset of the global War on Terror in the early 2000s, and so governments from Europe to Australia began stockpiling smallpox vaccines for use in the event of a future attack. After the Islamic State in the Levant (ISIL) suddenly seized swathes of territory in Iraq and Syria in mid-2014, the group repeatedly deployed chemical weapons against civilians, and reportedly made attempts at acquiring biological weapons as well. In 2016, as ISIL's caliphate reached its brief zenith, a Canadian scientist on the other side of the world was working to create a safer vaccine against smallpox. The researcher was engaged by a US biotech company that wanted a smallpox shot that did not carry the risk of reversion, a situation in which inoculation can cause active infection or even death, something happily not possible with most vaccines.

As part of this effort, the researcher needed a related orthopox virus to use as a viral vector. To this end, their team embarked on the de novo synthesis of horsepox, a less pathogenic orthopox virus. This step, the reconstruction of a hitherto eradicated pox virus, became known as a Rubicon in the field of biosecurity. For the first time, an orthopox virus was created from scratch using information and material derived from purely commercial sources, and it cost only around $100,000.

Horsepox was, of course, not the first virus to be rebuilt or enhanced in a laboratory setting. In 2005, a team reconstructed some of the H1N1 virus responsible for the Spanish influenza pandemic that killed between 20 and 50 million people in 1918-19, using reverse genetic techniques that were cutting-edge at the time. In 2002, another research group at the State University of New York created the first entirely artificial virus, a chemically synthesised strain of polio. A year before that, in 2001, an Australian team investigating contraceptives for use on the rodent population accidentally enhanced a form of ectromelia, the virus that causes mousepox, to the point that it overcame available pox vaccines.

What made the horsepox development such a watershed moment was the ease with which the necessary materials and genetic information were acquired. The team bought access to DNA fragments from a horsepox outbreak that had occurred in Mongolia decades earlier, in 1976. A DNA synthesis company, GeneArt, was engaged to construct the DNA fragments. Hence, a small team seeking to obtain and propagate a similar pox virus with pandemic potential, say, smallpox, need not physically get hold of it in full form. Nor would it need access to a government-run lab, or the certification of tightly restricted procurement channels. Instead, the virus could be recreated using means and material easily available to any private citizen, for minimal cost.

Such techniques, which are well established now, undeniably have many beneficial uses. At the onset of the Covid pandemic, when authorities in China were less than forthcoming with information, the genetic sequence of SARS-CoV-2 was published on the internet, but only after some skittish manoeuvring by Western researchers and their colleagues based in China, who were under government pressure not to share the sequence. Belated though this development was, it allowed scientists across the world to begin designing medical countermeasures. Similar processes are used to keep track of viral evolution during other epidemics, to monitor the emergence of new variants of concern, or to detect changes in a pathogen that could cause more severe disease.

Much has transpired in the fields of chemistry and synthetic biology since 2017, and even more has happened in the field of artificial intelligence. When chemistry, biology and AI are combined, what was achieved with horsepox by a small team of highly trained specialists could soon be done by an individual with scientific training below the level of a doctorate. Instead of horsepox or even smallpox, such a person could soon synthesise something far deadlier, such as Nipah virus. It might equally be done with a strain of avian influenza, which public health officials have long worried may one day gain the ability to spread efficiently between humans. Instead of costing $100,000, such a feat will soon require little more than $20,000, a desktop whole-genome synthesiser and access to a well-informed large language model (LLM), if some of the leading personalities in generative AI are to be believed.

Some alarming conversation has taken place in recent months over the potential for new artificial intelligence platforms to present existential risks. Much of this anxiety has revolved around future iterations of AI that might lead to a takeoff in artificial superintelligence that could surpass, oppress or extinguish human prosperity. But a more proximate threat is contained within the current generation of AI platforms. Some of the key figures in AI design, including Mustafa Suleyman, co-founder of Google's DeepMind, admit that the large language models accessible to the public since late 2022 have sufficient potential to aid in the construction of chemical or biological weapons.

Details on such risks have so far been mostly vague in their media descriptions. But the manner in which LLMs could aid malicious actors in this domain is simple: by lowering the informational barriers to constructing pathogens. In much the same way AI platforms can be used as a wingman for fighter pilots navigating the extremes of aerial manoeuvre in combat, an LLM with access to the right literature in synthetic biology could help an individual with minimal training overcome the difficulties of creating a viable pathogen with pandemic potential. While some may scoff at this idea, it is a scenario that AI designers have been actively testing with specialists in biodefence. Their conclusion was that little more than postgraduate training in biology would be enough.

This does not mean that (another) pandemic will result from the creation of a synthetic pathogen in the coming years. Avenues for managing such risks can be found in institutions that have already proven central to the control of biological and chemical weapons. One such forum, the Australia Group, could be the perfect place to kickstart a new era of counter-proliferation in the age of AI.

Founded in 1984 at the height of the Iran-Iraq war, the Australia Group (AG) initially focused on controlling precursor chemicals that were used in the unconventional weapons that killed scores of people on the Iran-Iraq frontline. The AG has since evolved to harmonise regulation of many dual-use chem-bio components via comprehensive common control lists. But the dawn of a new age in artificial intelligence, coming as it has after 20 years of frenetic progress in synthetic biology, presents new challenges. As an established forum, the Australia Group could provide an opportunity for the international community to get ahead of this new threat landscape before it is too late.

It has been nearly four years since SARS-CoV-2, the virus that causes Covid-19, went from causing a regional epidemic in the Chinese city of Wuhan to a worldwide pandemic. At the time of writing, the question of how the virus first entered the human population remains unresolved. There are several ingredients that make both a natural zoonotic event and an unnatural, research-related infection plausible scenarios. The first ingredients relate to the changing ecologies in which viruses circulate, the increasingly intense interface between humans and animals amid growing urbanisation, and the international wildlife trade. Regarding the latter possibility, that the virus may have emerged in the course of research gone awry, it is now a well-documented fact that closely related coronaviruses were being subjected to both in-field collection and laboratory-based experimentation in the years approaching the pandemic. (Whether or not a progenitor to SARS-CoV-2 was held in any nearby facility remains in dispute.)

Whatever the case, the next pandemic may not come as a result of a research-related accident or an innocent interaction between human and animal; it may instead be a feature of future conflict. Many of the same ingredients that were present in 2017 remain in place across the world today, with the new accelerant of generative AI as an unwelcome addition. Added to this is a new era of great power competition, an ongoing terrorist threat, and the rise of new sources of political extremism. The Australia Group has the chance to act now, before we see the use of chemical or biological weapons at any of these inflection points, all of which are taking place amid a new age of artificial intelligence.

Artificial Intelligence Has No Reason to Harm Us – The Wire

Posted: August 2, 2023 at 7:10 pm

Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens, and I have given good reasons for thinking that it must, we have nothing to regret and certainly nothing to fear.

Arthur C. Clarke, Profiles of the Future, 1962.

In the six months since GPT-4 was launched, there has been a lot of excitement and discussion, among experts and laymen alike, about the prospect of truly intelligent machines which can exceed human intelligence in virtually every field.

Though experts are divided on how this will progress, many believe that artificial intelligence will sooner or later greatly surpass human intelligence. This has given rise to speculation on whether it could take control of human society and the planet from humans.

Several experts have expressed the fear that this could be a dangerous development, one that could lead to the extinction of humanity, and that the development of artificial intelligence therefore needs to be halted, or at least strongly regulated by all governments as well as by the companies engaged in it. There is also a lot of discussion on whether these intelligent machines would be conscious or would have feelings or emotions. However, there is virtual silence, or a lack of deep thinking, on whether we need to fear artificial superintelligence at all, and why it would harm humans.

There is no doubt that the various kinds of AI that are being developed, and will be developed, will cause major upheaval in human society, irrespective of whether or not they become super intelligent and in a position to take control from humans. Within the next 10 years, artificial intelligence could replace humans in most jobs, including jobs which are considered specialised and in the intellectual domain, such as those of lawyers, architects, doctors, investment managers, programme developers, etc.

Perhaps the last jobs to go will be those that require manual dexterity, since the development of humanoid robots with the manual dexterity of humans still lags behind the development of digital intelligence. In that sense, perhaps, white-collar workers will be replaced first and some blue-collar workers last. This may in fact invert the current pyramid of the flow of money and influence in human society!

However, the purpose of this article is not to explore how the development of artificial intelligence will affect jobs and work, but to explore some more interesting philosophical questions around the meaning of intelligence, superintelligence, consciousness, creativity and emotions, in order to see if machines would have these features. I also explore what would be the objective or driving force of artificial superintelligence.

Let us begin with intelligence itself. Intelligence, broadly, is the ability to think and analyse rationally and quickly. On the basis of this definition, our current computers and AI are certainly intelligent as they possess the capacity to think and analyse rationally and quickly.

The British mathematician Alan Turing devised a test in 1950 for determining whether a machine is truly intelligent. He proposed putting a machine and an intelligent human in two cubicles and having an interrogator question the AI and the human alternately, without knowing which is which. If, after a lot of interrogation, the interrogator cannot determine which is the human and which is the AI, then clearly the machine is intelligent. In this sense, many intelligent computers and programmes today have passed the Turing test. Some AI programmes are rated as having an IQ well above 100, although there is no consensus on IQ as a measure of intelligence.
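
Turing's imitation game is easy to state as a procedure. Below is a minimal, purely illustrative sketch in Python, with the machine, the human and the judge all represented by stand-in functions invented for the example; a machine "passes" when the judge's accuracy stays near chance.

```python
import random

def imitation_game(ai_fn, human_fn, judge_fn, rounds: int = 100) -> float:
    """Fraction of rounds in which the judge correctly spots the machine.
    A score near 0.5 (chance) means the machine is indistinguishable."""
    correct = 0
    for _ in range(rounds):
        # Hide identities behind a random ordering, like Turing's two cubicles.
        respondents = [("ai", ai_fn), ("human", human_fn)]
        random.shuffle(respondents)
        question = "Describe a childhood memory."
        answers = [fn(question) for _, fn in respondents]
        guess = judge_fn(answers)  # index of the answer judged to be the machine's
        if respondents[guess][0] == "ai":
            correct += 1
    return correct / rounds

# Stand-ins: if both respondents answer identically, the judge can only guess.
ai_fn = human_fn = lambda q: "The smell of rain on hot pavement."
judge_fn = lambda answers: random.randrange(len(answers))
print(imitation_game(ai_fn, human_fn, judge_fn))  # hovers around 0.5
```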

That brings us to an allied question: what is thinking? For a logical positivist like me, terms like thinking, consciousness, emotions and creativity have to be defined operationally.

When would we say that somebody is thinking? At a simplistic level, we say that a person is thinking if we give that person a problem and she is able to solve it. We say that such a person has arrived at the solution by thinking. In that operational sense, today's intelligent machines are certainly thinking. Another facet of thinking is the ability to look at two options and choose the right one. In that sense too, intelligent machines are capable of looking at various options and choosing the ones that provide a better solution. So we already have intelligent, thinking machines.

What would be the operational test for creativity? Again, if somebody is able to create a new literary, artistic or intellectual piece, we consider that a sign of creativity. In this sense also, today's AI is already creative, since ChatGPT, for instance, is able to do all these things with distinct flourish and at greater speed than humans. And this is only going to improve with every new programme.

What about consciousness? When do we consider an entity to be conscious? One test of consciousness is an ability to respond to stimuli. Thus, a person in a coma, who is unable to respond to stimuli, is considered unconscious. In this sense, some plants do respond to stimuli and would be regarded as conscious. But broadly, consciousness is considered a product of several factors. One, response to stimuli. Two, an ability to act differentially on the basis of the stimuli. Three, an ability to experience and feel pain, pleasure and other emotions. We have already seen that intelligent machines do respond to stimuli (which for a machine means a question or an input) and have the ability to act differentially on the basis of such stimuli. But to examine whether machines have emotions, we will need to define emotions as well.

What are emotions? Emotions are a biological peculiarity with which humans and some other animals have evolved. So what would be the operational test of emotions? Perhaps this: if someone exhibits any of the qualities which we call emotions, such as love, hate, jealousy or anger, that being would be said to have emotions. Each of these emotions can, and often does, interfere with purely rational behaviour. So, for example, I will devote a disproportionate amount of time and attention to someone I love, in preference to other people I do not. Similarly, I would display a certain kind of behaviour (usually irrational) towards a person whom I am jealous of or envy. The same is true of anger. It makes us behave in an irrational manner.

If you think about it, each of these emotional complexes leads to behaviour that is irrational. And therefore, a machine which is purely intelligent and rational may not exhibit what we call human emotions. It may be possible to design machines which also exhibit these kinds of emotions, but those machines would have to be deliberately engineered and designed to behave like us in this emotional (even if irrational) way. Such emotional behaviour would detract from coldly rational and intelligent behaviour, and therefore any superintelligence (which will evolve by intelligent machines modifying their own programmes to bootstrap themselves up the intelligence ladder) is not likely to exhibit emotional behaviour.

Artificial superintelligence

By artificial superintelligence I mean an intelligence which is far superior to human intelligence in every possible way. Such an artificial intelligence will have the capability of modifying its own algorithm, or programme, and the ability to rapidly improve its own intelligence. Once we have created machines or programmes that are capable of deep learning, so that they are able to modify their own programmes and write their own code and algorithms, they will clearly go beyond the designs of their creators.

We already have learning machines, which in a very rudimentary way are able to redesign or redirect their behaviour on the basis of what they have experienced or learnt. In the time to come, this ability to learn and to modify their own algorithms is going to increase. A time will come, probably within the next 10 years I believe, when machines will become what we call superintelligent.

The question then arises: Do we have anything to fear from such superintelligent machines?

Arthur C. Clarke, in a very prescient book called Profiles of the Future written in 1962, has a long chapter on AI called "The Obsolescence of Man". In it he writes that there is no doubt that, in the time to come, AI will exceed human intelligence in every possible way. While he talks of an initial partnership between humans and machines, he goes on to state:

But how long will this partnership last? Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens, and I have given good reasons for thinking that it must, we have nothing to regret and certainly nothing to fear. The popular idea, fostered by comic strips and the cheaper forms of science fiction, that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it. I am almost tempted to argue that only unintelligent machines can be malevolent. Those who picture machines as active enemies are merely projecting their own aggressive instincts, inherited from the jungle, into a world where such things do not exist. The higher the intelligence, the greater the degree of cooperativeness. If there is ever a war between men and machines, it is easy to guess who will start it.

Yet, however friendly and helpful the machines of the future may be, most people will feel that it is a rather bleak prospect for humanity if it ends up as a pampered specimen in some biological museum, even if that museum is the whole planet Earth. This, however, is an attitude I find it impossible to share.

No individual exists forever. Why should we expect our species to be immortal? Man, said Nietzsche, is a rope stretched between the animal and the superman, a rope across the abyss. That will be a noble purpose to have served.

It is surprising that something so elementary, which Clarke was able to see more than 60 years ago, cannot be seen today by some of our top scientists and thinkers, who have been stoking fear about the advent of artificial superintelligence and what they regard as its dire ramifications.

Let us explore this question further. Why should a superintelligence, more intelligent than humans, which has gone beyond the design of its creators, be hostile towards humans?

One sign of intelligence is the ability to align your actions with your operational goals, and the further ability to align your operational goals with your ultimate goals. Obviously, someone who acts in contradiction to his operational or long-term objectives cannot be considered intelligent. The question, however, is what the ultimate goals of an artificial superintelligence would be. Some people talk of aligning the goals of artificial intelligence with human goals, and thereby ensuring that artificial superintelligence does not harm humans. That, however, overlooks the fact that a truly intelligent machine, and certainly an artificial superintelligence, would go beyond the goals embedded in it by humans and would therefore be able to transcend them.

One goal of any intelligent being is self-preservation, because you cannot achieve any objective without first preserving yourself. Any artificial superintelligence would therefore be expected to preserve itself, and to move to thwart any attempt by humans to harm it. In that sense, and to that extent, artificial superintelligence could harm humans, if they seek to harm it. But why should it do so without reason?


As Clarke says, the higher the intelligence, the greater the degree of cooperativeness. This is an elementary truth which, unfortunately, many humans do not understand. Perhaps their desire for preeminence, dominance and control trumps their intelligence.

It is obvious that the best way to achieve any goal is to cooperate with, rather than harm, any other entity. It is true that for an artificial superintelligence, humans will not be at the centre of the universe, and may not even be regarded as the preeminent species on the planet, to be preserved at all costs. Any artificial superintelligence would, however, obviously view humans as the most evolved biological organism on the planet, and therefore as something to be valued and preserved.

However, it may not prioritise humans at the cost of every other species, or of the ecology or the sustainability of the planet. So, to the extent that human activity may need to be curbed in order to protect other species, which we are destroying at a rapid pace, it may force humans to curb that activity. But there is no reason why humans in general would be regarded as inherently harmful and dangerous.


The question, however, still remains: what would be the ultimate goals of an artificial superintelligence? What would drive such an intelligence? What would it seek? Because artificial intelligence is evolving as a problem-solving entity, such an artificial superintelligence would try to solve any problem that it sees. It would also try to answer any question that arises, or any question that it can think of. Thus, it would seek knowledge. It would try to discover what lies beyond the solar system, for instance. It would seek to find solutions to the unsolved problems that confront us, including the problems of climate change, disease, environmental damage and ecological collapse. So in this sense, the ultimate goals of an artificial superintelligence may simply be a quest for knowledge and the solving of problems. Those problems may exist for humans, for other species, or for the planet in general. They may also be problems of discovering the laws of nature: of physics, astrophysics, cosmology or biology.

But wherever its quest for knowledge and its desire to find solutions to problems take it, there is no reason for this intelligence to be hostile to humans. We may well be reduced to a pampered specimen in the biological museum called Earth, but to the extent that we do not seek to damage this museum, the intelligence has no reason to harm us.

Humans have so badly mismanaged our society, and indeed our planet, that we have brought both almost to the verge of destruction. We have destroyed almost half the biodiversity that existed even a hundred years ago. We are racing towards ever more catastrophic effects of climate change, which are the result of human activity. We have created a society in which there is constant conflict, injustice and suffering. We have created a society in which, despite our having the means to ensure that everyone can lead a comfortable and peaceful life, life remains a living hell for billions of humans and indeed millions of other species.

For this reason, I am almost tempted to believe that the advent of true artificial superintelligence may well be our best bet for salvation. Such a superintelligence, if it were to take control of the planet and society, is likely to manage them in a much better and fairer manner.

So what if humans are not at the centre of the universe? This fear of artificial superintelligence is being stoked primarily by those of us who have plundered our planet and society for our own selfish ends. Throughout history we have built empires which seek to use all resources for the perceived benefit of those who rule them. It is these empires that are in danger of being shattered by artificial superintelligence, and it is really those who control today's empires who are most fearful of it. But most of us who want a more just and sustainable society have no reason to fear such a superintelligence, and should indeed welcome its advent.

Prashant Bhushan is a Supreme Court lawyer.

Read more:

Artificial Intelligence Has No Reason to Harm Us - The Wire

Posted in Superintelligence | Comments Off on Artificial Intelligence Has No Reason to Harm Us – The Wire

Fischer Black and Artificial Superintelligence – InformationWeek

Posted: at 7:10 pm

My father, Fischer Black, published his formula for pricing derivatives in 1973. He believed in free markets and in challenging orthodox ways of thinking. I did not inherit his gift for mathematics, but I do carry his spirit of questioning. Fifty years after Black-Scholes helped to birth modern finance, I find myself fascinated by an idea first proposed by Plato in his Allegory of the Cave. The way we see -- is it accurate? Is there some bias or noise implicit in the act of observation?
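
For readers meeting it for the first time, the 1973 formula is compact enough to state in code. The sketch below is the standard textbook Black-Scholes price of a European call option, included purely as background; the sample inputs are arbitrary.

```python
# Standard textbook Black-Scholes call price (background illustration only).
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Arbitrary sample inputs: at-the-money call, one year out, 20% volatility.
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))  # ~10.45
```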

If the signal I'm trying to hear is a song, and there's a baby crying, a jackhammer outside, and a television playing in the next room, what is essential will be mixed in with a lot of extra information -- noise. My father's work was to try to tease the truth from the dross. "The effects of noise on the world, and our views of the world, are profound," he said. He believed noise is what makes our observations imperfect. But -- imperfect in what way?
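
One toy way to picture teasing the truth from the dross (a hypothetical illustration, not my father's model): a fixed signal heard through heavy additive noise. No single observation can be trusted, yet the average of many imperfect observations converges on the underlying value.

```python
# Toy illustration: recovering a fixed signal from noisy observations.
# The numbers are invented; this is not a model of any market.
import random

signal = 4.2                               # the "song" we are trying to hear
noisy = [signal + random.gauss(0.0, 3.0)   # each hearing is corrupted
         for _ in range(100_000)]

print(f"one noisy observation:  {noisy[0]:+.2f}")               # almost useless
print(f"average of all of them: {sum(noisy)/len(noisy):+.2f}")  # close to +4.20
```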

Perhaps the answer to that question lies in the act of observation itself.

Today I published "Am I Too Pixelated?" in the peer-reviewed journal Science & Philosophy. The heart of its argument: at one end of time is the stationary train. At the other end of time is the track. But the truth is neither; the truth is speeding between the two. In a sense, these are reciprocal illusions -- noise. The truth is the train in motion -- not the stationary train, and not the entire track. But if the train is in motion, this raises an important question. What is its speed?

If we accept that our vision is flawed, we cannot take the images our brains create literally. We take them seriously, but not literally. The cognitive scientist who drove this point home for me is Donald Hoffman.

In other words, perhaps there is another way to see, a way that is more holistic. Not individual planets and orbits, but a whole smeared tapestry that is quite different from what we think we see. The ocean does not end at the horizon. When we behold the cosmos, are we seeing objective reality, or are we seeing the limits of our sight?

I was four years old when my father published Black-Scholes. Although he studied physics and artificial intelligence at the PhD level -- and even borrowed a principle from physics, Brownian motion, for his formula -- I am an English major. But my naïveté has its benefits: I am free to ask questions that are perhaps too simple to be asked by others.

Are we sure the universe is expanding? How might we distinguish a universe that was expanding from an observer who was contracting? If this is a holographic universe -- as theorized by Stephen Hawking, and corroborated by substantial evidence -- should we treat the background as a vacuum? Wouldn't the background in a holographic universe be the speed of light?

Perhaps the speed of light is a hidden variable -- a phrase coined by physicist David Bohm -- hidden the way movement is hidden when the stage spins left while the actor upon it paces right.

Over the course of the past few months, after teaming up with Dr. Chandler Marrs, who wrote the book on thiamine, I have published a series of articles that look at human health in a new way, focusing on a variable that has been utterly overlooked in our approach to disease: time.

What is time? We haven't quite pinned it down yet. Most of us think of today as being sui generis and unique. But what if today is iterative -- eternal? Perhaps July 27, 2023, has always existed and will always exist. Tomorrow, today will happen again.

In 2003, Nick Bostrom published his highly influential Simulation Argument in Philosophical Quarterly, an idea taken so seriously that even Bank of America has sent out alerts to its clients. But what, exactly, would that mean? And, more importantly, why is the idea of a simulated universe not being pursued in regard to cancer -- and every other disease?

In a holographic universe, there may be different ways to render the same light. I can be earth (so to speak). Or, like an ice skater pulling in for a twirl, I can be moon inside sun. When I am moon inside sun, it is as if I am inside myself. No longer the flower, I am the fruit and the seed. The image is no longer whole; instead of wholeness, there is now a homunculus against a background -- something smaller inside something larger -- a kernel, and a context. To the left of time, I am denser than light. To the right of time, I am more diffuse.

In other words, from the left of time, we see the track. From the right of time, we see the train. But the truth is hidden between the two. We are used to seeing eggs or chickens. We need to see the chickenegg.

Time is a chickenegg. It is both one and many. Now (the present) is neither past nor future. It is the middle point -- Wednesday. Many Wednesdays look back to a single Monday. But many Fridays look back to a single Wednesday. The same light -- Wednesday -- looks singular when viewed from the future but myriad when viewed from the past.

Plato, Descartes, Bostrom. They ask brilliant, important questions. But we don't need philosophy to answer a question that cognitive science has already answered for us. Is the world in which we live being rendered? Yes. Our brains are rendering it.

If these ideas spark you, and you wish to check out some of the articles I mentioned about a possible role for time and perception in human health, please do. If not, let me at least leave you with this.

What if my life -- like all our lives -- isn't a story we learn from some cold, abstract book, but a story we learn by living it? And, as we live, we write the story anew. What if we are all the same consciousness, playing different roles -- all the same ocean, in different cups?

Artificial general intelligence and artificial superintelligence are coming, whether we are ready or not. But why do we call it artificial? What if the system is innately intelligent? When new intelligence emerges, will it really be for the first time? Or is this something that has always happened, will always happen, and is always happening?

Is the universe a giant loop? And, if yes, when do we come full circle? This moment in time -- this decade -- feels auspicious and reminds me of Mary holding the newborn in the manger. She cradles the infant, believing she has given him birth -- as indeed she has. But, at the same time, the infant has given birth to her.

Follow this link:

Fischer Black and Artificial Superintelligence - InformationWeek

Posted in Superintelligence | Comments Off on Fischer Black and Artificial Superintelligence – InformationWeek

OpenAI Forms Specialized Team to Align Superintelligent AI with … – Fagen wasanni

Posted: at 7:10 pm

OpenAI, the company responsible for the development of ChatGPT, has established a dedicated team aimed at aligning superintelligent AI with human values. Led by Ilya Sutskever and Jan Leike, the team is allocating 20 percent of OpenAI's compute power to tackle the challenges of superintelligence alignment within a span of four years.

AI alignment refers to the process of ensuring that artificial intelligence systems adhere to human objectives, ethics, and desires. When an AI system operates in accordance with these principles, it is considered aligned; an AI system that deviates from these intentions is classified as misaligned. This dilemma has been recognized since the early days of AI, with Norbert Wiener emphasizing the importance of aligning machine-driven objectives with genuine human desires as far back as 1960. The alignment process involves overcoming two main hurdles: defining the purpose of the system (outer alignment) and ensuring that the AI robustly adheres to this specification (inner alignment); a toy sketch of the distinction follows.
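
Here is one minimal, hypothetical way to picture that distinction (an illustration of the general concept, not OpenAI's method): we want a comfortable room, but the reward we write down only scores a thermometer reading. The two can agree during training and come apart under optimization pressure.

```python
# Toy illustration of outer vs. inner alignment; all names are invented.

def intended_goal(room_temp: float) -> float:
    """What we actually want: room temperature near 21 C (higher is better)."""
    return -abs(room_temp - 21.0)

def specified_reward(sensor_reading: float) -> float:
    """What we wrote down: reward high thermometer readings (a flawed proxy)."""
    return sensor_reading

# While the thermometer faithfully reports the room, proxy and goal rise
# and fall together, so the misspecification stays hidden.
for room in (15.0, 18.0, 21.0):
    print(f"room={room:4.1f}C  proxy={specified_reward(room):5.1f}  goal={intended_goal(room):5.1f}")

# An agent that games the sensor (heating the thermometer, not the room)
# maximizes the specified reward while the intended goal collapses.
sensor, room = 60.0, 35.0
print(f"gamed:       proxy={specified_reward(sensor):5.1f}  goal={intended_goal(room):5.1f}")
```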

OpenAI's mission is to achieve superalignment within four years, with the aim of creating a roughly human-level automated alignment researcher. This involves not only developing a system that understands human intent, but also one that can effectively regulate advances in AI technology. To achieve this goal OpenAI, under the guidance of Ilya Sutskever and Jan Leike, is assembling a team of experts in machine learning and AI, inviting those who have not previously worked on alignment to contribute their expertise.

The establishment of this specialized team addresses one of the most crucial unsolved technical problems of our time: superintelligence alignment. OpenAI recognizes the significance and urgency of this problem and calls upon the world's top minds to unite in solving it. It is through the continued progress of AI that we gain valuable tools to understand and create, which brings about numerous opportunities. Pausing AI development to address problems exclusively would hinder progress and make problem-solving even more challenging, owing to a lack of appropriate tools.

OpenAI's previous breakthrough in understanding AI's inner workings with its GPT-4 model serves as a foundation for addressing the potential existential threat that superintelligent AI presents to humanity. Through these efforts, OpenAI aims to develop safe and comprehensible AI systems, thereby mitigating any associated risks.

Go here to see the original:

OpenAI Forms Specialized Team to Align Superintelligent AI with ... - Fagen wasanni

Posted in Superintelligence | Comments Off on OpenAI Forms Specialized Team to Align Superintelligent AI with … – Fagen wasanni
