
Category Archives: Superintelligence

10 Best Books on Artificial Intelligence | TheReviewGeek … – TheReviewGeek

Posted: August 2, 2023 at 7:10 pm

So, you want to dive deeper into the world of artificial intelligence? As AI continues to transform our lives in so many ways, gaining a better understanding of its concepts and capabilities is crucial. The field of AI is vast, but some books have become classics that every curious reader should explore. We've compiled a list of 10 groundbreaking books on artificial intelligence that will boost your knowledge and feed your fascination with this fast-growing technology.

From philosophical perspectives on superintelligence to practical applications of machine learning, these books cover the past, present, and future of AI in an accessible yet compelling way. Whether you're a beginner looking to learn the basics or an expert wanting to expand your mind, you'll find something inspiring and thought-provoking in this list. So grab a cup of coffee, settle into your favourite reading spot, and let's dive in. The future is here, and these books will help prepare you for what's to come.

Nick Bostrom's Superintelligence is a must-read if you want to understand the existential risks posed by advanced AI.

This thought-provoking book argues that once machines reach and exceed human-level intelligence, an intelligence explosion could occur. Superintelligent machines would quickly become vastly smarter than humans and potentially uncontrollable.

Max Tegmark's thought-provoking book Life 3.0 explores how AI may change our future. He proposes that artificial general intelligence could usher in a new stage of life on Earth.

As AI systems become smarter and smarter, they may eventually far surpass human intelligence. Tegmark calls this hypothetical point the singularity. After the singularity, AI could design even smarter AI, kicking off a rapid spiral of self-improvement and potentially leading to artificial superintelligence.

The Master Algorithm by Pedro Domingos explores the quest for a single algorithm capable of learning and performing any task, also known as the master algorithm. This book examines the five major schools of machine learning (symbolists, connectionists, evolutionaries, Bayesians, and analogizers), exploring their strengths and weaknesses.

Domingos argues that for AI to achieve human-level intelligence, these approaches must be combined into a single master algorithm. He likens machine learning to alchemy, with researchers combining algorithms like base metals to produce gold in the form of human-level AI. The book is an insightful overview of machine learning and its possibilities. While the concepts can be complex, Domingos explains them in an engaging, accessible way using colourful examples and analogies.

In his book The Future of the Mind, theoretical physicist Michio Kaku explores how the human brain might be enhanced through artificial intelligence and biotechnology.

Kaku envisions a future where telepathy becomes possible through electronic implants, allowing people to exchange thoughts and emotions. He also foresees the eventual mapping and understanding of the human brain, which could enable the transfer of memories and even consciousness into new bodies.

In his 2012 New York Times bestseller How to Create a Mind, futurist Ray Kurzweil makes the case that the human brain works like a computer. He argues that recreating human consciousness is possible by reverse engineering the algorithms of the brain.

Kurzweil believes that artificial general intelligence will soon match and eventually far surpass human intelligence. He predicts that by the 2030s, we will have nanobots in our brains that connect us to synthetic neocortices in the cloud, allowing us to instantly access information and expand our cognitive abilities.

Martin Ford's Rise of the Robots is a sobering look at how AI and automation are transforming our economy and job market. Ford argues that AI and robotics will significantly disrupt labour markets as many jobs are at risk of automation.

As AI systems get smarter and robots become more advanced, many human jobs will be replaced. Ford warns that this could lead to unemployment on a massive scale and greater inequality. Many middle-income jobs like cashiers, factory workers, and drivers are at high risk of being automated in the coming decades. While new jobs will be created, they may not offset the jobs lost.

In Homo Deus, Yuval Noah Harari explores how emerging technologies like artificial intelligence and biotechnology will shape the future of humanity.

Harari argues that humanity's belief in humanism, the idea that humans are the centre of the world, will come to an end in the 21st century. As AI and biotech advance, humans will no longer be the most intelligent or capable beings on the planet. Machines and engineered biological life forms will surpass human abilities.

Kai-Fu Lee's 2018 book AI Superpowers provides insightful perspectives on the rise of artificial intelligence in China and the United States. Lee argues that while the US currently leads in AI research, China will dominate in the application of AI technology.

As the former president of Google China, Lee has a unique viewpoint on AI ambitions and progress in both countries. He believes China's large population, strong technology sector, and government support for AI will give it an edge. In China, AI is a national priority and a core part of the government's long-term strategic planning. There is no shortage of data, given China's nearly 1 billion internet users. And top tech companies like Baidu, Alibaba, and Tencent are investing heavily in AI.

This classic book by Stuart Russell and Peter Norvig established itself as the leading textbook on AI. Now in its third edition, Artificial Intelligence: A Modern Approach provides a comprehensive introduction to the field of AI.

The book covers the full spectrum of AI topics, including machine learning, reasoning, planning, problem-solving, perception, and robotics. Each chapter has been thoroughly updated to reflect the latest advances and technologies in AI. New material includes expanded coverage of machine learning, planning, reasoning about uncertainty, perception, and statistical natural language processing.

This book provides an accessible introduction to the mathematics of deep learning. It begins with the basics of linear algebra and calculus to build up concepts and intuition before diving into the details of deep neural networks.

The first few chapters cover vectors, matrices, derivatives, gradients, and optimization, the essential math tools for understanding neural networks. You'll learn how to calculate derivatives, apply gradient descent, and understand backpropagation. These fundamentals provide context for how neural networks actually work under the hood.
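As a rough illustration of the kind of material those chapters cover, here is a minimal gradient-descent sketch in Python; the function, starting point, and learning rate are arbitrary choices for this example, not anything taken from the book.

```python
# Minimal gradient descent on f(w) = (w - 3)^2, whose minimum is at w = 3.
# The derivative f'(w) = 2 * (w - 3) tells us which way is "downhill".

def f(w):
    return (w - 3.0) ** 2

def grad_f(w):
    return 2.0 * (w - 3.0)

w = 0.0    # arbitrary starting point
lr = 0.1   # learning rate (step size)
for step in range(50):
    w -= lr * grad_f(w)   # move a small step against the gradient

print(round(w, 4))  # converges toward 3.0
```

Backpropagation applies the same idea at scale, using the chain rule to compute the gradient for every weight in a network at once.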

There we have it: our list of the 10 best books on AI. What do you think about our picks? Let us know your thoughts in the comments below.

Read the original:

10 Best Books on Artificial Intelligence | TheReviewGeek ... - TheReviewGeek

Posted in Superintelligence | Comments Off on 10 Best Books on Artificial Intelligence | TheReviewGeek … – TheReviewGeek

The Concerns Surrounding Advanced Artificial Intelligence and the … – Fagen wasanni

Posted: at 7:10 pm

Every day, there are new warnings about the dangers of advanced artificial intelligence (AI) and the need for regulation. Researchers and experts in the field, including Geoffrey Hinton, Yoshua Bengio, Eliezer Yudkowsky, Nick Bostrom, and Douglas Hofstadter, have expressed concerns about the exponential growth of AI surpassing human intelligence. This presents a major challenge known as the control problem.

Once AI becomes capable of improving itself, it is expected to quickly surpass human intelligence in every aspect. This raises the question of what it means to be a billion times more intelligent than a human. The best-case scenario would be benign neglect, where humans are insignificant to superintelligent AI. However, it is unlikely that humans will be able to control or anticipate the actions of such an entity.

The control problem is considered unsolvable due to the nature of superintelligent AI. Current AI systems are black boxes, meaning that neither humans nor the AI itself can explain or predict the decision-making process. Verification of the AI's choices becomes impossible, and humans are left unable to understand the AI's intentions or plans.

The precautionary principle suggests that companies should provide proof of safety before deploying AI technologies. However, many companies have released AI tools without adequately establishing their safety. The burden should be on companies to demonstrate that their AI products are safe, rather than on the public to prove otherwise.

The development of recursively self-improving AI, which is being pursued by many companies, poses the greatest risk. It could lead to an intelligence explosion or singularity where the AI's abilities become unpredictable. The consequences of such superintelligence are unknown and may have severe implications for human well-being and survival.

To address these concerns, scientists and engineers are working to develop solutions. Efforts are being made to implement measures like watermarking AI-generated text to verify its source. However, the attention given to these issues may prove too little and too late.

Considering the ethical implications, it is essential to prioritize the welfare not only of present humans but also of future generations. The risks associated with AI need to be assessed over long time horizons to ensure the safety and well-being of humanity as a whole.

With the stakes incredibly high, it is crucial to find answers to these concerns before it's too late. The development of advanced artificial intelligence poses significant challenges that require careful consideration and action from both researchers and society at large.

Link:

The Concerns Surrounding Advanced Artificial Intelligence and the ... - Fagen wasanni

Posted in Superintelligence | Comments Off on The Concerns Surrounding Advanced Artificial Intelligence and the … – Fagen wasanni

Decentralized AI: Revolutionizing Technology and Addressing … – Fagen wasanni

Posted: at 7:10 pm

The field of artificial intelligence (AI) has made significant strides, but many still struggle to grasp its implications. The terms narrow AI, superintelligence, and artificial general intelligence (AGI) are now commonly used, alongside machine learning and deep learning. Companies across industries have embraced AI to streamline their operations, benefitting businesses and individuals alike.

However, as AI becomes more advanced and desired, concerns about centralization and its potential risks arise. It is feared that a few organizations with access to AI could control its development, leading to negative consequences. To address these concerns, the concept of decentralized AI has emerged.

Decentralized AI allows individuals to have more influence over the AI products they use and offers a wider range of models to choose from. By incorporating blockchain technology, decentralized AI ensures security and transparency. Public blockchains, governed by the community rather than a central authority, foster trust and code enforceability. There are already over 50 blockchain-based AI companies, with exponential growth expected in the future.

Decentralized AI also empowers the community to participate in the development and direction of AI models. Democratic governance gives users a say in how AI models operate, a crucial difference from centralized AI. Engaging the community eases concerns and fosters comfort with AI technology.

While there are challenges, such as the opacity of AI models, solutions are emerging. Explainable AI (XAI) and open-source models offer potential ways to address the black box problem of decentralized AI, promoting transparency and trust.
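As a loose, generic illustration of what explainable-AI tooling can look like in practice, here is a minimal Python sketch using permutation importance from scikit-learn; the synthetic dataset and random-forest model are stand-ins for this example and are not taken from any of the decentralized-AI projects discussed here.

```python
# Minimal sketch of one common XAI technique: permutation importance,
# which measures how much a model's score drops when a single feature
# is shuffled. Synthetic data; purely an illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

The intuition is simple: shuffling an informative feature hurts the model's accuracy far more than shuffling noise, which gives a rough, model-agnostic peek inside an otherwise opaque predictor.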

Decentralized AI offers several benefits, including enhanced security through blockchain encryption and immutability. It proactively detects anomalies in data, alerting users to potential breaches. Decentralization, with data distributed across multiple nodes, minimizes vulnerability to unauthorized access and tampering.

Decentralized AI is revolutionizing technology and addressing concerns by empowering individuals, ensuring transparency, and enhancing security. By embracing decentralized AI, society can harness the full potential of AI while mitigating risks associated with centralization.

Read more from the original source:

Decentralized AI: Revolutionizing Technology and Addressing ... - Fagen wasanni

Posted in Superintelligence | Comments Off on Decentralized AI: Revolutionizing Technology and Addressing … – Fagen wasanni

An ‘Oppenheimer Moment’ For The Progenitors Of AI – NOEMA – Noema Magazine

Posted: at 7:10 pm

Credits

Nathan Gardels is the editor-in-chief of Noema Magazine.

The movie director Christopher Nolan says he has spoken to AI scientists who are having an Oppenheimer moment, fearing the destructive potential of their creation. "I'm telling the Oppenheimer story," he reflected on his biopic of the man, "because I think it's an important story, but also because it's absolutely a cautionary tale." Indeed, some are already comparing OpenAI's Sam Altman to the father of the atomic bomb.

Oppenheimer was called the American Prometheus by his biographers because he hacked the secret of nuclear fire from the gods, splitting matter to release horrendous energy he then worried could incinerate civilization.

Altman, too, wonders if he did something "really bad" by advancing generative AI with ChatGPT. He told a Senate hearing, "If this technology goes wrong, it can go quite wrong." Geoffrey Hinton, the so-called godfather of AI, resigned from Google in May saying part of him regretted his life's work of building machines that are smarter than humans. He warned that "it is hard to see how you can prevent the bad actors from using it for bad things." Others among his peers have spoken of the risk of extinction from AI that ranks with other existential threats such as nuclear war, climate change and pandemics.

For Yuval Noah Harari, generative AI may be no less a shatterer of societies, or "destroyer of worlds" in the phrase Oppenheimer cited from the Bhagavad Gita, than the bomb. This time sapiens have become the gods, siring inorganic offspring that may one day displace their progenitors. In a conversation some years ago, Harari put it this way: "Human history began when men created gods. It will end when men become gods."

As Harari and co-authors Tristan Harris and Aza Raskin explained in a recent essay, "In the beginning was the word. Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.'s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers."

They went on:

For thousands of years, we humans have lived inside the dreams of other humans. We have worshiped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence.

Soon we will finally come face to face with Descartes's demon, with Plato's cave, with the Buddhist Maya. A curtain of illusions could descend over the whole of humanity, and we might never again be able to tear that curtain away or even realize it is there.

This prospect of a nonhuman entity writing our narrative so alarms the Israeli historian and philosopher that he urgently advises that sapiens stop and think twice before we relinquish the mastery of our domain to technology we empower.

"The time to reckon with A.I. is before our politics, our economy and our daily life become dependent on it," he, Harris and Raskin warn. "If we wait for the chaos to ensue, it will be too late to remedy it."

Writing in Noema, Google vice president Blaise Agüera y Arcas and colleagues from the Quebec AI Institute don't see the Hollywood scenario of a Terminator event, where miscreant AI goes on a calamitous rampage, anywhere on the near horizon. They worry instead that focusing on an existential threat in the distant future distracts from mitigating the clear and present dangers of AI's disruption of society today.

What worries them most is already at hand before AI becomes superintelligent: mass surveillance, disinformation and manipulation, military misuse of AI and the replacement of whole occupations on a widespread scale.

For this group of scientists and technologists, extinction from a rogue AI is an extremely unlikely scenario that depends on dubious assumptions about the long-term evolution of life, intelligence, technology and society. It is also an unlikely scenario because of the many physical limits and constraints a superintelligent AI system would need to overcome before it could go rogue in such a way. There are multiple natural checkpoints where researchers can help mitigate existential AI risk by addressing tangible and pressing challenges without explicitly making existential risk a global priority.

As they see it, extinction is induced in one of three ways: competition for resources; hunting and over-consumption; or altering the climate or their ecological niche such that the resulting environmental conditions lead to their demise. None of these three cases apply to AI as it stands.

Above all: "For now, AI depends on us, and a superintelligence would presumably recognize that fact and seek to preserve humanity since we are as fundamental to AI's existence as oxygen-producing plants are to ours. This makes the evolution of mutualism between AI and humans a far more likely outcome than competition."

To assign an infinite cost to the unlikely outcome of extinction would be akin to turning all our technological prowess toward deflecting a one-in-a-million chance of a meteor strike on Earth as the planetary preoccupation. Simply, existential risk from superintelligent AI does not warrant being a global priority, in line with climate change, nuclear war, and pandemic prevention.

Any dangers, distant or near, that may emerge from competition between humans and budding superintelligence will only be exacerbated by rivalry among nation-states.

This leads to one last thought on the analogy between Sam Altman and Oppenheimer, who in his later years was persecuted, isolated and denied official security clearance because the McCarthyist fever of the early Cold War cast him as a Communist fellow traveler. His crime: opposing the deployment of a hydrogen bomb and calling for working with other nations, including adversaries, to control the use of nuclear weapons.

In a speech to AI scientists in Beijing in June, Altman similarly called for collaboration on how to govern the use of AI. "China has some of the best AI talents in the world," he said. "Controlling advanced AI systems requires the best minds from around the world. With the emergence of increasingly powerful AI systems, the stakes for global cooperation have never been higher."

One wonders, and worries, how long it will be before Altman's sense of universal scientific responsibility is sucked, like Oppenheimer, into the maw of the present McCarthy-like anti-China hysteria in Washington. No doubt the fervent atmosphere in Beijing poses the mirror risk for any AI scientist with whom he might collaborate on behalf of the whole of humanity instead of for the dominance of one nation.

At the top of the list of clear and present dangers posed by AI is how it might be weaponized in the U.S.-China conflict. As Harari warns, the time to reckon with such a threat is now, not when it is a realized eventuality and too late to roll back. Responsible players on both sides need to exercise the wisdom that can't be imparted to machines and cooperate to mitigate risks. For Altman to suffer the other Oppenheimer moment would bring existential risk ever closer.

One welcome sign is that U.S. Secretary of State Antony Blinken and Commerce Secretary Gina Raimondo acknowledged this week that no country or company can shape the future of AI alone: "[O]nly with the combined focus, ingenuity and cooperation of the international community will we be able to fully and safely harness the potential of AI."

So far, however, the initiatives they propose, essential as they are, remain constrained by strategic rivalry and limited to the democratic world. The toughest challenge for both the U.S. and China is to engage each other directly to blunt an AI arms race before it spirals out of control.

Read more:

An 'Oppenheimer Moment' For The Progenitors Of AI - NOEMA - Noema Magazine

Posted in Superintelligence | Comments Off on An ‘Oppenheimer Moment’ For The Progenitors Of AI – NOEMA – Noema Magazine

The Implications of AI Advancements on Human Thinking and … – Fagen wasanni

Posted: at 7:10 pm

With the rapid advancement of Artificial Intelligence (AI), there is growing concern about the potential consequences for human thinking capabilities. According to a report from investment bank Goldman Sachs, AI has the potential to replace approximately 300 million full-time jobs, leading to speculation about AI replacing humans in various fields and jeopardizing the uniqueness of human abilities.

Some AI developers claim that their tools can write, draw, and create content for users. However, there are concerns that this may hinder humans from thinking uniquely and creatively. For instance, there are worries that AI could harm creative writing in English, since it may be used as a shortcut. Ishfaq Raazi, a 21-year-old poet who writes in Urdu, believes that AI can never fully grasp the depth of knowledge required for poetry, including its various genres and meters.

Virtual learning platforms like Udemy offer courses on ChatGPT, an AI-powered tool. While these platforms aim to attract more users, Raazi warns that using ChatGPT may diminish human passion and curiosity, as it takes away the instigative and pondering aspects of creative writing.

AI's impact on professions is also evident, with copywriters like Emily Hanley losing their jobs to AI-generated work. Hanley states that the collapse of her profession is just the beginning, as artists and creatives are not immune to automation. If a robot can do a job more cost-effectively, it is likely to replace human workers.

However, some individuals firmly believe that AI can never completely replace the human mind. Short story writer Rehana Shajar argues that AI lacks genuine emotions and empathy, which are crucial for creative writing. She suggests that AI can be embraced as a tool by writers, similar to past technologies that have been used for self-improvement.

The use of AI in journalism raises complex ethical questions, particularly regarding transparency, bias, and the role of human journalists. Trust, accuracy, accountability, and bias remain major ethical concerns in AI development. The possibility of replacing reporters with chatbots in newsrooms is becoming more plausible, with many news organizations already introducing virtual newscasters for social media.

Vijay Shekhar Sharma, CEO of Paytm, has expressed concerns over the potential dangers of superintelligence, warning that its arrival within this decade could have significant impacts on humanity.

As AI continues to advance, it is crucial to carefully consider its implications. While it may have numerous benefits, there are legitimate concerns about its impact on human thinking and creativity.

See original here:

The Implications of AI Advancements on Human Thinking and ... - Fagen wasanni

Posted in Superintelligence | Comments Off on The Implications of AI Advancements on Human Thinking and … – Fagen wasanni

Focusing on Tackling Algorithmic Bias is Key to Ethical AI … – Fagen wasanni

Posted: at 7:09 pm

AI ethicist Alice Xiang argues that while killer robots and superintelligence may be concerns for the future, the immediate and insidious harms being caused by artificial intelligence (AI) lie in algorithmic biases and inequalities. Xiang emphasizes the need to address existing biases in AI systems that entrench societal biases and perpetuate inequalities. Biased algorithms have been found to discriminate against marginalized groups such as women and people of color, often due to skewed training data. Xiang warns that as AI becomes more pervasive in high-stakes domains like healthcare, employment, and law enforcement, these biases can compound and create a more unequal society.

Xiang suggests that it is crucial to prioritize addressing these algorithmic biases over speculative long-term threats. She believes that preventing existential threats starts with identifying and mitigating the concrete harms that exist today. Despite efforts by industry players to establish ethical practices for AI development, algorithmic bias remains a persistent problem that has not been systematically fixed. Xiang highlights incidents like Google's mislabelling of photos and biased recruitment algorithms as examples of ongoing bias in AI systems.
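To make the notion of auditing for algorithmic bias a little more concrete, here is a minimal Python sketch of one simple check, comparing how often a model hands out a favorable outcome to each group (a demographic-parity check); the predictions and group labels are invented for illustration and are not drawn from any system mentioned above.

```python
# Minimal sketch of a demographic-parity check: compare the rate of
# favorable predictions a model gives to each group. The predictions
# and group labels below are invented purely for illustration.
from collections import defaultdict

predictions = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0]   # 1 = favorable outcome
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

totals, positives = defaultdict(int), defaultdict(int)
for pred, group in zip(predictions, groups):
    totals[group] += 1
    positives[group] += pred

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)                                                      # {'a': 0.8, 'b': 0.2}
print("parity gap:", max(rates.values()) - min(rates.values()))   # roughly 0.6
```

Real audits use richer metrics (equalized odds, calibration within groups, and so on), but even this toy gap makes the harm measurable rather than anecdotal.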

Xiang also raises concern about the representational biases found in generative AIs, which reproduce stereotypes present in the training data. She points out that if image generators are biased, it can influence creators who use them for inspiration in creative fields. Efforts to prevent these harms are still in the early stages, with companies relying on varying levels of self-governance due to the lack of comprehensive government oversight.

To address these issues, Xiang emphasizes the need for companies to prioritize AI ethics from the early stages of development. She calls for increased investment in the underfunded field of AI ethics to ensure that developers have the necessary knowledge and tools to address biases and make ethical decisions. By focusing on tackling algorithmic bias, the industry can work towards developing more ethical AI systems that serve all members of society.

Continue reading here:

Focusing on Tackling Algorithmic Bias is Key to Ethical AI ... - Fagen wasanni

Posted in Superintelligence | Comments Off on Focusing on Tackling Algorithmic Bias is Key to Ethical AI … – Fagen wasanni

Discover This SUPER Early AI Crypto Gem – Altcoin Buzz

Posted: at 7:09 pm

Don't miss this again! Last week we shared this massive IDO opportunity with you. I am not sure how many of you got in early, but for those who did: congrats! Your bag pumped 782.6%.

Now that the coin has already pumped, why am I talking about it? Because for those who did not get in, I don't want you to miss this early gem opportunity again. Take a look at this: at IDO, $SOPH, the Sophiaverse token, was $0.03. Now it's trading close to $0.20, which is a 643% price pump. But it's still an early stage to get into, because Sophiaverse is about to SHOCK the crypto industry. Let's discover more about this AI crypto gem.

When the legendary actor Will Smith could not resist the charm of the real Sophia robot, how could we resist Sophiaverse? This futuristic metaverse allows you to interact with your own personal Sophia as she grows with you and forms a unique personality.

That looks so futuristic! But is it just another AI gimmick, or is something really shocking coming up? The next couple of sections are all from a research perspective, but if you want my honest opinion, watch the video till the end.

For those of you who don't know, let me explain Sophiaverse in the simplest possible way. Without knowing what the project is about, you cannot understand the actual potential of the $SOPH token. SophiaVerse is an NFT- and game-based marketplace for integrating games into the metaverse.

Essentially, a gaming platform supported by AI technology that seeks to enable human interaction with superintelligence facilitated by the SOPH utility token. Simply put, Sophiaverse is like a whole game where you can teach Sophia, learn from her, and even monetize your data through NFTs.

Imagine having your own unique Sophia AI companion that you can customize and train based on your preferences. And the best part is, you can trade and sell your Sophias traits to others.

In short, Sophiaverse brings together two of the most bullish crypto narratives, AI and Web3 gaming, for a super bullish future.

We know the AI crypto narrative has kind of cooled off. Then why did Sophiaverse catch such massive attention? It is because the Sophiaverse ecosystem was developed by David Hanson, the creator of the humanoid robot Sophia, and Ben Goertzel, a cognitive scientist who developed the explosive AI blockchain projects SingularityNET and SingularityDAO. They, along with Hanson Robotics, are all partners of Sophiaverse.

So the project is definitely backed by successful leaders, and looking at the roadmap, I have no doubt Sophiaverse will shock us all.

Before I tell you all about the utility of $SOPH, we need to understand what's in the Sophiaverse ecosystem. The Sophiaverse ecosystem includes:

SOPH staking is already live, and for some pairs, like ETH and BNB, the APR is as high as 159% and 379% respectively.

Now we can understand the SOPH token's utility, because unless a token has lots of utility in its ecosystem, it doesn't pump even if the ecosystem is growing. Here are the 6 major utilities in the Sophiaverse ecosystem:

All in all, the entire Sophiaverse ecosystem revolves around the SOPH token, which means that as the ecosystem grows in popularity, trading of $SOPH will increase, and hence $SOPH has quite a high chance of going to the moon in the coming bull run. But wait, will Sophiaverse really catch on and become popular? To understand that, let's take a look at its roadmap:

Venture capitalists have been investing in this segment for some time, and investor interest means a big opportunity lies there. Direct competitors to Sophiaverse include Ultiverse, which is funded by Binance Labs, and Altered State Machine. It will be interesting to see how this segment evolves.

The project seems solid, with collaborations from renowned AI developers and partnerships with Hanson Robotics and SingularityNET. They share the goal of open-source, accessible AI. The utility of the token looks promising too, serving as in-game currency, a means for upgrades, and more.

Interesting to see how they're making AI fun and accessible for all ages and skill levels. The idea of cross-media and cross-game compatibility is itself very intriguing to me. Their successful launch and $7 million market cap show there's genuine interest in the project.

I'm really bullish on this whole SingularityNET ecosystem, and I can't wait to see how Sophiaverse unfolds. But make sure you do your own research before making any investments.

For more cryptocurrency news, check out the Altcoin Buzz YouTube channel.

Our popular Altcoin Buzz Access group generates tons of alpha for our subscribers. And for a limited time, it's free. Click the link and join the conversation today.

See the rest here:

Discover This SUPER Early AI Crypto Gem - Altcoin Buzz

Posted in Superintelligence | Comments Off on Discover This SUPER Early AI Crypto Gem – Altcoin Buzz

Biden meets with AI leaders to discuss its ‘enormous promise and its … – KULR-TV

Posted: June 20, 2023 at 8:40 pm

(The Center Square) President Joe Biden held an event in California Tuesday to discuss the future of artificial intelligence and what regulations may be enacted to rein it in.

Biden hosted the meeting with federal officials, AI experts and governors to discuss AI's "enormous promise and its risks."

"As I've said before, we will see more technological change in the next 10 years than we've seen in the last 50 years, and maybe even beyond that," Biden told reporters at the event at a San Francisco hotel.

"AI is already driving that change in every part of American life, often in ways that we don't notice," Biden said. "AI is already making it easier to search the internet, helping us drive to our destinations while avoiding traffic in real time."

Biden gave a nod to the risks to our society, our economy and our national security. In October of last year, Biden released an AI Bill of Rights. He also signed an executive order earlier this year to fight bias in the design of AI.

While advanced AI has the ability to operate independently from its designers once it is set up, those designers can build certain biases or political slants into how the AI processes information and responds to requests.

Biden pointed to this as an opportunity for spreading misinformation. After giving his remarks, he asked media to leave the room for the official meeting.

Biden held the event after the release of ChatGPT, a new technology where users can interact with artificial intelligence in a more significant way. The technology was considered a major breakthrough for AI and spread quickly in popularity in part because of its ability to apparently think creatively and do things like write entire elaborate poems in just seconds.

The breakthrough has resurfaced concerns that AI could be used for an array of harmful purposes, ranging from malicious use from foreign powers or companies to indirect consequences like lost jobs. Experts say AI could also begin acting in interests contrary to its creators and humans in general, even without its creators being aware of it.

Billionaire Elon Musk, who helped found OpenAI but exited before it created ChatGPT, has called for a pause in the development of AI until regulations are enacted. He echoed that sentiment during a Twitter Spaces event as part of a Viva Technology conference last week.

"We could have a potentially catastrophic outcome," Musk said, adding that while AI's impact will likely be positive, "we need to minimize the possibility that something could go wrong with digital superintelligence."

Last month, Biden met with Alphabet, Anthropic, Microsoft, and OpenAI, the company that developed ChatGPT. The White House said in a statement that the meeting was to "underscore this responsibility and emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society."

Last month's meeting coincided with the White House's announcement of $140 million in AI research and development funding to be made available through the National Science Foundation.

View post:

Biden meets with AI leaders to discuss its 'enormous promise and its ... - KULR-TV

Posted in Superintelligence | Comments Off on Biden meets with AI leaders to discuss its ‘enormous promise and its … – KULR-TV

VivaTech: The Secret of Elon Musk’s Success? ‘Crystal Meth’ – The New Stack

Posted: at 8:40 pm

Elon Musk was soaking up the adulation Friday at Paris's Viva Technology, Europe's largest startup and tech conference. In a four-day confab that began Wednesday, VivaTech has featured French President Emmanuel Macron, Salesforce CEO Marc Benioff, and Yann LeCun, a Meta scientist and Turing Award winner.

But Musk was clearly the star invitee, with the keen interest in his appearance necessitating a move from a smaller venue to the 4,000-seat Dôme de Paris, most frequently used for musicals. (His mother, Maye Musk, was in the audience.)

To hoots of approval from the audience, a fawning Maurice Lévy, chairman of Publicis Groupe, an advertising company, invited Musk to sing and dance, if he wanted. Lévy then said, "Your name is a brand. It's a brand for innovation, for ambition…"

"For perfume," Musk interrupted.

Lévy continued, "You have been always proven right."

"Not always," Musk chuckled.

Then Lévy asked his first question: Will you still be right with Twitter?

"Sure, it was expensive," Musk answered, to audience laughter. (The CEO of Tesla and SpaceX paid a reported $44 billion for the social media outlet. In May, the asset manager Fidelity marked down its equity stake in the company, placing the overall value of X Holdings Corp., Twitter's parent company, at roughly $15 billion.)

"Listen, if I'm so smart, why did I pay so much for Twitter?" It was a question he never answered during the hourlong conversation.

Some of the subjects he did address during the interview, which included questions from representatives of large French corporations (L'Oréal, Orange, LVMH) and, in an impromptu and chaotic session at the end, from the audience, follow. (Quotes have been edited for length and clarity.)

What drives him: "Crystal meth is the answer. If you think Red Bull gives you wings… Just kidding, for the record."

The companies still have a lot to do for their core mission. For electric vehicles, sustainable energy, it's still… less than 1% of the global fleet is electric. So you've got about 2 billion cars and trucks on the road, but still less than 20 million are electric at this point. So this is a long way to go for sustainable energy, for sustainable energy generation.

[For] the Tesla mission, I think we're… we've made a lot of progress, but still there's a lot more ahead. Then SpaceX, the goal is a big goal, but we want to try to make life multi-planetary, to extend life beyond Earth. And I think this is important for a number of reasons.

The light of consciousness: It appears that we might just be the only consciousness, at least in this galaxy. And if so, that's kind of a scary prospect, because it means that the light of consciousness is like a tiny candle in a vast darkness. And we should do everything we can to prevent that candle from going out. [applause from the audience] So that means obviously taking the actions to ensure that Earth is good, that Earth is safe and secure for civilization.

Growing up and The Hitchhiker's Guide to the Galaxy: The thing that was maybe most significant from a philosophical standpoint was that when I was about maybe 12 or 13, I had somewhat of an existential crisis where I was like, What is the meaning of life? Is life just meaningless? Why are we here? What does it all mean?

And I read a lot of books on religion and philosophy and then ultimately, I read this book The Hitchhiker's Guide to the Galaxy, which is great. That book is really a philosophy book that's disguised as humor. And the point that [author] Douglas Adams makes is that the real difficulty is understanding what questions to ask about the answer that is the universe.

What he learned from Douglas Adams: It's essentially a philosophy of curiosity, of saying, What can we do to find out more about the nature of the universe and the meaning of life? And so that's the foundational element. And then from there you say, OK, well, if we want to find out the meaning of life, we have to expand the scope and scale of consciousness. We have to go out there, and we can explore the stars to know what questions to ask about the universe and understand the universe.

It's from that sort of core philosophy that these companies arise in most cases. You might say, How does Twitter help with that?

His prediction of Tesla's failure: There was a need for Tesla because at the time of starting Tesla, there were no electric vehicles being made, and the big car companies were not making electric vehicles. There were no startups that we were aware of making electric vehicles. So it's like, well, we should try.

And in the case of both Tesla and SpaceX, I thought the chance of success was maybe 10%. So its not like I thought it would be successful. I thought it would fail.

The risk of AI: I think there's a real danger for digital superintelligence having negative consequences. And so if we are not careful with creating artificial general intelligence, we could have potentially a catastrophic outcome. I think there's a range of possibilities.

The most likely outcome is positive for AI, but that's not every possible outcome. So we need to minimize the probability that something will go wrong with digital superintelligence. So I'm in favor of AI regulation because I think that AI is a risk to the public.

Changes at Twitter: I think that most people would say that their experience has improved. We've gotten rid of 90% of the bots and the scams. We've gotten rid of, I think, 95% of the child exploitation material that was on Twitter, which was a shock to see, but the amount of that was really terrible. Some of that had been going on for 10 years with no action.

We have open-sourced the algorithm, so we're trying to be as transparent as possible. So Twitter is the only social media company where you can see the actual code of the algorithm. So it's not like some secret black box. [audience applause] The way to build trust is not, "Take my word for it." It's, "Let's show you exactly how it works," and full transparency.

Are Twitter advertisers back?: Maybe with a few exceptions, almost all the advertisers have either come back or they said they will come back.

Twitter, a positive force in the universe: The overarching goal is to have Twitter be a force, a positive force for civilization. And so if you're on the platform and you're being harassed or bullied or whatever, obviously that's a negative experience.

What we're doing is what we call freedom of speech, but not freedom of reach. Yes, you can say offensive things, but then your content is going to get downgraded. [chuckled] So if you're a jerk, your reach will drop. [chuckled] So, yeah, I think that's the right thing. [chuckled, audience applause]

In response to a child's question about Neuralink, Musk's company that is developing implantable brain-computer interfaces: First of all, I want to assure everyone who may be worried about Neuralink, Neuralink is going to be a fairly slow process because anything that's done in humans, it's very slow. So sometimes people think that suddenly we're going to be ripping open one's head and then before you know it, everyone's connected to the internet, and then we're in trouble.

Hopefully later this year we'll do our first human device implantation. And this will be for someone [who is] tetraplegic, quadriplegic, [who] has lost the connection from their brain to their body. And we think that person will be able to communicate as fast as someone who has a fully functional body. So that's going to be a big deal.

Read the original here:

VivaTech: The Secret of Elon Musk's Success? 'Crystal Meth' - The New Stack

Posted in Superintelligence | Comments Off on VivaTech: The Secret of Elon Musk’s Success? ‘Crystal Meth’ – The New Stack

AI meets the other AI – POLITICO – POLITICO

Posted: at 8:40 pm

With help from Derek Robertson

A sign directs travelers to the start of the "1947 UFO Crash Site Tours" in Roswell, N.M., on June 10, 1997. | Eric Draper/AP Photo

If the explosion of artificial intelligence weren't mind-boggling enough, Washington is now confronting the possibility of another, weirder, AI: alien intelligence.

After a former intelligence official went public earlier this month with claims that he was told by other officials of a secret government program that possesses downed alien spacecraft, the House Oversight Committee has announced plans for a hearing on the matter.

And former deputy assistant secretary of defense for intelligence Christopher Mellon, now with Harvard's alien-focused Galileo Project, wrote in POLITICO Magazine that he has referred four people to the Pentagon's UFO office who say they have knowledge of secret government efforts to study off-world craft.

The Pentagon has said its UFO program has not discovered any verifiable information to substantiate claims about downed craft, and many stories about aliens and UFOs have been shown to result from some combination of delusion, confusion and disinformation.

But here at DFD, we like to keep an open mind.

After all, magic internet money, killer robots and AI itself were all the stuff of futuristic sci-fi before they became political hot potatoes in the present.

And it turns out that AI, in particular, has a thing or two to teach us about the possible existence of its extraterrestrial cousin.

To help us wrap our heads around this, we caught up with Ravi Starzl, an AI-focused computer science professor at Carnegie Mellon. Starzl is also an adviser to Americans for Safe Aerospace, an advocacy group founded by former Navy fighter pilot Ryan Graves, who has been calling attention to the UFO phenomenon since reporting a series of sightings that took place in 2014 and 2015 to Congress and the Pentagon.

At a practical level, how can AI help with identifying UFOs?

I've been helping some organizations develop machine learning algorithms and systems for being able to identify and characterize unknown aerial phenomena based on multimodal domains of data. So visual, textual, audio, radar.

Machine learning, and AI in particular, will be able to process the vast quantities of information that exist, make sense of it, and actually turn it into interpretable insights and even actionable information.

You need to be able to separate hoaxes and fakes from genuine phenomena, and machine learning is extremely useful for that.
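Purely as an illustration of the kind of triage Starzl describes, and not a description of any system he has built, here is a minimal Python sketch that flags unusual sensor tracks with an off-the-shelf anomaly detector; the "fused" feature values are invented for the example.

```python
# Minimal sketch of flagging unusual sensor tracks with an off-the-shelf
# anomaly detector. Each row stands in for features fused from several
# modalities (e.g. radar, visual, audio). The numbers are invented; this
# is an illustration, not the system described in the interview.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
ordinary_tracks = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
odd_track = np.array([[8.0, -7.5, 9.1, -8.8]])   # far outside the bulk

detector = IsolationForest(random_state=0).fit(ordinary_tracks)
print(detector.predict(odd_track))               # [-1] marks it as anomalous
```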

At a more abstract level, Ryan Graves has argued that the process occurring right now, in which human societies are grappling with the rise of AI, will prepare them to grapple with the possible existence of alien intelligence. What do you think?

It's dead on.

A real value in the current craze is it's forcing people to start to think about the fact that we are not the only cognitive entities operating in our world anymore. They're still not that sophisticated compared to where the fundamentals of that technology can take it. But we can still have a conversation with it right now and it can do work for us and it can give us ideas we didn't have.

That process of learning how to interact with a fundamentally alien, if you will, intelligence is going to open the whole zeitgeist up.

It sounds like an exciting time to be studying intelligence.

We're going to be very busy and living in very interesting times for the next 20 years as these things start to merge, diverge, and get analyzed and brought more into the mainstream.

When you say "these things"… do you mean human-created AI, or are you also talking about possible alien intelligence?

I guess in my mind, I'm having a hard time seeing the difference.

So, at a certain level, it's all just different forms of intelligence?

This is a question that has been wrestled with: What does it mean to have "the other"?

At some level, two humans are alien intelligences. Because one, they each have their own cognitive sphere. They each have their own mental models of reality. And they have to exchange information in order to collaborate.

That same phenomenon, like a matryoshka doll, just continues outward when youre dealing with super-organisms like societies.

And then from there you have formations interacting with other formations at the superintelligence level. So in many respects the question of "Is there alien intelligence, and how would we deal with it?" has already been answered definitively. Yes, because it's already with us.

But now the question becomes how exotic, what processes created it, and how do we establish a more efficient or more consistent or coherent or safe way of interacting with it and understanding it and learning from it?



Sometimes in order to steer the future, you need to learn a little bit from the past.

Writing in POLITICO Magazine this weekend, Vanderbilt University professor Ganesh Sitaraman proposes that lawmakers wrangling with how to regulate young America's favorite Chinese-owned app, TikTok, reflect on some pre-World War II American history.

"[D]ebates over foreign ownership of the means of communication is part of an important history and tradition in American law," Sitaraman writes, arguing that lawmakers should take a platform-utilities approach to TikTok that would ensure American influence over its governance.

If lawmakers want to take a lesson from the long American tradition of regulated capitalism, they should advance comprehensive legislation to regulate tech platforms more like public utilities, Sitaraman writes. Such legislation should include restrictions on foreign ownership and control, which could apply to all tech platforms from adversarial countries. Comprehensive legislation should also include sectoral standards that apply to U.S. firms as well, standards not just on data collection, surveillance and privacy, but also against anti-competitive behavior, all tech policy topics that have relevance far beyond just TikTok itself. Derek Robertson

The Apple Vision Pro headset is displayed in a showroom on the Apple campus Monday, June 5, 2023, in Cupertino, Calif. (AP Photo/Jeff Chiu)

If Apple's Reality Pro headset does turn out to be the future-defining device that finally gets virtual and augmented reality into American homes, we might not begin to see the effects for a long while.

That's what tech analyst Benedict Evans predicts in a new essay, comparing it to the iPhone and writing: "We know today that the iPhone worked, but Apple still had to change the business model, expand distribution and build a lot more product. Sales didn't really take off for five years and the launch was pretty soft. [I]t seems unlikely that this will be as big as the iPhone in the next few years, and more likely even then that it will look more like the iPad, which is a pretty good business."

Indeed, he writes that maybe the most revealing thing about the Reality Pro launch so far is in what it says about just how much raw capital Apple has to expend on experimenting with such devices. He notes that Apple had $280 billion in free cash flow over just the past three years to play with, helping to power the silicon and manufacturing mastery that made the Reality Pro possible and which will pose a formidable challenge to the likes of Meta in their new competition. Derek Robertson

Stay in touch with the whole team: Ben Schreckinger ([emailprotected]); Derek Robertson ([emailprotected]); Mohar Chatterjee ([emailprotected]); and Steve Heuser ([emailprotected]). Follow us @DigitalFuture on Twitter.

If youve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.


Originally posted here:

AI meets the other AI - POLITICO - POLITICO

Posted in Superintelligence | Comments Off on AI meets the other AI – POLITICO – POLITICO
