{"id":1120589,"date":"2024-01-02T05:47:58","date_gmt":"2024-01-02T10:47:58","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/what-is-artificial-intelligence-ai-council-on-foreign-relations\/"},"modified":"2024-01-02T05:47:58","modified_gmt":"2024-01-02T10:47:58","slug":"what-is-artificial-intelligence-ai-council-on-foreign-relations","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/what-is-artificial-intelligence-ai-council-on-foreign-relations\/","title":{"rendered":"What Is Artificial Intelligence (AI)? &#8211; Council on Foreign Relations"},"content":{"rendered":"<p>Introduction<\/p>\n<p>Artificial intelligence (AI) has been around for decades, but new advancements have brought the technology to the fore. Experts say its rise could mirror previous technological revolutions, adding billions of dollars' worth of productivity to the global economy while introducing a slew of new risks that could upend the global geopolitical order and the nature of society itself.<\/p>\n<p>Managing these risks will be essential, and a global debate over AI governance is raging as major powers such as the United States, China, and the European Union (EU) take increasingly divergent approaches toward regulating the technology. Meanwhile, AI's development and deployment continue to proceed at an exponential rate.<\/p>\n<p>While there is no single definition, artificial intelligence generally refers to the ability of computers to perform tasks traditionally associated with human capabilities. 
The term's origins trace back to the 1950s, when Stanford University computer scientist John McCarthy used the term \"artificial intelligence\" to describe \"the science and engineering of making intelligent machines.\" For McCarthy, the standard for intelligence was the ability to solve problems in a constantly changing environment.<\/p>\n<p>Since 2022, the public availability of so-called generative AI tools, such as the chatbot ChatGPT, has raised the technology's profile. Generative AI models draw from massive amounts of training data to generate statistically probable outcomes in response to specific prompts. Tools powered by such models generate humanlike text, images, audio, and other content.<\/p>\n<p>Another commonly referenced form of AI, known as artificial general intelligence (AGI), or strong AI, refers to systems that would learn and apply knowledge much as humans do. However, these systems do not yet exist, and experts disagree on what exactly they would entail.<\/p>\n<p>Researchers have been studying AI for eighty years, with mathematicians Alan Turing and John von Neumann considered to be among the discipline's founding fathers. In the decades since they taught rudimentary computers binary code, software companies have used AI to power tools such as chess-playing computers and online language translators.<\/p>\n<p>In the countries that invest the most in AI, development has historically relied on public funding. 
In China, AI research is predominantly funded by the government, while the United States for decades drew on research by the Defense Advanced Research Projects Agency (DARPA) and other federal agencies. In recent years, U.S. AI development has largely shifted to the private sector, which has poured hundreds of billions of dollars into the effort.<\/p>\n<p>In 2022, U.S. President Joe Biden signed the CHIPS and Science Act, which refocuses U.S. government spending on technology research and development. The legislation directs $280 billion in federal spending toward semiconductors, the advanced hardware capable of supporting the massive processing and data-storage capabilities that AI requires. In January 2023, ChatGPT became the fastest-growing consumer application of all time.<\/p>\n<p>The arrival of AI marks \"a Big Bang moment, the beginning of a world-changing technological revolution that will remake politics, economies, and societies,\" Eurasia Group President Ian Bremmer and Inflection AI CEO Mustafa Suleyman write for Foreign Affairs.<\/p>\n<p>Companies and organizations across the world are already incorporating AI tools into their offerings. Driverless-car manufacturers such as Tesla have been using AI for years, as have investment banks that rely on algorithmic models to conduct some trading operations, and technology companies that use algorithms to deliver targeted advertising. But after the arrival of ChatGPT, even businesses that are less technology-oriented began turning to generative AI tools to automate systems such as those for customer service. 
One-third of firms around the world surveyed by the consultancy McKinsey in April 2023 reported using AI in some capacity.<\/p>\n<p>Widespread adoption of AI could speed up technological innovation across the board. Already, the semiconductor industry has boomed; Nvidia, the U.S.-based company that makes the majority of all AI chips, saw its stock more than triple in 2023, reaching a total valuation of more than $1 trillion, amid skyrocketing global demand for semiconductors.<\/p>\n<p>Many experts foresee a massive boon to the global economy as the AI industry grows, with global gross domestic product (GDP) predicted to increase by an additional $7 trillion annually within the next decade. \"Economies that refuse to adopt AI are going to be left behind,\" CFR expert Sebastian Mallaby said on an episode of the Why It Matters podcast. \"Everything from strategies to contain climate change, to medical challenges, to making something like nuclear fusion work, almost any cognitive challenge you can think of is going to become more soluble thanks to artificial intelligence.\"<\/p>\n<p>Like many other large-scale technological changes in history, AI could breed a trade-off between increased productivity and job loss. But unlike previous breakthroughs, which predominantly eliminated lower-skill jobs, generative AI could put white-collar jobs at risk, and perhaps supplant jobs across many industries more quickly than ever before. One-quarter of jobs around the world are at high risk of being replaced by AI automation, according to the Organization for Economic Cooperation and Development (OECD). These jobs tend to rely on tasks that generative AI could perform at a similar level of quality as a human worker, such as information gathering and data analysis, a Pew Research Center study found. 
Workers with high exposure to replacement by AI include accountants, web developers, marketing professionals, and technical writers.<\/p>\n<p>The rise of generative AI has also raised concerns over inequality, as the most high-skilled jobs appear to be the safest from disruptions related to the technology, according to the OECD. But other analysis suggests that low-skilled workers could benefit by drawing on AI tools to boost productivity: a 2023 study by researchers at the Massachusetts Institute of Technology (MIT) and Stanford University found that less-experienced call center operators doubled the productivity gains of their more-experienced colleagues after both groups began using AI.<\/p>\n<p>AI's relationship with the environment heralds both peril and promise. While some experts argue that generative AI could catalyze breakthroughs in the fight against climate change, others have raised alarms about the technology's massive carbon footprint. Its enormous processing power requires energy-intensive data centers; these systems already produce greenhouse gas emissions equivalent to those from the aviation industry, and AI's energy consumption is only expected to rise with future advancements.<\/p>\n<p>AI advocates contend that developers can use renewable energy to mitigate some of these emissions. Tech firms including Apple, Google, and Meta run their data centers using self-produced renewable energy, and they also buy so-called carbon credits to offset emissions from any energy use that relies on fossil fuels.<\/p>\n<p>There are also hopes that AI can help reduce emissions in other industries by enhancing research on renewables and using advanced data analysis to optimize energy efficiency. In addition, AI can improve climate adaptation measures. 
Scientists in Mozambique, for example, are using the technology to better predict flooding patterns, bolstering early warning systems for impending disasters.<\/p>\n<p>Many experts have framed AI development as a struggle for technological primacy between the United States and China. The winner of that competition, they say, will gain both economic and geopolitical advantage. So far, U.S. policymakers seem to have operated with this framework in mind. In 2022, Biden banned exports of the most powerful semiconductors to China and encouraged U.S. allies to do the same, citing national security concerns. One year later, Biden proposed an outright ban on several streams of U.S. investment into China's AI sector, and the Department of Commerce announced a raft of new restrictions aimed at curbing Chinese breakthroughs in artificial intelligence. Most experts believe the United States has outpaced China in AI development to date, but that China will quickly close the gap.<\/p>\n<p>AI could also have a more direct impact on U.S. national security: the Department of Defense expects the technology to transform the very character of war by empowering autonomous weapons and improving strategic analysis. (Some experts have pushed for a ban on lethal autonomous weapons.) In Ukraine's war against Russia, Kyiv is deploying autonomously operated, AI-powered drones, marking the first time a major conflict has involved such technology. Warring parties could also soon rely on AI systems to accelerate battlefield decisions or to automatically attack enemy infrastructure. Some experts fear these capabilities could raise the possibility of nuclear weapons use. 
<\/p>\n<p>Furthermore, AI could heighten the twin threats of disinformation and propaganda, issues that are gaining particular relevance as the world approaches a year in which more people are set to vote than ever before: more than seventy countries, representing half the global population, will hold national elections in 2024. Generative AI tools are making deepfakes easier to create, and the technology is already appearing in electoral campaigns across the globe. Experts also cite the possibility that bad actors could use AI to create sophisticated phishing attempts that are tailored to a target's interests to gain access to election systems. (Historically, phishing has been a way into these systems for would-be election hackers; Russia used the method to interfere in the 2016 U.S. election, according to the Department of Justice.)<\/p>\n<p>Together, these risks could lead to \"a nihilism about the existence of objective truth\" that threatens democracy, said Jessica Brandt, policy director for the Brookings Institution's Artificial Intelligence and Emerging Technology Initiative, on the podcast The President's Inbox.<\/p>\n<p>Some experts say that it's not yet accurate to call AI \"intelligent,\" as it doesn't involve human-level reasoning. They argue that it doesn't create new knowledge, but instead aggregates existing information and presents it in a digestible way.<\/p>\n<p>But that could change. OpenAI, the company behind ChatGPT, was founded as a nonprofit dedicated to ensuring that AGI benefits humanity as a whole, and its cofounder, Sam Altman, has argued that it is not possible or desirable to stop the development of AGI; in 2023, Google DeepMind CEO Demis Hassabis said AGI could arrive within five years. Some experts, including CFR Senior Fellow Sebastian Mallaby, contend that AI has already surpassed human-level intelligence on some tasks. 
In 2020, DeepMind used AI to solve protein folding, widely considered until then to be one of the most complex, unresolved biological mysteries.<\/p>\n<p>Could AI pose an existential threat to humanity? Many AI experts seem to think so. In May 2023, hundreds of AI leaders, including the CEOs of Anthropic, Google DeepMind, and OpenAI, signed a one-sentence letter that read, \"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.\"<\/p>\n<p>One popular theory for how extinction could happen posits that a directive to optimize a certain task could lead a superintelligent AI to accomplish its goal by diverting resources away from something humans need to live. For example, an AI tasked with reducing the amount of harmful algae in the oceans could suck oxygen out of the atmosphere, leading humans to asphyxiate. While many AI researchers see this theory as alarmist, others say the example accurately illustrates the risk that powerful AI systems could cause vast, unintentional harm in the course of carrying out their directives.<\/p>\n<p>Skeptics of this debate argue that focusing on such far-off existential risks obfuscates more immediate threats, such as authoritarian surveillance or biased data sets. Governments and companies around the world are expanding facial-recognition technology, and some analysts worry that Beijing in particular is using AI to supercharge repression. Another risk arises when AI training data contains elements that are over- or underrepresented; tools trained on such data can produce skewed outcomes. This can exacerbate discrimination against marginalized groups, such as when AI-powered tenant-screening algorithms trained on biased data disproportionately deny housing to people of color. 
Generative AI tools can also sow chaos in public discourse, \"hallucinating\" false information that chatbots present as true or polluting search engines with dubious AI-generated results.<\/p>\n<p>Almost all policymakers, civil society leaders, academics, independent experts, and industry leaders agree that AI should be governed, but they are not on the same page about how. Internationally, governments are taking different approaches.<\/p>\n<p>The United States escalated its focus on governing AI in 2023. The Biden administration followed up its 2022 AI Bill of Rights by announcing a pledge from fifteen leading technology companies to voluntarily adopt shared standards for AI safety, including by offering their frontier models for government review. In October 2023, Biden issued an expansive executive order aimed at producing a unified framework for safe AI use across the executive branch. And one month later, a bipartisan group of senators proposed legislation to govern the technology.<\/p>\n<p>EU lawmakers are moving ahead with legislation that will introduce transparency requirements and restrict AI use for surveillance purposes. However, some EU leaders have expressed concerns that the law could hinder European innovation, raising questions of how it will be enforced. Meanwhile, in China, the ruling Chinese Communist Party has rolled out regulations that include antidiscrimination requirements as well as a mandate that AI reflect \"core socialist values.\"<\/p>\n<p>Some governments have sought to collaborate on regulating AI at the international level. At the Group of Seven (G7) summit in May 2023, the bloc launched the so-called Hiroshima Process to develop a common standard on AI governance. In October 2023, the United Nations formed an AI Advisory Board, which includes both U.S. and Chinese representatives, to coordinate global AI governance. 
The following month, twenty-eight governments attended the first-ever AI Safety Summit, held in the United Kingdom. Delegates, including envoys from the United States and China, signed a joint declaration warning of AI's potential to cause catastrophic harm and resolving to work together to ensure \"human-centric, trustworthy and responsible\" AI. China has also announced its own AI global governance effort for countries in its Belt and Road Initiative.<\/p>\n<p>AI's complexity makes it unlikely that the technology could be governed by any one set of principles, CFR Senior Fellow Kat Duffy says. Proposals run the gamut of policy options, with many levels of potential oversight, from total self-regulation to various types of public-policy guardrails.<\/p>\n<p>Some analysts acknowledge that AI's risks have destabilizing consequences but argue that the technology's development should proceed. They say that regulators should place limits on \"compute,\" or computing power, which has increased five billion times over the past decade, allowing models to incorporate more of their training data in response to human prompts. Others say governance should focus on immediate concerns such as improving the public's AI literacy and creating ethical AI systems that would include protections against discrimination, misinformation, and surveillance.<\/p>\n<p>Some experts have called for limits on open-source models, which can increase access to the technology, including for bad actors. Many national security experts and leading AI companies are in favor of such rules. However, some observers warn that extensive restrictions could reduce competition and innovation by allowing the largest AI companies to entrench their power within a costly industry. 
There are also proposals for a global framework for governing AI's military uses; one such approach would be modeled after the International Atomic Energy Agency, which governs nuclear technology.<\/p>\n<p>The U.S.-China relationship looms large over AI governance: as Beijing pursues a national strategy aimed at making China the global leader in AI theories, technologies, and applications by 2030, policymakers in Washington are struggling with how to place guardrails around AI development without undermining the United States' technological edge.<\/p>\n<p>Meanwhile, AI technology is rapidly advancing. Computing power has doubled every 3.4 months since 2012, and AI scientists expect models to contain one hundred times more compute by 2025.<\/p>\n<p>In the absence of robust global governance, companies that control AI development are now exercising power typically reserved for nation-states, ushering in a \"technopolar\" world order, Bremmer and Suleyman write. They argue that these companies have become geopolitical actors in their own right, and thus they need to be involved in the design of any global rules.<\/p>\n<p>AI's transformative potential means the stakes are high. \"We have a chance to fix huge problems,\" Mallaby says. With proper safeguards in place, he says, AI systems can catalyze scientific discoveries that cure deadly diseases, ward off the worst effects of climate change, and inaugurate an era of global economic prosperity. \"I'm realistic that there are significant risks, but I'm hopeful that smart people of goodwill can help to manage them.\"<\/p>\n<p>Go here to see the original: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.cfr.org\/backgrounder\/what-artificial-intelligence-ai\" title=\"What Is Artificial Intelligence (AI)? - Council on Foreign Relations\">What Is Artificial Intelligence (AI)? 
- Council on Foreign Relations<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Introduction Artificial intelligence (AI) has been around for decades, but new advancements have brought the technology to the fore. Experts say its rise could mirror previous technological revolutions, adding billions of dollars worth of productivity to the global economy while introducing a slew of new risks that could upend the global geopolitical order and the nature of society itself <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/what-is-artificial-intelligence-ai-council-on-foreign-relations\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1214666],"tags":[],"class_list":["post-1120589","post","type-post","status-publish","format-standard","hentry","category-artificial-general-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1120589"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1120589"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1120589\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/med
ia?parent=1120589"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1120589"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1120589"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}