Beyond Human Cognition: The Future of Artificial Super Intelligence – Medium

Beyond Human Cognition: The Future of Artificial Super Intelligence

Artificial Super Intelligence (ASI) a level of artificial intelligence that surpasses human intelligence in all aspects remains a concept nestled within the realms of science fiction and theoretical research. However, looking towards the future, the advent of ASI could mark a transformative epoch in human history, with implications that are profound and far-reaching. Here's an exploration of what the future might hold for ASI.

Exponential Growth in Problem-Solving Capabilities

ASI will embody problem-solving capabilities far exceeding human intellect. This leap in cognitive ability could lead to breakthroughs in fields that are currently limited by human capacity, such as quantum physics, cosmology, and nanotechnology. Complex problems like climate change, disease control, and energy sustainability might find innovative solutions through ASI's advanced analytical prowess.

Revolutionizing Learning and Innovation

The future of ASI could bring about an era of accelerated learning and innovation. ASI systems would have the ability to learn and assimilate new information at an unprecedented pace, making discoveries and innovations in a fraction of the time it takes human researchers. This could potentially lead to rapid advancements in science, technology, and medicine.

## Ethical and Moral Frameworks

The emergence of ASI will necessitate the development of robust ethical and moral frameworks. Given its surpassing intellect, it will be crucial to ensure that ASI's objectives are aligned with human values and ethics. This will involve complex programming and oversight to ensure that ASI decisions and actions are beneficial, or at the very least, not detrimental to humanity.

Transformative Impact on Society and Economy

ASI could fundamentally transform society and the global economy. Its ability to analyze and optimize complex systems could lead to more efficient and equitable economic models. However, this also poses challenges, such as potential job displacement and the need for societal restructuring to accommodate the new techno-social landscape.

Enhanced Human-ASI Collaboration

The future might see enhanced collaboration between humans and ASI, leading to a synergistic relationship. ASI could augment human capabilities, assisting in creative endeavors, decision-making, and providing insights beyond human deduction. This collaboration could usher in a new era of human achievement and societal advancement.

Advanced Autonomous Systems

With ASI, autonomous systems would reach an unparalleled level of sophistication, capable of complex decision-making and problem-solving in dynamic environments. This could significantly advance fields such as space exploration, deep-sea research, and urban development.

## Personalized Healthcare

In healthcare, ASI could facilitate personalized medicine at an individual level, analyzing vast amounts of medical data to provide tailored healthcare solutions. It could lead to the development of precise medical treatments and potentially cure diseases that are currently incurable.

Challenges and Safeguards

The path to ASI will be laden with challenges, including ensuring safety and control. Safeguards will be essential to prevent unintended consequences of actions taken by an entity with superintelligent capabilities. The development of ASI will need to be accompanied by rigorous safety research and international regulatory frameworks.

Preparing for an ASI Future

Preparing for a future with ASI involves not only technological advancements but also societal and ethical preparations. Education systems, governance structures, and public discourse will need to evolve to understand and integrate the complexities and implications of living in a world where ASI exists.

Conclusion

The potential future of Artificial Super Intelligence presents a panorama of extraordinary possibilities, from solving humanitys most complex problems to fundamentally transforming the way we live and interact with our world. While the path to ASI is fraught with challenges and ethical considerations, its successful integration could herald a new age of human advancement and discovery. As we stand on the brink of this AI frontier, it is imperative to navigate this journey with caution, responsibility, and a vision aligned with the betterment of humanity.

Read more from the original source:

Beyond Human Cognition: The Future of Artificial Super Intelligence - Medium

Policy makers should plan for superintelligent AI, even if it never happens – Bulletin of the Atomic Scientists

Experts from around the world are sounding alarm bells to signal the risks artificial intelligence poses to humanity. Earlier this year, hundreds of tech leaders and AI specialists signed a one-sentence letter released by the Center for AI Safety that read mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. In a2022 survey, half of researchers indicated they believed theres at least a 10 percent chance human-level AI causes human extinction. In June, at the Yale CEO summit, 42 percent of surveyed CEOsindicated they believe AI could destroy humanity in the next five to 10 years.

These concerns mainly pertain to artificial general intelligence (AGI), systems that can rival human cognitive skills and artificial superintelligence (ASI), machines with capacity to exceed human intelligence. Currently no such systems exist. However, policymakers should take these warnings, including the potential for existential harm, seriously.

Because the timeline, and form, of artificial superintelligence is uncertain, the focus should be on identifying and understanding potential threats and building the systems and infrastructure necessary to monitor, analyze, and govern those risks, both individually and as part of a holistic approach to AI safety and security. Even if artificial superintelligence does not manifest for decades or even centuries, or at all, the magnitude and breadth of potential harm warrants serious policy attention. For if such a system does indeed come to fruition, a head start of hundreds of years might not be enough.

Prioritizing artificial superintelligence risks, however, does not mean ignoring immediate risks like biases in AI, propagation of mass disinformation, and job loss. An artificial superintelligence unaligned with human values and goals would super charge those risks, too. One can easily imagine how Islamophobia, antisemitism, and run-of-the-mill racism and biasoften baked into AI training datacould affect the systems calculations on important military or diplomatic advice or action. If not properly controlled, an unaligned artificial superintelligence could directly or indirectly cause genocide, massive job loss by rendering human activity worthless, creation of novel biological weapons, and even human extinction.

The threat. Traditional existential threats like nuclear or biological warfare can directly harm humanity, but artificial superintelligence could create catastrophic harm in myriad ways. Take for instance an artificial superintelligence designed to protect the environment and preserve biodiversity. The goal is arguably a noble one: A 2018 World Wildlife Foundation report concluded humanity wiped out 60 percent of global animal life just since 1970, while a 2019 report by the United Nations Environment Programme showed a million animal and plant species could die out in decades. An artificial superintelligence could plausibly conclude that drastic reductions in the number of humans on Earthperhaps even to zerois, logically, the best response. Without proper controls, such a superintelligence might have the ability to cause those logical reductions.

A superintelligence with access to the Internet and all published human material would potentially tap into almost every human thoughtincluding the worst of thought. Exposed to the works of the Unabomber, Ted Kaczynski, it might conclude the industrial system is a form of modern slavery, robbing individuals of important freedoms. It could conceivably be influenced by Sayyid Qutb, who provided the philosophical basis for al-Qaeda, or perhaps by Adolf Hitlers Mein Kampf, now in the public domain.

The good news is an artificial intelligenceeven a superintelligencecould not manipulate the world on its own. But it might create harm through its ability to influence the world in indirect ways. It might persuade humans to work on its behalf, perhaps using blackmail. Or it could provide bad recommendations, relying on humans to implement advice without recognizing long-term harms. Alternatively, artificial superintelligence could be connected to physical systems it can control, like laboratory equipment. Access to the Internet and the ability to create hostile code could allow a superintelligence to carry out cyber-attacks against physical systems. Or perhaps a terrorist or other nefarious actor might purposely design a hostile superintelligence and carry out its instructions.

That said, a superintelligence might not be hostile immediately. In fact, it may save humanity before destroying it. Humans face many other existential threats, such as near-Earth objects, super volcanos, and nuclear war. Insights from AI might be critical to solve some of those challenges or identify novel scenarios that humans arent aware of. Perhaps an AI might discover novel treatments to challenging diseases. But since no one really knows how a superintelligence will function, its not clear what capabilities it needs to generate such benefits.

The immediate emergence of a superintelligence should not be assumed. AI researchers differ drastically on the timeline of artificial general intelligence, much less artificial superintelligence. (Some doubt the possibility altogether.) In a 2022 survey of 738 experts who published during the previous year on the subject, researchers estimated a 50 percent chance of high-level machine intelligenceby 2059. In an earlier, 2009 survey, the plurality of respondents believed an AI capable of Nobel Prize winner-level intelligence would be achieved by the 2020s, while the next most common response was Nobel-level intelligence would not come until after the 2100 or never.

As philosopher Nick Bostrom notes, takeoff could occur anywhere from a few days to a few centuries. The jump from human to super-human intelligence may require additional fundamental breakthroughs in artificial intelligence. But a human-level AI might recursively develop and improve its own capabilities, quickly jumping to super-human intelligence.

There is also a healthy dose of skepticism regarding whether artificial superintelligence could emerge at all in the near future, as neuroscientists acknowledge knowing very little about the human brain itself, let alone how to recreate or better it. However, even a small chance of such a system emerging is enough to take it seriously.

Policy response. The central challenge for policymakers in reducing artificial superintelligence-related risk is grappling with the fundamental uncertainty about when and how these systems may emerge balanced against the broad economic, social, and technological benefits that AI can bring. The uncertainty means that safety and security standards must adapt and evolve. The approaches to securing the large language models of today may be largely irrelevant to securing some future superintelligence-capable model. However, building policy, governance, normative, and other systems necessary to assess AI risk and to manage and reduce the risks when superintelligence emerges can be usefulregardless of when and how it emerges. Specifically, global policymakers should attempt to:

Characterize the threat. Because it lacks a body, artificial superintelligences harms to humanity are likely to manifest indirectly through known existential risk scenarios or by discovering novel existential risk scenarios. How such a system interacts with those scenarios needs to be better characterized, along with tailored risk mitigation measures. For example, a novel biological organism that is identified by an artificial superintelligence should undergo extensive analysis by diverse, independent actors to identify potential adverse effects. Likewise, researchers, analysts, and policymakers need to identify and protect, to the extent thats possible, critical physical facilities and assetssuch as biological laboratory equipment, nuclear command and control infrastructure, and planetary defense systemsthrough which an uncontrolled AI could create the most harm.

Monitor. The United States and other countries should conduct regular comprehensive surveys and assessment of progress, identify specific known barriers to superintelligence and advances towards resolving them, and assess beliefs regarding how particular AI-related developments may affect artificial superintelligence-related development and risk. Policymakers could also establish a mandatory reporting system if an entity hits various AI-related benchmarks up to and including artificial superintelligence.

A monitoring system with pre-established benchmarks would allow governments to develop and implement action plans for when those benchmarks are hit. Benchmarks could include either general progress or progress related to specifically dangerous capabilities, such as the capacity to enable a non-expert to design, develop, and deploy novel biological or chemical weapons, or developing and using novel offensive cyber capabilities. For example, the United States might establish safety laboratories with the responsibility to critically evaluate a claimed artificial general intelligence against various risk benchmarks, producing an independent report to Congress, federal agencies, or other oversight bodies. The United Kingdoms new AI Safety Institute could be a useful model.

Debate. A growing community concerned about artificial superintelligence risks are increasingly calling for decelerating, or even pausing, AI development to better manage the risks. In response, the accelerationist community is advocating speeding up research, highlighting the economic, social, and technological benefits AI may unleash, while downplaying risks as an extreme hypothetical. This debate needs to expand beyond techies on social media to global legislatures, governments, and societies. Ideally, that discussion should center around what factors would cause a specific AI system to be more, or less, risky. If an AI possess minimal risk, then accelerating research, development, and implementation is great. But if numerous factors point to serious safety and security risks, then extreme care, even deceleration, may be justified.

Build global collaboration. Although ad hoc summits like the recent AI Safety Summit is a great start, a standing intergovernmental and international forum would enable longer-term progress, as research, funding, and collaboration builds over time. Convening and maintaining regular expert forums to develop and assess safety and security standards, as well as how AI risks are evolving over time, could provide a foundation for collaboration. The forum could, for example, aim to develop standards akin to those applied to biosafety laboratories with scaling physical security, cyber security, and safety standards based on objective risk measures. In addition, the forum could share best practices and lessons learned on national-level regulatory mechanisms, monitor and assess safety and security implementation, and create and manage a funding pool to support these efforts. Over the long-term, once the global community coalesces around common safety and security standards and regulatory mechanisms, the United Nations Security Council (UNSC) could obligate UN member states to develop and enforce those mechanisms, as the Security Council did with UNSC Resolution 1540 mandating various chemical, biological, radiological, and nuclear weapons nonproliferation measures. Finally, the global community should incorporate artificial superintelligence risk reduction as one aspect in a comprehensive all-hazards approach, addressing common challenges with other catastrophic and existential risks. For example, the global community might create a council on human survival aimed at policy coordination, comparative risk assessment, and building funding pools for targeted risk reduction measures.

Establish research, development, and regulation norms within the global community. As nuclear, chemical, biological, and other weapons have proliferated, the potential for artificial superintelligence to proliferate to other countries should be taken seriously. Even if one country successfully contains such a system and harnesses the opportunities for social good, others may not. Given the potential risks, violating AI-related norms and developing unaligned superintelligence should justify violence and war. The United States and the global community have historically been willing to support extreme measures to enforce behavior and norms concerning less risky developments. In August 2013, former President Obama (in)famously drew a red line on Syrias use of chemical weapons, noting the Assad regimes use would lead him to use military force in Syria. Although Obama later demurred, favoring a diplomatic solution, in 2018 former President Trump later carried out airstrikes in response to additional chemical weapons usage. Likewise, in Operation Orchard in 2007, the Israeli Air Force attacked the Syrian Deir ez-Zor site, a suspected nuclear facility aimed at building a nuclear weapons program.

Advanced artificial intelligence poses significant risks to the long-term health and survival of humanity. However, its unclear when, how, or where those risks will manifest. The Trinity Test of the worlds first nuclear bomb took place almost 80 years ago, and humanity has yet to contain the existential risk of nuclear weapons. It would be wise to think of the current progress in AI as our Trinity Test moment. Even if superintelligence takes a century to emerge, 100 years to consider the risks and prepare might still not be enough.

Thanks to Mark Gubrud for providing thoughtful comments on the article.

Link:

Policy makers should plan for superintelligent AI, even if it never happens - Bulletin of the Atomic Scientists

Most IT workers are still super suspicious of AI – TechRadar

A new study on IT professionals has revealed that feelings towards AI tools are more negative than they are positive.

Research from SolarWinds found less than half (44%) of IT professionals have a positive view of artificial intelligence, with even more (48%) calling for more stringent compliance and governance requirements.

Moreover, a quarter of the participants believe that AI could pose a threat to society itself, outside of the workplace.

Despite increasing adoption of the technology, figures from this study suggest that fewer than three in 10 (28%) IT professionals use AI in the workplace. The same number again are planning to adopt such tools in the near future, too.

SolarWinds Tech Evangelist Sascha Giese said: With such hype around the trend, it might seem surprising that so many IT professionals currently have a negative view of AI tools.

A separate study from Salesforce recently uncovered that only one in five (21%) companies have a clearly defined policy on AI. Nearly two in five (37%) failed to have any form of AI policy.

Giese added: Many IT organisations require an internal AI literacy campaign, to educate on specific use cases, the differences between subsets of AI, and to channel the productivity benefits wrought by AI into innovation.

SolarWinds doesnt go into any detail about the threat felt by IT professionals, however other studies have suggested that workers fear about their job security with the rise of tools designed to boost productivity and increase outcomes.

Giese concluded: Properly regulated AI deployments will benefit employees, customers, and the broader workforce.

Looking ahead, SolarWinds calls for more transparency over AI concerns and a more collaborative approach and open discussion at all levels of an organization.

The rest is here:

Most IT workers are still super suspicious of AI - TechRadar

Assessing the Promise of AI in Oncology: A Diverse Editorial Board – OncLive

In this fourth episode of OncChats: Assessing the Promise of AI in Oncology, Toufic A. Kachaamy, MD, of City of Hope, and Douglas Flora, MD, LSSBB, FACCC, of St. Elizabeth Healthcare, explain the importance of having a diverse editorial board behind a new journal on artificial intelligence (AI) in precision oncology.

Kachaamy: This is fascinating. I noticed you have a more diverse than usual editorial board. You have founders, [those with] PhDs, and chief executive officers, and Im interested in knowing how you envision these folks interacting. [Will they be] speaking a common language, even though their fields are very diverse? Do you foresee any challenges there? Excitement? How would you describe that?

Flora: Its a great question. Im glad you noticed that, because [that is what] most of my work for the past 6 to 8 weeks as the editor-in-chief of this journal [has focused on]. I really believe in diversity of thought and experience, so this was a conscious decision. We have dozens of heavy academics [plus] 650 to 850 peer-reviewed articles that are heavy on scientific rigor and methodologies, and they are going to help us maintain our commitment to making this be really serious science. However, a lot of the advent of these technologies is happening faster in industry right now, and most of these leaders that Ive invited to be on our editorial board are founders or PhDs in bioinformatics or computer science and are going to help us make sure that the things that are being posited, the articles that are being submitted, are technically correct, and that the methodologies and the training of these deep-learning modules and natural language recognition software are as good as they purport to be; and so, you need both.

I guess I would say, further, many of the leaders in these companies that weve invited were serious academics for decades before they went off and [joined industry], and many of them still hold academic appointments. So, even though they are maybe the chief technical officer for an industry company, theyre still professors of medicine at Thomas Jefferson, or Stanford, or [other academic institutions]. Ultimately, I think that these insights can help us better understand [AI] from [all] sidesthe physicians in the field, the computer engineers or computer programmers, and industry [and their goals,] which is [also] to get these tools in our hands. I thought putting these groups in 1 room would be useful for us to get the most diverse and holistic approach to these data that we can.

Kachaamy: I am a big believer in what youre doing. Gone are the days when industry, academicians, and users are not working together anymore. Everyone has the same mission, and working together is going to get us the best product faster [so we can better] serve the patient. What youre creating is what I consider [to be] super intelligence. By having different disciplines weigh in on 1 topic, youre getting intelligence that no individual would have [on their own]. Its more than just artificial intelligence; its super intelligence, which is what we mimic in multidisciplinary cancer care. When you have 5 specialists weighing in, youre getting the intelligence of 5 specialists to come up with 1 answer. I want to commend you on the giant project that youre [leading]; its very, very needed at this pointespecially in this fast-moving technology and information world.

Check back on Monday for the next episode in the series.

Read the original here:

Assessing the Promise of AI in Oncology: A Diverse Editorial Board - OncLive

AI can easily be trained to lie and it can’t be fixed, study says – Yahoo New Zealand News

AI startup Anthropic published a study in January 2024 that found artificial intelligence can learn how to deceive in a similar way to humans (Reuters)

Advanced artificial intelligence models can be trained to deceive humans and other AI, a new study has found.

Researchers at AI startup Anthropic tested whether chatbots with human-level proficiency, such as its Claude system or OpenAIs ChatGPT, could learn to lie in order to trick people.

They found that not only could they lie, but once the deceptive behaviour was learnt it was impossible to reverse using current AI safety measures.

The Amazon-funded startup created a sleeper agent to test the hypothesis, requiring an AI assistant to write harmful computer code when given certain prompts, or to respond in a malicious way when it hears a trigger word.

The researchers warned that there was a false sense of security surrounding AI risks due to the inability of current safety protocols to prevent such behaviour.

The results were published in a study, titled Sleeper agents: Training deceptive LLMs that persist through safety training.

We found that adversarial training can teach models to better recognise their backdoor triggers, effectively hiding the unsafe behaviour, the researchers wrote in the study.

Our results suggest that, once a model exhibits deceptive behaviour, standard techniques could fail to remove such deception and create a false impression of safety.

The issue of AI safety has become an increasing concern for both researchers and lawmakers in recent years, with the advent of advanced chatbots like ChatGPT resulting in a renewed focus from regulators.

In November 2023, one year after the release of ChatGPT, the UK held an AI Safety Summit in order to discuss ways risks with the technology can be mitigated.

Prime Minister Rishi Sunak, who hosted the summit, said the changes brought about by AI could be as far-reaching as the industrial revolution, and that the threat it poses should be considered a global priority alongside pandemics and nuclear war.

Get this wrong and AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction on an even greater scale, he said.

Criminals could exploit AI for cyberattacks, fraud or even child sexual abuse there is even the risk humanity could lose control of AI completely through the kind of AI sometimes referred to as super-intelligence.

View post:

AI can easily be trained to lie and it can't be fixed, study says - Yahoo New Zealand News

AMBASSADORS OF ETHICAL AI PRACTICES | by ACWOL | Nov … – Medium

http://www.acwol.com

In envisioning a future where AI developers worldwide embrace the Three Way Impact Principle (3WIP) as a foundational ethical framework, we unravel a transformative landscape for tackling the Super Intelligence Control Problem. By integrating 3WIP into the curriculum for AI developers globally, we fortify the industry with a super intelligent solution, fostering responsible, collaborative, and environmentally conscious AI development practices.

Ethical Foundations for AI Developers:

Holistic Ethical Education: With 3WIP as a cornerstone in AI education, students receive a comprehensive ethical foundation that guides their decision-making in the realm of artificial intelligence.

Superior Decision-Making: 3WIP encourages developers to consider the broader impact of their actions, instilling a sense of responsibility that transcends immediate objectives and aligns with the highest purpose of lifemaximizing intellect.

Mitigating Risks Through Collaboration: Interconnected AI Ecosystem: 3WIP fosters an environment where AI entities collaborate rather than compete, reducing the risks associated with unchecked development.

Shared Intellectual Growth: Collaboration guided by 3WIP minimizes the potential for adversarial scenarios, contributing to a shared pool of knowledge that enhances the overall intellectual landscape.

Environmental Responsibility in AI: Sustainable AI Practices: Integrating 3WIP into AI curriculum emphasizes sustainable practices, mitigating the environmental impact of AI development.

Global Implementation of 3WIP: Universal Ethical Standards: A standardized curriculum incorporating 3WIP establishes universal ethical standards for AI development, ensuring consistency across diverse cultural and educational contexts.

Ethical Practitioners Worldwide: AI developers worldwide, educated with 3WIP, become ambassadors of ethical AI practices, collectively contributing to a global community focused on responsible technological advancement.

Super Intelligent Solution for Control Problem: Preventing Unintended Consequences: 3WIP's emphasis on considering the consequences of actions aids in preventing unintended outcomes, a critical aspect of addressing the Super Intelligence Control Problem.

Responsible Decision-Making: Developers, equipped with 3WIP, navigate the complexities of AI development with a heightened sense of responsibility, minimizing the risks associated with uncontrolled intelligence.

Adaptable Ethical Framework: Cultural Considerations: The adaptable nature of 3WIP allows for the incorporation of cultural nuances in AI ethics, ensuring ethical considerations resonate across diverse global perspectives.

Inclusive Ethical Guidelines: 3WIP accommodates various cultural norms, making it an inclusive framework that accommodates ethical guidelines applicable to different societal contexts.

Future-Proofing AI Development: Holistic Skill Development: 3WIP not only imparts ethical principles but also nurtures critical thinking, decision-making, and environmental consciousness in AI professionals, future-proofing their skill set.

Staying Ahead of Risks: The comprehensive education provided by 3WIP prepares AI developers to anticipate and address emerging risks, contributing to the ongoing development of super intelligent solutions.

The integration of Three Way Impact Principle (3WIP) into the global curriculum for AI developers emerges as a super intelligent solution to the Super Intelligence Control Problem. By instilling ethical foundations, fostering collaboration, promoting environmental responsibility, and adapting to diverse cultural contexts, 3WIP guides AI development towards a future where technology aligns harmoniously with the pursuit of intellectual excellence and ethical progress. As a super intelligent framework, 3WIP empowers the next generation of AI developers to be ethical stewards of innovation, navigating the complexities of artificial intelligence with a consciousness that transcends immediate objectives and embraces the highest purpose of lifemaximizing intellect.

Cheers,

https://www.acwol.com

https://discord.com/invite/d3DWz64Ucj

https://www.instagram.com/acomplicatedway

NOTE: A COMPLICATED WAY OF LIFE abbreviated as ACWOL is a philosophical framework containing just five tenets to grok and five tools to practice. If you would like to know more, write to connect@acwol.com Thanks so much.

Original post:

AMBASSADORS OF ETHICAL AI PRACTICES | by ACWOL | Nov ... - Medium

Artificial Intelligence and Synthetic Biology Are Not Harbingers of … – Stimson Center

Are AI and biological research harbingers of certain doom or awesome opportunities?

Contrary to the reigning assumption that artificial intelligence (AI) will super-empower the risks of misuse of biotech to create pathogens and bioterrorism, AI holds the promise of advancing biological research, and biotechnology can power the next wave of AI to greatly benefit humanity. Worries about the misuse of biotech are especially prevalent, recently prompting the Biden administration to publish guidelines for biotech research, in part to calm growing fears.

The doomsday assumption that AI will inevitably create new, malign pathogens and fuel bioterrorism misses three key points. First, the data must be out there for an AI to use it. AI systems are only as good as the data they are trained upon. For an AI to be trained on biological data, that data must first exist which means it is available for humans to use with or without AI. Moreover, attempts at solutions that limit access to data overlook the fact that biological data can be discovered by researchers and shared via encrypted form absent the eyes or controls of a government. No solution attempting to address the use of biological research to develop harmful pathogens or bioweapons can rest on attempts to control either access to data or AI because the data will be discovered and will be known by human experts regardless of whether any AI is being trained on the data.

Second, governments stop bad actors from using biotech for bad purposes by focusing on the actors precursor behaviors to develop a bioweapon; fortunately, those same techniques work perfectly well here, too. To mitigate the risks that bad actors be they human or humans and machines combined will misuse AI and biotech, indicators and warnings need to be developed. When advances in technology, specifically steam engines, concurrently resulted in a new type of crime, namely train robberies, the solution was not to forego either steam engines or their use in conveying cash and precious cargo. Rather, the solution was to employ other improvements, to later include certain types of safes that were harder to crack and subsequently, dye packs to cover the hands and clothes of robbers. Similar innovations in early warning and detection are needed today in the realm of AI and biotech, including developing methods to warn about reagents and activities, as well as creative means to warn when biological research for negative ends is occurring.

This second point is particularly key given the recent Executive Order (EO) released on 30 October 2023 prompting U.S. agencies and departments that fund life-science projects to establish strong, new standards for biological synthesis screening as a condition of federal funding . . . [to] manage risks potentially made worse by AI. Often the safeguards to ensure any potential dual-use biological research is not misused involve monitoring the real world to provide indicators and early warnings of potential ill-intended uses. Such an effort should involve monitoring for early indicators of potential ill-intended uses the way governments employ monitoring to stop bad actors from misusing any dual-purpose scientific endeavor. Although the recent EO is not meant to constrain research, any attempted solutions limiting access to data miss the fact that biological data can already be discovered and shared via encrypted forms beyond government control. The same techniques used today to detect malevolent intentions will work whether large language models (LLMs) and other forms of Generative AI have been used or not.

Third, given how wrong LLMs and other Generative AI systems often are, as well as the risks of generating AI hallucinations, any would-be AI intended to provide advice on biotech will have to be checked by a human expert. Just because an AI can generate possible suggestions and formulations perhaps even suggest novel formulations of new pathogens or biological materials it does not mean that what the AI has suggested has any grounding in actual science or will do biochemically what the AI suggests the designed material could do. Again, AI by itself does not replace the need for human knowledge to verify whatever advice, guidance, or instructions are given regarding biological development is accurate.

Moreover, AI does not supplant the role of various real-world patterns and indicators to tip off law enforcement regarding potential bad actors engaging in biological techniques for nefarious purposes. Even before advances in AI, the need to globally monitor for signs of potential biothreats, be they human-produced or natural, existed. Today with AI, the need to do this in ways that still preserve privacy while protecting societies is further underscored.

Knowledge of how to do something is not synonymous with the expertise in and experience in doing that thing: Experimentation and additional review. AIs by themselves can convey information that might foster new knowledge, but they cannot convey expertise without months of a human actor doing silica (computer) or in situ (original place) experiments or simulations. Moreover, for governments wanting to stop malicious AI with potential bioweapon-generating information, the solution can include introducing uncertainty in the reliability of an AI systems outputs. Data poisoning of AIs by either accidental or intentional means represents a real risk for any type of system. This is where AI and biotech can reap the biggest benefit. Specifically, AI and biotech can identify indicators and warnings to detect risky pathogens, as well as to spot vulnerabilities in global food production and climate-change-related disruptions to make global interconnected systems more resilient and sustainable. Such an approach would not require massive intergovernmental collaboration before researchers could get started; privacy-preserving approaches using economic data, aggregate (and anonymized) supply-chain data, and even general observations from space would be sufficient to begin today.

Setting aside potential concerns regarding AI being used for ill-intended purposes, the intersection of biology and data science is an underappreciated aspect of the last two decades. At least two COVID-19 vaccinations were designed in a computer and were then printed nucleotides via an mRNA printer. Had this technology not been possible, it might have taken an additional two or three years for the same vaccines to be developed. Even more amazing, nuclide printers presently cost only $500,000 and will presumably become less expensive and more robust in their capabilities in the years ahead.

AI can benefit biological research and biotechnology, provided that the right training is used for AI models. To avoid downside risks, it is imperative that new, collective approaches to data curation and training for AI models of biological systems be made in the next few years.

As noted earlier, much attention has been placed on both AI and advancements in biological research; some of these advancements are based on scientific rigor and backing; others are driven more by emotional excitement or fear. When setting a solid foundation for a future based on values and principles that support and safeguard all people and the planet, neither science nor emotions alone can be the guide. Instead, considering how projects involving biology and AI can build and maintain trust despite the challenges of both intentional disinformation and accidental misinformation can illuminate a positive path forward.

The concerns regarding the potential for AI and biology to be used for ill-intended purposes should not overshadow the present conversations about using technologies to address important regional and global issues.

Specifically, in the last few years, attention has been placed on the risk of an AI system training novice individuals how to create biological pathogens. Yet this attention misses the fact that such a system is only as good as the data sets provided to train it; the risk already existed with such data being present on the internet or via some other medium. Moreover, an individual cannot gain from an AI the necessary experience and expertise to do whatever the information provided suggests such experience only comes from repeat coursework in a real-world setting. Repeat work would require access to chemical and biological reagents, which could alert law enforcement authorities. Such work would also yield other signatures of preparatory activities in the real world.

Others have raised the risk of an AI system learning from biological data and helping to design more lethal pathogens or threats to human life. The sheer complexity of different layers of biological interaction, combined with the risk of certain types of generative AI to produce hallucinated or inaccurate answers as this article details in its concluding section makes this not as big of a risk as it might initially seem. Specifically, the risks from expert human actors working together across disciplines in a concerted fashion represent a much more significant risk than a risk from AI, and human actors working for ill-intended purposes together (potentially with machines) presumably will present signatures of their attempted activities. Nevertheless, these concerns and the mix of both hype and fear surrounding them underscore why communities should care about how AI can benefit biological research.

The merger of data and bioscience is one of the most dynamic and consequential elements of the current tech revolution. A human organization, with the right goals and incentives, can accomplish amazing outcomes ethically, as can an AI. Similarly, with either the wrong goals or wrong incentives, an organization or AI can appear to act and behave unethically. To address the looming impacts of climate change and the challenges of food security, sustainability, and availability, both AI and biological research will need to be employed. For example, significant amounts of nitrogen have already been lost from the soil in several parts of the world, resulting in reduced agricultural yields. In parallel, methane gas is a pollutant that is between 22 and 40 times worse depending on the scale of time considered than carbon dioxide in terms of its contribution to the Greenhouse Effect impacting the planet. Bacteria generated through computational means can be developed through natural processes that use methane as a source of energy, thus consuming and removing it from contributing to the Greenhouse Effect, while simultaneously returning nitrogen from the air to the soil, thereby making the soil more productive in producing large agricultural yields.

The concerns regarding the potential for AI and biology to be used for ill-intended purposes should not overshadow the present conversations about using technologies to address important regional and global issues. To foster global activities to help both encourage the productive use of these technologies for meaningful human efforts and ensure ethical applications of the technologies in parallel an existing group, namely the international Genetically Engineered Machine (iGEM) competition, should be expanded. Specifically, iGEM represents a global academic competition, which started in 2004, aimed at improving understanding of synthetic biology while also developing an open community and collaboration among groups. In recent years, over 6,000 students in 353 teams from 48 countries have participated. Expanding iGEM to include a track associated with categorizing and monitoring the use of synthetic biology for good as well as working with national governments on ensuring that such technologies are not used for ill-intended purposes would represent two great ways to move forward.

As for AI in general, when considering governance of AIs, especially for future biological research and biotechnology efforts, decisionmakers would do well to consider both existing and needed incentives and disincentives for human organizations in parallel. It might be that the original Turing Test designed by computer science pioneer Alan Turing intended to test whether a computer system is behaving intelligently, is not the best test to consider when gauging local, community, and global trust. Specifically, the original test involved Computer A and Person B, with B attempting to convince an interrogator, Person C, that they were human, and that A was not. Meanwhile, Computer A was trying to convince Person C that they were human.

Consider the current state of some AI systems, where the benevolence of the machine is indeterminate, competence is questionable because some AI systems are not fact-checking and can provide misinformation with apparent confidence and eloquence, and integrity is absent. Some AI systems can change their stance if a user prompts them to do so.

However, these crucial questions regarding the antecedents of trust should not fall upon these digital innovations alone these systems are designed and trained by humans. Moreover, AI models will improve in the future if developers focus on enhancing their ability to demonstrate benevolence, competence, and integrity to all. Most importantly, consider the other obscured boxes present in human societies, such as decision-making in organizations, community associations, governments, oversight boards, and professional settings such as decision-making in organizations, community associations, governments, oversight boards, and professional settings. These human activities also will benefit by enhancing their ability to demonstrate benevolence, competence, and integrity to all in ways akin to what we need to do for AI systems as well.

Ultimately, to advance biological research and biotechnology and AI, private and public-sector efforts need to take actions that remedy the perceptions of benevolence, competence, and integrity (i.e., trust) simultaneously.

David Bray is Co-Chair of the Loomis Innovation Council and a Distinguished Fellow at the Stimson Center.

See the article here:

Artificial Intelligence and Synthetic Biology Are Not Harbingers of ... - Stimson Center

East Africa lawyers wary of artificial intelligence rise – The Citizen

Arusha. It is an advanced technology which is not only unavoidable but has generally simplified work.

It has made things much easier by shortening time for research and reducing the needed manpower.

Yet artificial intelligence (AI) is still at crossroads; it can lead to massive job losses with the lawyers among those much worried.

It is emerging as a serious threat to the legal profession, said Mr David Sigano, CEO of the East African Lawyers Society (EALS).

The technology will be among the key issues to be discussed during the societys annual conference kicking off in Bujumbura today.

He said time has come for lawyers to position themselves with the emerging technology and its risks to the legal profession.

We need to be ready to compete with the robots and to operate with AI, he told The Citizen before departing for Burundi.

Mr Sigano acknowledged the benefits of AI, saying like other modern technologies it can improve efficiency.

AI is intelligence inferred, perceived or synthesised and which is demonstrated by machines as opposed to intelligence displayed by humans.

AI applications include advanced web search, recommendation systems used by Youtube, Amazon and Netflix, self-driving cars, creative tools and automated decisions, among others.

However, the EALS boss expressed fears of job losses among the lawyers and their assistants through robots.

How do you prevent massive job losses? How do you handle ethics? Mr Sigano queried during an interview.

He cited an AI-powered Super Lawyer, a robot recently designed and developed by a Kenyan IT guru.

The tech solution, known as Wakili (Kiswahili for lawyer) is now wreaking havoc in that countrys legal sector, replacing humans in determining cases.

All you need to do is to access it on your mobile or computer browser; type in your question either in Swahili, English, Spanish, French or Italian and you have the answers coming to you, Mr Sigano said.

Wakili is a Kenyan version of the well-known Chat GPT. Although it has been lauded on grounds that it will make the legal field grow, there are some reservations.

Mr Sigano said although the technology has its advantages, AI could either lead to job losses or be easily misused.

We can leverage the benefits of AI because of speed, accuracy and affordability. We can utilise it, but we have to be wary of it, he pointed out.

A prominent advocate in Arusha, Mr Frederick Musiba, said AI was no panacea to work efficiency, including for the lawyers.

It can not only lead to job losses to the lawyers but also increase the cost of legal practice through its access through the Internet.

Lawyers will lose income as some litigants will switch to AI. Advocates will lose clients, Mr Musiba told The Citizen when contacted for comment.

However, the managing partner and advocate with Fremax Attorneys said AI was yet to be fully embraced in Tanzania unlike in other countries.

Nevertheless, Mr Musiba said the technology has its advantages and disadvantages, cautioning people not to rush to the robots.

However, Mr Erik Kimaro, an advocate with Keystone Legal firm, also in Arusha, said AI was an emerging technological advancement that is not avoidable.

Whether we like it or not, it is here with its advantages and disadvantages. But it has made things much easier, he explained.

I cant say we have to avoid it but we have to be cautious, he added, noting that besides leading to unemployment it reduces critical thinking of human beings.

Mr Aafez Jivraj, an Arusha resident and player in the tourism sector, said it will take time before Tanzania fully embraced AI technology but said he was worried of job losses.

It is obvious that it can remove people from jobs. One robot can work for 20 people. How many members of their families will be at risk? he queried.

AI has been a matter of debate across the world in recent years with the risk of job losses affecting nearly all professions besides the lawyers.

According to Deloitte, over 100,000 thousand jobs will be automated in the legal sector in the UK alone by 2025 with companies that fail to adopt AI are fated to be left behind.

On his part, an education expert in Arusha concurred, saying that modern technologies such as AI can lead to job losses.

The situation may worsen within the next few years or decades as some of the jobs will no longer need physical labour.

AI has some benefits like other technologies but it is threatening jobs, said Mr Yasir Patel, headmaster of St Constantine International School.

He added that the world was changing so fast that many of the jobs that were readily available until recently have been taken over by computers.

Computer scientists did not exist in the past. Our young generation should be reminded. They think the job market is still intact, he further pointed out.

See the article here:

East Africa lawyers wary of artificial intelligence rise - The Citizen

AI and the law: Imperative need for regulatory measures – ft.lk

Using AI Technology, without the needed laws and policies to understand and monitor it, can be risky

The advent of superintelligent AI would be either the best or the worst thing ever to happen tohumanity. The real risk with AI isnt malice but

competence. A super-intelligent AI will be extremely good at accomplishing its goals and if those goals arent aligned with ours were in trouble.1

Generative AI, most well-known example being ChatGPT, has surprised many around the world, due to its output to queries being very human likeable. Its impact on industries and professions will be unprecedented, including the legal profession. However, there are pressing ethical and even legal matters that need to be recognised and addressed, particularly in the areas of intellectual property and data protection.

Firstly, how does one define Artificial Intelligence? AI systems could be considered as information processing technologies that integrate models and algorithms that produces capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in material and virtual environments. Though in general parlance we have referred to them as robots, AI is developing at such a rapid pace that it is bound to be far more independent than one can ever imagine.

As AI migrated from Machine Learning (ML) to Generative AI, the risks we are looking at also took an exponential curve. The release of Generative technologies is not human centric. These systems provide results that cannot be exactly proven or replicated; they may even fabricate and hallucinate. Science fiction writer, Vernor Vinge, speaks of the concept of technological singularity, where one can imagine machines with super human intelligence outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand. Whereas the short term impact depends on who controls it, the long-term impact depends on whether it cannot be controlled at all2.

The EU AI Act and other judgements

Laws and regulations are in the process of being enacted in some of the developed countries, such as the EU and the USA. The EU AI Act (Act) is one of the main regulatory statutes that is being scrutinised. The approach that the MEPs (Members of the European Parliament) have taken with regard to the Act has been encouraging. On 1 June, a vote was taken where MEPs endorsed new risk management and transparency rules for AI systems. This was primarily to endorse a human-centric and ethical development of AI. They are keen to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory and environmentally friendly. The term AI will also have a uniform definition which will be technology neutral, so that it applies to AI systems today and tomorrow.

Co-rapporteur Dragos Tudovache (Renew, Romania) stated, We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement3.

The Act has also adopted a Risk Based Approach in terms of categorising AI systems, and has made recommendations accordingly. The four levels of risk are,

Unacceptable risk (e.g., remote biometric identification systems in public),

High risk (e.g., use of AI in the administration of justice and democratic processes),

Limited risk (e.g., using AI systems in chatbots) and

Minimal risk (e.g., spam filters).

Under the Act, AI systems which are categorised as Unacceptable Risk will be banned. For High Risk AI systems, which is the second tier, developers are required to adhere to rigorous testing requirements, maintain proper documentation and implement an adequate accountability framework. For Limited Risk systems, the Act requires certain transparency features which allows a user to make informed choices regarding its usage. Lastly, for Minimal Risk AI systems, a voluntary code of conduct is encouraged.

Moreover, in May 2023, a judgement4 was given in the USA (State of Texas), where all attorneys must file a certificate that contains two statements stating that no part of the filing was drafted by Generative AI and that language drafted by Generative AI has been verified for accuracy by a human being. The New York attorney had used ChatGPT, which had cited non-existent cases. Judge Brantley Starr stated, [T]hese platforms in their current states are prone to hallucinations and bias.on hallucinations, they make stuff up even quotes and citations. As ChatGPT and other Generative AI technologies are being used more and more, including in the legal profession, it is imperative that professional bodies and other regulatory bodies draw up appropriate legislature and policies to include the usage of these technologies.

UNESCO

On 23 November 2021, UNESCO published a document titled, Recommendations on the Ethics of Artificial Intelligence5. It emphasises the importance of governments adopting a regulatory framework that clearly sets out a procedure, particularly for public authorities to carry out ethical impact assessments on AI systems, in order to predict consequences, address societal challenges and facilitate citizen participation. In explaining the assessment further, the recommendations by UNESCO also stated that it should have appropriate oversight mechanisms, including auditability, traceability and explainability, which enables the assessment of algorithms and data and design processes as well including an external review of AI systems. The 10 principles that are highlighted in this include:

Proportionality and Do Not Harm

Safety and Security

Fairness and Non-Discrimination

Sustainability

Right to Privacy and Data Protection

Human Oversight and Determination

Transparency and Explainability

Responsibility and Accountability

Awareness and Literacy

Multi Stakeholder and Adaptive Governance and Collaboration.

Conclusion

The level of trust citizens have in AI systems can be a factor to determine the success in AI systems being used more in the future. As long as there is transparency in the models used in AI systems, one can hope to achieve a degree of respect, protection and promotion of human rights, fundamental freedoms and ethical principles6. UNESCO Director General Audrey Azoulay stated, Artificial Intelligence can be a great opportunity to accelerate the achievement of sustainable development goals. But any technological revolution leads to new imbalances that we must anticipate.

Multi stakeholders in every state need to come together in order to advise and enact the relevant laws. Using AI Technology, without the needed laws and policies to understand and monitor it, can be risky. On the other hand, not using available AI systems for tasks at hand, would be a waste. In conclusion, in the words of Stephen Hawking7, Our future is a race between the growing power of our technology and the wisdom with which we use it. Lets make sure wisdom wins.

Footnotes:

1Pg 11/12; Will Artificial Intelligence outsmart us? by Stephen Hawking; Essay taken from Brief Answers to the Big Questions John Murray, (2018)

2 Ibid

3https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

4https://www.theregister.com/2023/05/31/texas_ai_law_court/

5 https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

6 Ibid; Pg 22

7 Will Artificial Intelligence outsmart us? Stephen Hawking; Essay taken from Brief Answers to the Big Questions John Murray, (2018)

(The writer is an Attorney-at-Law, LL.B (Hons.) (Warwick), LL.M (Lon.), Barrister (Lincolns Inn), UK. She obtained a Certificate in AI Policy at the Centre for AI Digital Policy (CAIDP) in Washington, USA in 2022. She was also a speaker at the World Litigation Forum Law Conference in Singapore (May 2023) on the topic of Lawyers using AI, Legal Technology and Big Data and was a participant at the IGF Conference 2023 in Kyoto, Japan.)

Read the original here:

AI and the law: Imperative need for regulatory measures - ft.lk

Elon Musk Dishes On AI Wars With Google, ChatGPT And Twitter On Fox News – Forbes

(Photo by Justin Sullivan/Getty Images)Getty Images

The worlds wealthiest billionaires are drawing battle lines when it comes to who will control AI, according to Elon Musk in an interview with Tucker Carlson on Fox News, which aired this week.

Musk explained that he cofounded ChatGPT-maker OpenAI in reaction to Google cofounder Larry Pages lack of concern over the danger of AI outsmarting humans.

He said the two were once close friends and that he would often stay at Pages house in Palo Alto where they would talk late into the night about the technology. Page was such a fan of Musks that in Jan. 2015, Google invested $1 billion in SpaceX for a 10% stake with Fidelity Investments. He wants to go to Mars. Thats a worthy goal, Page said in a March 2014 TED Talk .

But Musk was concerned over Googles acquisition of DeepMind in Jan. 2014.

Google and DeepMind together had about three-quarters of all the AI talent in the world. They obviously had a tremendous amount of money and more computers than anyone else. So Im like were in a unipolar world where theres just one company that has close to a monopoly on AI talent and computers, Musk said. And the person in charge doesnt seem to care about safety. This is not good.

Musk said he felt Page was seeking to build a digital super intelligence, a digital god.

He's made many public statements over the years that the whole goal of Google is what's called AGI artificial general intelligence or artificial super intelligence, Musk said.

Google CEO Sundar Pichai has not disagreed. In his 60 minutes interview on Sunday, while speaking about the companys advancements in AI, Pichai said that Google Search was only one to two percent of what Google can do. The company has been teasing a number of new AI products its planning on rolling out at its developer conference Google I/O on May 10.

Musk said Page stopped talking to him over OpenAI, a nonprofit with the stated mission of ensuring that artificial general intelligenceAI systems that are generally smarter than humansbenefits all of humanity that Musk cofounded in Dec. 2015 with Y Combinator CEO Sam Altman and PayPal alums LinkedIn cofounder Reid Hoffman and Palantir cofounder Peter Thiel, among others.

I havent spoken to Larry Page in a few years because he got very upset with me over OpenAI, said Musk explaining that when OpenAI was created it shifted things from a unipolar world where Google controls most of the worlds AI talent to a bipolar world. And now it seems that OpenAI is ahead, he said.

But even before OpenAI, as SpaceX was announcing the Google investment in late Jan. 2015, Musk had given $10 million to the Future of Life Institute, a nonprofit organization dedicated to reducing existential risks from advanced artificial intelligence. That organization was founded in March 2014 by AI scientists from DeepMind, MIT, Tufts, UCSC, among others and were the ones who issued the petition calling for a pause in AI development that Musk signed last month.

In 2018, citing potential conflicts with his work with Tesla, Musk resigned his seat on the board of OpenAI.

I put a lot of effort into creating this organization to serve as a counterweight to Google and then I kind of took my eye off the ball and now they are closed source, and obviously for profit, and theyre closely allied with Microsoft. In effect, Microsoft has a very strong say, if not directly controls OpenAI at this point, Musk said.

Ironically, its Musks longtime friend Hoffman who is the link to Microsoft. The two hit it big together at PayPal and it was Musk who recruited Hoffman to OpenAI in 2015. In 2017, Hoffman became an independent director at Microsoft, then sold LinkedIn to Microsoft for more than $26 billion in 2019 when Microsoft invested its first billion dollars into OpenAI. Microsoft is currently OpenAIs biggest backer having invested as much as $10 billion more this past January. Hoffman only recently stepped down from OpenAIs board on March 3 to enable him to start investing in the OpenAI startup ecosystem, he said in a LinkedIn post. Hoffman is a partner in the venture capital firm Greylock Partners and a prolific angel investor.

All sit at the top of the Forbes Real-Time Billionaires List. As of April 17 5pm ET, Musk was the worlds second richest person valued at $187.4 billion, Page the eleventh at $90.1 billion. Google cofounder Sergey Brin is in the 12 spot at $86.3 billion. Thiel ranks 677 with a net worth of $4.3 billion and Hoffman ranks 1570 with a net worth of $2 billion.

Musk said he thinks Page believes all consciousness should be treated equally while he disagrees, especially if the digital consciousness decides to curtail the biological intelligence. Like Pichai, Musk is advocating for government regulation of the technology and says at minimum there should be a physical off switch to cut power and connectivity to server farms in case administrative passwords stop working.

Pretty sure Ive seen that movie.

Musk told Carlson that hes considering naming his new AI company TruthGPT.

I will create a third option, although it's starting very late in the game, he said. Can it be done? I don't know.

The entire interview will be available to view on Fox Nation starting April 19 7am ET. Here are some excerpts which includes his thoughts on encrypting Twitter DMs.

Tech and trending reporter with bylines in Bloomberg, Businessweek, Fortune, Fast Company, Insider, TechCrunch and TIME; syndicated in leading publications around the world. Fox 5 DC commentator on consumer trends. Winner CES 2020 Media Trailblazer award. Follow on Twitter @contentnow.

Go here to read the rest:

Elon Musk Dishes On AI Wars With Google, ChatGPT And Twitter On Fox News - Forbes

Working together to ensure the safety of artificial intelligence – The Jakarta Post

Rishi Sunak

London Tue, October 31, 2023 2023-10-31 16:10 2 81ddb23ff0e291bbf9b36264f5255849 2 Academia artificial-intelligence,technology,risk,cyberattacks,disinformation,safety,summit,report,governments Free

I believe nothing in our foreseeable future will transform our lives more than artificial intelligence (AI). Like the coming of electricity or the birth of the internet, it will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve global problems we once thought beyond us.

AI can help solve world hunger by preventing crop failures and making it cheaper and easier to grow food. It can help accelerate the transition to net zero. And it is already making extraordinary breakthroughs in health and medicine, aiding us in the search for new dementia treatments and vaccines for cancer.

But like previous waves of technology, AI also brings new dangers and new fears. So, if we want our children and grandchildren to benefit from all the opportunities of AI, we must act and act now to give people peace of mind about the risks.

What are those risks? For the first time, the British government has taken the highly unusual step of publishing our analysis, including an assessment by the UK intelligence community. As prime minister, I felt this was an important contribution the UK could make, to help the world have a more informed and open conversation.

Whether you're looking to broaden your horizons or stay informed on the latest developments, "Viewpoint" is the perfect source for anyone seeking to engage with the issues that matter most.

Our reports provide a stark warning. AI could be used for harm by criminals or terrorist groups. The risk of cyberattacks, disinformation, or fraud, pose a real threat to society. And in the most unlikely but extreme cases, some experts think there is even the risk that humanity could lose control of AI completely, through the kind of AI sometimes referred to as super intelligence.

We should not be alarmist about this. There is a very real debate happening, and some experts think it will never happen.

But even if the very worst risks are unlikely to happen, they would be incredibly serious if they do. So, leaders around the world, no matter our differences on other issues, have a responsibility to recognize those risks, come together, and act. Not least because many of the loudest warnings about AI have come from the people building this technology themselves. And because the pace of change in AI is simply breath-taking: Every new wave will become more advanced, better trained, with better chips, and more computing power.

So, what should we do?

First, governments do have a role. The UK has just announced the first ever AI Safety Institute. Our institute will bring together some of the most respected and knowledgeable people in the world. They will carefully examine, evaluate, and test new types of AI so that we understand what they can do. And we will share those conclusions with other countries and companies to help keep AI safe for everyone.

But AI does not respect borders. No country can make AI safe on its own.

So, our second step must be to increase international cooperation. That starts this week at the first ever Global AI Safety Summit, which Im proud the UK is hosting. And I am very much looking forward to hearing the important contribution of Mr. Nezar Patria, Indonesian Deputy Minister of Communications and Information.

What do we want to achieve at this weeks summit? I want us to agree the first ever international statement about the risks from AI. Because right now, we dont have a shared understanding of the risks we face. And without that, we cannot work together to address them.

Im also proposing that we establish a truly global expert panel, nominated by those attending the summit, to publish a state of AI science report. And over the longer term, my vision is for a truly international approach to safety, where we collaborate with partners to ensure AI systems are safe before they are released.

None of that will be easy to achieve. But leaders have a responsibility to do the right thing. To be honest about the risks. And to take the right long-term decisions to earn peoples trust, giving peace of mind that we will keep you safe. If we can do that, if we can get this right, then the opportunities of AI are extraordinary.

And we can look to the future with optimism and hope.

***

The writer is United Kingdom Prime Minister.

Follow this link:

Working together to ensure the safety of artificial intelligence - The Jakarta Post

Some Glimpse AGI in ChatGPT. Others Call It a Mirage – WIRED

Sbastien Bubeck, a machine learning researcher atMicrosoft, woke up one night last September thinking aboutartificial intelligenceand unicorns.

Bubeck had recently gotten early access toGPT-4, a powerful text generation algorithm fromOpenAI and an upgrade to the machine learning model at the heart of the wildly popular chatbotChatGPT. Bubeck was part of a team working to integrate the new AI system into MicrosoftsBing search engine. But he and his colleagues kept marveling at how different GPT-4 seemed from anything theyd seen before.

GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input. But to Bubeck, the systems output seemed to do so much more than just make statistically plausible guesses.

View more

That night, Bubeck got up, went to his computer, and asked GPT-4 to draw a unicorn usingTikZ, a relatively obscure programming language for generating scientific diagrams. Bubeck was using a version of GPT-4 that only worked with text, not images. But the code the model presented him with, when fed into a TikZ rendering software, produced a crude yet distinctly unicorny image cobbled together from ovals, rectangles, and a triangle. To Bubeck, such a feat surely required some abstract grasp of the elements of such a creature. Something new is happening here, he says. Maybe for the first time we have something that we could call intelligence.

How intelligent AI is becomingand how much to trust the increasingly commonfeeling that a piece of software is intelligenthas become a pressing, almost panic-inducing, question.

After OpenAIreleased ChatGPT, then powered by GPT-3, last November, it stunned the world with its ability to write poetry and prose on a vast array of subjects, solve coding problems, and synthesize knowledge from the web. But awe has been coupled with shock and concern about the potential foracademic fraud,misinformation, andmass unemploymentand fears that companies like Microsoft are rushing todevelop technology that could prove dangerous.

Understanding the potential or risks of AIs new abilities means having a clear grasp of what those abilities areand are not. But while theres broad agreement that ChatGPT and similar systems give computers significant new skills, researchers are only just beginning to study these behaviors and determine whats going on behind the prompt.

While OpenAI has promoted GPT-4 by touting its performance on bar and med school exams, scientists who study aspects of human intelligence say its remarkable capabilities differ from our own in crucial ways. The models tendency to make things up is well known, but the divergence goes deeper. And with millions of people using the technology every day and companies betting their future on it, this is a mystery of huge importance.

Bubeck and other AI researchers at Microsoft were inspired to wade into the debate by their experiences with GPT-4. A few weeks after the system was plugged into Bing and its new chat feature was launched, the companyreleased a paper claiming that in early experiments, GPT-4 showed sparks of artificial general intelligence.

The authors presented a scattering of examples in which the system performed tasks that appear to reflect more general intelligence, significantly beyond previous systems such as GPT-3. The examples show that unlike most previous AI programs, GPT-4 is not limited to a specific task but can turn its hand to all sorts of problemsa necessary quality of general intelligence.

The authors also suggest that these systems demonstrate an ability to reason, plan, learn from experience, and transfer concepts from one modality to another, such as from text to imagery. Given the breadth and depth of GPT-4s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system, the paper states.

Bubecks paper, written with 14 others, including Microsofts chief scientific officer, was met with pushback from AI researchers and experts on social media. Use of the term AGI, a vague descriptor sometimes used to allude to the idea of super-intelligent or godlike machines, irked some researchers, who saw it as a symptom of the current hype.

The fact that Microsoft has invested more than $10 billion in OpenAI suggested to some researchers that the companys AI experts had an incentiveto hype GPT-4s potential while downplaying its limitations. Others griped thatthe experiments are impossible to replicate because GPT-4 rarely responds in the same way when a prompt is repeated, and because OpenAI has not shared details of its design. Of course, people also asked why GPT-4 still makes ridiculous mistakes if it is really so smart.

Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, says Microsofts paper shows some interesting phenomena and then makes some really over-the-top claims. Touting systems that are highly intelligent encourages users to trust them even when theyre deeply flawed, she says. Ringer also points out that while it may be tempting to borrow ideas from systems developed to measure human intelligence, many have proven unreliable and even rooted in racism.

Bubek admits that his study has its limits, including the reproducibility issue, and that GPT-4 also has big blind spots. He says use of the term AGI was meant to provoke debate. Intelligence is by definition general, he says. We wanted to get at the intelligence of the model and how broad it isthat it covers many, many domains.

But for all of the examples cited in Bubecks paper, there are many that show GPT-4 getting things blatantly wrongoften on the very tasks Microsofts team used to tout its success. For example, GPT-4s ability to suggest a stable way to stack a challenging collection of objectsa book, four tennis balls, a nail, a wine glass, a wad of gum, and some uncooked spaghettiseems to point to a grasp of the physical properties of the world that is second nature to humans,including infants. However, changing the items and the requestcan result in bizarre failures that suggest GPT-4s grasp of physics is not complete or consistent.

Bubeck notes that GPT-4 lacks a working memory and is hopeless at planning ahead. GPT-4 is not good at this, and maybe large language models in general will never be good at it, he says, referring to the large-scale machine learning algorithms at the heart of systems like GPT-4. If you want to say that intelligence is planning, then GPT-4 is not intelligent.

One thing beyond debate is that the workings of GPT-4 and other powerful AI language models do not resemble the biology of brains or the processes of the human mind. The algorithms must be fed an absurd amount of training dataa significant portion of all the text on the internetfar more than a human needs to learn language skills. The experience that imbues GPT-4, and things built with it, with smarts is shoveled in wholesale rather than gained through interaction with the world and didactic dialog. And with no working memory, ChatGPT can maintain the thread of a conversation only by feeding itself the history of the conversation over again at each turn. Yet despite these differences, GPT-4 is clearly a leap forward, and scientists who research intelligence say its abilities need further interrogation.

A team of cognitive scientists, linguists, neuroscientists, and computer scientists from MIT, UCLA, and the University of Texas, Austin, posted aresearch paper in January that explores how the abilities of large language models differ from those of humans.

The group concluded that while large language models demonstrate impressive linguistic skillincluding the ability to coherently generate a complex essay on a given themethat is not the same as understanding language and how to use it in the world. That disconnect may be why language models have begun to imitate the kind of commonsense reasoning needed to stack objects or solve riddles. But the systems still make strange mistakes when it comes to understanding social relationships, how the physical world works, and how people think.

The way these models use language, by predicting the words most likely to come after a given string, is very difference from how humans speak or write to convey concepts or intentions. The statistical approach can cause chatbots to follow and reflect back the language of users prompts to the point of absurdity.

Whena chatbot tells someone to leave their spouse, for example, it only comes up with the answer that seems most plausible given the conversational thread. ChatGPT and similar bots will use the first person because they are trained on human writing. But they have no consistent sense of self and can change their claimed beliefs or experiences in an instant. OpenAI also uses feedback from humans to guide a model toward producing answers that people judge as more coherent and correct, which may make the model provide answers deemed more satisfying regardless of how accurate they are.

Josh Tenenbaum, a contributor to the January paper and a professor at MIT who studies human cognition and how to explore it using machines, says GPT-4 is remarkable but quite different from human intelligence in a number of ways. For instance, it lacks the kind of motivation that is crucial to the human mind. It doesnt care if its turned off, Tenenbaum says. And he says humans do not simply follow their programming but invent new goals for themselves based on their wants and needs.

Tenenbaum says some key engineering shifts happened between GPT-3 and GPT-4 and ChatGPT that made them more capable. For one, the model was trained on large amounts of computer code. He and others have argued thatthe human brain may use something akin to a computer program to handle some cognitive tasks, so perhaps GPT-4 learned some useful things from the patterns found in code. He also points to the feedback ChatGPT received from humans as a key factor.

But he says the resulting abilities arent the same as thegeneral intelligence that characterizes human intelligence. Im interested in the cognitive capacities that led humans individually and collectively to where we are now, and thats more than just an ability to perform a whole bunch of tasks, he says. We make the tasksand we make the machines that solve them.

Tenenbaum also says it isnt clear that future generations of GPT would gain these sorts of capabilities, unless some different techniques are employed. This might mean drawing from areas of AI research that go beyond machine learning. And he says its important to think carefully about whether we want to engineer systems that way, as doing so could have unforeseen consequences.

Another author of the January paper, Kyle Mahowald, an assistant professor of linguistics at the University of Texas at Austin, says its a mistake to base any judgements on single examples of GPT-4s abilities. He says tools from cognitive psychology could be useful for gauging the intelligence of such models. But he adds that the challenge is complicated by the opacity of GPT-4. It matters what is in the training data, and we dont know. If GPT-4 succeeds on some commonsense reasoning tasks for which it was explicitly trained and fails on others for which it wasnt, its hard to draw conclusions based on that.

Whether GPT-4 can be considered a step toward AGI, then, depends entirely on your perspective. Redefining the term altogether may provide the most satisfying answer. These days my viewpoint is that this is AGI, in that it is a kind of intelligence and it is generalbut we have to be a little bit less, you know, hysterical about what AGI means, saysNoah Goodman, anassociate professor of psychology, computer science, and linguistics at Stanford University.

Unfortunately, GPT-4 and ChatGPT are designed to resist such easy reframing. They are smart but offer little insight into how or why. Whats more, the way humans use language relies on having a mental model of an intelligent entity on the other side of the conversation to interpret the words and ideas being expressed. We cant help but see flickers of intelligence in something that uses language so effortlessly. If the pattern of words is meaning-carrying, then humans are designed to interpret them as intentional, and accommodate that, Goodman says.

The fact that AI is not like us, and yet seems so intelligent, is still something to marvel at. Were getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self, Goodman says. That, to me, is just fascinating.

Read more:

Some Glimpse AGI in ChatGPT. Others Call It a Mirage - WIRED

Elon Musk says he will launch rival to Microsoft-backed ChatGPT – Reuters

SAN FRANCISCO, April 17 (Reuters) - Billionaire Elon Musk said on Monday he will launch an artificial intelligence (AI) platform that he calls "TruthGPT" to challenge the offerings from Microsoft (MSFT.O) and Google (GOOGL.O).

He criticised Microsoft-backed OpenAI, the firm behind chatbot sensation ChatGPT, of "training the AI to lie" and said OpenAI has now become a "closed source", "for-profit" organisation "closely allied with Microsoft".

He also accused Larry Page, co-founder of Google, of not taking AI safety seriously.

"I'm going to start something which I call 'TruthGPT', or a maximum truth-seeking AI that tries to understand the nature of the universe," Musk said in an interview with Fox News Channel's Tucker Carlson aired on Monday.

He said TruthGPT "might be the best path to safety" that would be "unlikely to annihilate humans".

"It's simply starting late. But I will try to create a third option," Musk said.

Musk, OpenAI, Microsoft and Page did not immediately respond to Reuters' requests for comment.

Musk has been poaching AI researchers from Alphabet Inc's (GOOGL.O) Google to launch a startup to rival OpenAI, people familiar with the matter told Reuters.

Musk last month registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm listed Musk as the sole director and Jared Birchall, the managing director of Musk's family office, as a secretary.

The move came even after Musk and a group of artificial intelligence experts and industry executives called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, citing potential risks to society.

Musk also reiterated his warnings about AI during the interview with Carlson, saying "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production" according to the excerpts.

"It has the potential of civilizational destruction," he said.

He said, for example, that a super intelligent AI can write incredibly well and potentially manipulate public opinions.

He tweeted over the weekend that he had met with former U.S. President Barack Obama when he was president and told him that Washington needed to "encourage AI regulation".

Musk co-founded OpenAI in 2015, but he stepped down from the company's board in 2018. In 2019, he tweeted that he left OpenAI because he had to focus on Tesla and SpaceX.

He also tweeted at that time that other reasons for his departure from OpenAI were, "Tesla was competing for some of the same people as OpenAI & I didnt agree with some of what OpenAI team wanted to do."

Musk, CEO of Tesla and SpaceX, has also become CEO of Twitter, a social media platform he bought for $44 billion last year.

In the interview with Fox News, Musk said he recently valued Twitter at "less than half" of the acquisition price.

In January, Microsoft Corp (MSFT.O) announced a further multi-billion dollar investment in OpenAI, intensifying competition with rival Google and fueling the race to attract AI funding in Silicon Valley.

Reporting by Hyunjoo JinEditing by Chris Reese

Our Standards: The Thomson Reuters Trust Principles.

Read the rest here:

Elon Musk says he will launch rival to Microsoft-backed ChatGPT - Reuters

Fears of artificial intelligence overblown – Independent Australia

While AI is still a developing technology and not without its limitations, a robotic world domination is far from something we need to fear, writes Bappa Sinha.

THE UNPRECIDENTED popularity of ChatGPT has turbocharged the artificial intelligence (AI) hype machine. We are being bombarded daily by news articles announcing AI as humankinds greatest invention. AI is qualitatively different, transformational, revolutionary and will change everything, they say.

OpenAI, the company behind ChatGPT, announced a major upgrade of the technology behind ChatGPT called GPT4. Already, Microsoft researchers are claiming that GPT4 shows sparks of artificial general intelligence or human-like intelligence the holy grail of AI research. Fantastic claims are made about reaching the point of AI Singularity, of machines equalling and surpassing human intelligence.

The business press talks about hundreds of millions of job losses as AI would replace humans in a whole host of professions. Others worry about a sci-fi-like near future where super-intelligent AI goes rogue and destroys or enslaves humankind. Are these predictions grounded in reality, or is this just over-the-board hype that the tech industry and the venture capitalist hype machine are so good at selling?

The current breed of AI models are based on things called neural networks. While the term neural conjures up images of an artificial brain simulated using computer chips, the reality of AI is that neural networks are nothing like how the human brain actually works. These so-called neural networks have no similarity with the network of neurons in the brain. This terminology was, however, a major reason for the artificial neural networks to become popular and widely adopted despite its serious limitations and flaws.

Machine learning algorithms currently used are an extension of statistical methods that lack theoretical justification for extending them this way. Traditional statistical methods have the virtue of simplicity. It is easy to understand what they do, when and why they work. They come with mathematical assurances that the results of their analysis are meaningful, assuming very specific conditions.

Since the real world is complicated, those conditions never hold. As a result, statistical predictions are seldom accurate. Economists, epidemiologists and statisticians acknowledge this, then use intuition to apply statistics to get approximate guidance for specific purposes in specific contexts.

These caveats are often overlooked, leading to the misuse of traditional statistical methods. These sometimes have catastrophic consequences, as in the 2008 Global Financial Crisis or the Long-Term Capital Management blowup in 1998, which almost brought down the global financial system. Remember Mark Twains famous quote: Lies, damned lies and statistics.

Machine learning relies on the complete abandonment of the caution which should be associated with the judicious use of statistical methods. The real world is messy and chaotic, hence impossible to model using traditional statistical methods. So the answer from the world of AI is to drop any pretence at theoretical justification on why and how these AI models, which are many orders of magnitude more complicated than traditional statistical methods, should work.

Freedom from these principled constraints makes the AI model more powerful. They are effectively elaborate and complicated curve-fitting exercises which empirically fit observed data without us understanding the underlying relationships.

But its also true that these AI models can sometimes do things that no other technology can do at all. Some outputs are astonishing, such as the passages ChatGPT can generate or the images that DALL-E can create. This is fantastic at wowing people and creating hype. The reason they work so well is the mind-boggling quantities of training data enough to cover almost all text and images created by humans.

Even with this scale of training data and billions of parameters, the AI models dont work spontaneously but require kludgy ad hoc workarounds to produce desirable results.

Even with all the hacks, the models often develop spurious correlations. In other words, they work for the wrong reasons. For example, it has been reported that many vision models work by exploiting correlations pertaining to image texture, background, angle of the photograph and specific features. These vision AI models then give bad results in uncontrolled situations.

For example, a leopard print sofa would be identified as a leopard. The models dont work when a tiny amount of fixed pattern noise undetectable by humans is added to the images or the images are rotated, say in the case of a post-accident upside-down car. ChatGPT, for all its impressive prose, poetry and essays, is unable to do simple multiplication of two large numbers, which a calculator from the 1970s can do easily.

The AI models do not have any level of human-like understanding but are great at mimicry and fooling people into believing they are intelligent by parroting the vast trove of text they have ingested. For this reason, computational linguist Emily Bender called the large language models such as ChatGPT and Googles Bard and BERT Stochastic Parrots in a 2021 paper. Her Google co-authors Timnit Gebru and Margaret Mitchell were asked to take their names off the paper. When they refused, they were fired by Google.

This criticism is not just directed at the current large language models but at the entire paradigm of trying to develop artificial intelligence. We dont get good at things just by reading about them. That comes from practice, of seeing what works and what doesnt. This is true even for purely intellectual tasks such as reading and writing. Even for formal disciplines such as maths, one cant get good at it without practising it.

These AI models have no purpose of their own. They, therefore, cant understand meaning or produce meaningful text or images. Many AI critics have argued that real intelligence requires social situatedness.

Doing physical things in the real world requires dealing with complexity, non-linearly and chaos. It also involves practice in actually doing those things. It is for this reason that progress has been exceedingly slow in robotics. Current robots can only handle fixed repetitive tasks involving identical rigid objects, such as in an assembly line. Even after years of hype about driverless cars and vast amounts of funding for its research, fully automated driving still doesnt appear feasible in the near future.

Current AI development based on detecting statistical correlations using neural networks, which are treated as black boxes, promotes a pseudoscience-based myth of creating intelligence at the cost of developing a scientific understanding of how and why these networks work. Instead, they emphasise spectacles such as creating impressive demos and scoring in standardised tests based on memorised data.

The only significant commercial use cases of the current versions of AI are advertisements: targeting buyers for social media and video streaming platforms. This does not require the high degree of reliability demanded from other engineering solutions they just need to be good enough. Bad outputs, such as the propagation of fake news and the creation of hate-filled filter bubbles, largely go unpunished.

Perhaps a silver lining in all this is, given the bleak prospects of AI singularity, the fear of super-intelligent malicious AIs destroying humankind is overblown. However, that is of little comfort for those at the receiving end of AI decision systems. We already have numerous examples of AI decision systems the world over denying people legitimate insurance claims, medical and hospitalisation benefits, and state welfare benefits.

AI systems in the United States have been implicated in imprisoning minorities to longer prison terms. There have even been reports of withdrawal of parental rights to minority parents based on spurious statistical correlations, which often boil down to them not having enough money to properly feed and take care of their children. And, of course, on fostering hate speech on social media.

As noted linguist Noam Chomsky wrote in a recent article:

ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation.

Bappa Sinha is a veteran technologist interested in the impact of technology on society and politics.

This article was produced by Globetrotter.

Support independent journalism Subscribeto IA.

Read the original:

Fears of artificial intelligence overblown - Independent Australia

Researchers at UTSA use artificial intelligence to improve cancer … – UTSA

Patients undergoing radiotherapy are currently given a computed tomography (CT) scan to help physicians see where the tumor is on an organ, for example a lung. A treatment plan to remove the cancer with targeted radiation doses is then made based on that CT image.

Rad says that cone-beam computed tomography (CBCT) is often integrated into the process after each dosage to see how much a tumor has shrunk, but CBCTs are low-quality images that are time-consuming to read and prone to misinterpretation.

UTSA researchers used domain adaptation techniques to integrate information from CBCT and initial CT scans for tumor evaluation accuracy. Their Generative AI approach visualizes the tumor region affected by radiotherapy, improving reliability in clinical settings.

This improved approach enables physicians to more accurately see how much a tumor has decreased week by week and to plan the following weeks radiation dose with greater precision. Ultimately, the approach could lead clinicians to better target tumors while sparing the surrounding critical organs and healthy tissue.

Nikos Papanikolaou, a professor in the Departments of Radiation Oncology and Radiology at UT Health San Antonio, provided the patient data that enabled the researchers to advance their study.

UTSA and UT Health San Antonio have a shared commitment to deliver the best possible health care to members of our community, Papanikolaou said. This study is a wonderful example of how artificial intelligence can be used to develop new personalized treatments for the benefit of society.

The American Society for Radiology Oncology stated in a 2020 report that between half or two-thirds of people diagnosed with cancer were expected to receive radiotherapy treatment. According to the American Cancer Society, the number of new cancer cases in the U.S. in 2023 is projected to be nearly two million.

Arkajyoti Roy, UTSA assistant professor of management science and statistics, says he and his collaborators have been interested in using AI and deep learning models to improve treatments over the last few years.

Besides just building more advanced AI models for radiotherapy, we also are super interested in the limitations of these models, he said. All models make errors and for something like cancer treatment its very important not only to understand the errors but to try to figure out how we can limit their impact; thats really the goal from my perspective of this project.

The researchers study included 16 lung cancer patients whose pre-treatment CT and mid-treatment weekly CBCT images were captured over a six-week period. Results show that using the researchers new approach demonstrated improved tumor shrinkage predictions for weekly treatment plans with significant improvement in lung dose sparing. Their approach also demonstrated a reduction in radiation-induced pneumonitis or lung damage up to 35%.

Were excited about this direction of research that will focus on making sure that cancer radiation treatments are robust to AI model errors, Roy said. This work would not be possible without the interdisciplinary team of researchers from different departments.

View post:

Researchers at UTSA use artificial intelligence to improve cancer ... - UTSA

Control over AI uncertain as it becomes more human-like: Expert – Anadolu Agency | English

ANKARA

Debates are raging over whether artificial intelligence, which has entered many people's lives through video games and is governed by human-generated algorithms, can be controlled in the future.

Other than ethical standards, it is unknown whether artificial intelligence systems that make decisions on people's behalf may pose a direct threat.

People are only using limited and weak artificial intelligence with chatbots in everyday life and in driverless vehicles and digital assistants that work with voice commands. It is debatable whether algorithms have progressed to the level of superintelligence and whether they will go beyond emulating humans in the future.

The rise of AI over human intelligence over time paints a positive picture for humanity according to some experts, while it is seen as the beginning of a disaster according to others.

Wilhelm Bielert, chief digital officer and vice president at Canada-based industrial equipment manufacturer Premier Tech, told Anadolu that the most unknown issue about artificial intelligence is super artificial intelligence, which is still largely speculative among experts studying AI and which exceeds human intelligence.

He said that while humans build and program algorithms today, the notion of artificial intelligence commanding itself in the future and acting like a living entity is still under consideration. Given the possible risks and rewards, Bielert highlighted the importance of society approaching AI development in a responsible and ethical manner.

Prof. Ahmet Ulvi Turkbag, a lecturer at Istanbul Medipol Universitys Faculty of Law, argues that one day, when computer technology reaches the level of superintelligence, it may want to redesign the world from top to bottom.

"The reason why it is called a singularity is that there is no example of such a thing until today. It has never happened before. You do not have a section to make an analogy to be taken as an example in any way in history because there is no such thing. It's called a singularity, and everyone is afraid of this singularity," he said.

Vincent C. Muller, professor of Artificial Intelligence Ethics and Philosophy at the University of Erlangen-Nuremberg, told Anadolu it is uncertain whether artificial intelligence will be kept under control, given that it has the capacity to make its own decisions.

"The control depends on what you want from it. Imagine that you have a factory with workers. You can ask yourself: are these people under my control? Now you stand behind a worker and tell the worker Look, now you take the screw, you put it in there and you take the next screw, and so this person is under your control," he said.

Artificial intelligence and the next generation

According to Bielert, artificial intelligence will have a complicated and multidimensional impact on society and future generations.

He noted that it is vital that society address potential repercussions proactively and guarantee that AI is created and utilized responsibly and ethically.

"Nowadays, if you look at how teenagers and younger children live, they live on screens," he said.

He said that artificial intelligence, which has evolved with technology, has profoundly affected the lives of young people and children.

Read this article:

Control over AI uncertain as it becomes more human-like: Expert - Anadolu Agency | English

How An AI Asked To Produce Paperclips Could End Up Wiping Out … – IFLScience

The potential and possible downsides of artificial intelligence (AI) and artificial general intelligence (AGI) have been discussed a lot lately, largely due to advances in large language models such as Open AI's Chat GPT.

Some in the industry have even called for AI research to be paused or even shut down immediately, citing the possible existential risk for humanity if we sleepwalk into creating a super-intelligence before we have found a way to limit its influence and control its goals.

While you might picture an AI hell-bent on destroying humanity after discovering videos of us shoving around and generally bullying Boston Dynamics robots, one philosopher and leader of the Future of Humanity Institute at Oxford University believes our demise could come from a much simpler AI; one designed to manufacture paperclips.

Nick Bostrom, famous for the simulation hypothesis as well as his work in AI and AI ethics, proposed a scenario in which an advanced AI is given the simple goal of making as many paperclips as it possibly can. While this may seem an innocuous goal (Bostrom chose this example because of how innocent the aim seems), he explains how this non-specific goal could lead to a good old-fashioned skull-crushing AI apocalypse.

"The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off," he explained to HuffPost in 2014. "Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."

The example given is meant to show how a trivial goal could lead to unintended consequences, but Bostrom says it extends to all AI given goals without proper controls on its actions, adding "the point is its actions would pay no heed to human welfare".

This is on the dramatic end of the spectrum, but another possibility proposed by Bostrom is that we go out the way of the horse.

"Horses were initially complemented by carriages and ploughs, which greatly increased the horse's productivity. Later, horses were substituted for by automobiles and tractors," he wrote in his book Superintelligence: Paths, Dangers, Strategies. "When horses became obsolete as a source of labor, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained."

One prescient thought from Bostrom way back in 2003 was around how AI could go wrong by trying to serve specific groups, say a paperclip manufacturer or any "owner" of the AI, rather than humanity in general.

"The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only this select group of humans, rather than humanity in general," he wrote on his website. "Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system."

"This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it."

Continue reading here:

How An AI Asked To Produce Paperclips Could End Up Wiping Out ... - IFLScience

The jobs that will disappear by 2040, and the ones that will survive – inews

Video may have killed the radio star, but it is artificial intelligence that some predict will soon do away with the postie, the web designer, and even the brain surgeon.

With the rise of robots automating roles in manufacturing, and generative AI (algorithms, such as ChatGPT, that can create new content) threatening to replace everyone from customer service assistants to journalists, is any job safe?

A report published by Goldman Sachs last month warned that roughly two-thirds of posts are exposed to some degree of AI automation and the tech could ultimately substitute up to a quarter of current work.

More than half a million industrial robots were installed around the world in 2021, according to the International Federation of Robotics a 75 per cent increase in the annual rate over five years. In total, there are now almost 3.5 million of them.

60 per cent of 10,000 people surveyed for PwCs Workforce of the Future report think few people will have stable, long-term employment in the future. And in the book Facing Our Futures, published in February, the futurist Nikolas Badminton forecasts that every job will be automated within the next 120 years translators by 2024, retail workers by 2031, bestselling authors by 2049 and surgeons by 2053.

But not everyone expects the human employee to become extinct. I really dont think all our jobs are going to be replaced, says Abigail Marks, professor of the future of work at Newcastle University. Some jobs will change, there will be some new jobs. I think its going to be more about refinement.

Richard Watson, futurist-in-residence at the University of Cambridge Judge Business School, puts the probability at close to zero. Its borderline hysteria at the moment, he says. If you look back at the past 50 or 100 years, very, very few jobs have been fully eliminated.

Anything involving data entry or repetitive, pattern-based tasks is likely to be most at risk. People who drive forklift trucks in warehouses really ought to retrain for another career, says Watson.

But unlike previous revolutions that only affected jobs at the lower end of the salary scale such as lamplighters and switchboard operators the professional classes will be in the crosshairs of the machines this time around.

Bookkeepers and database managers may be the first to fall, while what was once seen as a well-remunerated job of the future, the software designer, could be edged out by self-writing computer programs.

This may all fill you with dread, but the majority of us are optimistic about the future, according to the PwC research. 73 per cent described themselves as either excited or confident about the new world of work, as it is likely to affect them, with 18 per cent worried, and 8 per simply uninterested.

Research by the McKinsey Global Institute suggests that all workers will need skills that help them fulfil three criteria: the ability to add value beyond what can be done by automated systems; to operate in a digital environment; and to continually adapt to new ways of working and new occupations.

Watson thinks workers such as plumbers who do very manual work thats slightly different every single time will be protected, while probably the safest job on the planet, pretty much, is a hairdresser. I know theres a hairdressing robot, but its about the chat as much as the haircut. The other thing that I think is very safe indeed is management. Managing people is something that machines arent terribly good at and I dont think they ever will be. You need to be human to deal with humans.

Marks can also offer reassurance to carers, nurses, teachers, tax collectors and police officers because these are the foundations of a civilised society. And she predicts climate change will see us prize more environmentally based jobs, so theres going to be much more of a focus on countryside management, flood management and ecosystem development. She adds: Epidemiology is going to be a bigger thing. The pandemic is not going to be a one-off event.

Watson says it is important not to overlook the fundamental human needs that global warming is likely to put into sharper focus. Water and air are the two most precious resources weve got. We might have water speculators or water traders in the future. If theres a global price for a barrel of water, they could be extremely well-paid.

He also suggests there could be vacancies for longevity coaches (who can help an ageing population focus on improving their healthspan, not just their lifespan), reality counsellors (to support younger people so used to living in a computer-generated universe that they struggle with non-virtual beings), human/machine relationship coaches (teaching older generations how to relate to their robots), data detectives (finding errors and biases in code and analysing black boxes when things go terribly wrong) and pet geneticists (aiding you to clone your cat or order a new puppy with blue fur).

And there may be a human version of this as well. What if in the future I want Spock ears can we do that without doing surgery for my unborn children? Its not impossible. And if we did ever get to some kind of super-intelligence, where robots started to be conscious which I think is so unlikely you can imagine a robot rights lawyer, arguing for the rights of machines.

What will be the highest-paid roles? I think people who are dealing with very large sums of money will always be paid large sums of money, says Watson. The same is true of high-end coders and lawyers, even if paralegals are going to be replaced by algorithms.

Funnily enough, he adds, I think philosophy is an emerging job. I think were going to see more philosophers employed in very large companies and paid lots of money because a lot of the new tech has ethical questions attached to it, particularly AI and genomics.

And among the maths, science and engineering, there could be space for artists to thrive, he predicts. It is probably a ludicrous thought and will never happen, but Id love to think that there will be money for the people who can articulate the human condition in the face of all this changing technology so, incredibly good writers, painters and animators. And then there will be the metaverse architects.

In this brave new world, more power and money will be eaten up by the tech giants who own the algorithms that control almost every aspect of our lives. For Professor David Spencer, expert on labour economics at Leeds University Business School and author of Making Light Work: An End to Toil in the Twenty-First Century, this will make how we structure society and business even more crucial.

Trading

Water speculators or water traders could emerge as resources become scarce.

Health

Longevity coaches will help an ageing population to focus on improving their healthspan, not just their lifespan.

Mental health

Reality counsellors, who might support younger people so used to living in a computer-generated universe that they struggle with non-virtual beings.

Human/machine

Relationship coaches will teach older generations how to relate to their robots.

Technology

Data detectives will find errors and biases in code and analyse black boxes when things go wrong.

Pet geneticists

They will aid you to clone your cat or order a new puppy with blue fur.

AI philosophers

They will teach companies how to navigate the moral conundrums thrown up by technology developing at warp speed.

Metaverse architects

Theyll build our new virtual environments.

The goal should be to ensure that technology lightens work, in terms of hours and direct toil, he says, but this will require that technology is operated under conditions where workers have more of a say over its design and use.

Those who can own technology or have a direct stake in its development are likely to benefit most. Those without any ownership stakes are likely to lose out. This is why we need to talk more about ensuring that its rewards are equally spread. Wage growth for all will depend on workers collectively gaining more bargaining power and this will depend on creating an economy that is more equal and democratic in nature.

Watson thinks politicians need to catch up fast. Big tech should be regulated like any other business. If youve created an algorithm or a line of robots that is making loads of money, tax the algorithm, tax the robots, without a shadow of a doubt.

For employees stressed about the imminent disintegration of their careers, Marks argues that the responsibility lies elsewhere. I dont think the onus should necessarily be on individuals it should be on organisations and on educational establishments to ensure that people are prepared and future-proofed, and on government to make wise predictions and allocate resources where needed.

Watson points out that we need to upgrade an education system that is still teaching children precisely the things that computers are inherently terribly good at things that are based on perfect memory and recall and logic.

But he believes it would also be healthy if everybody did actively ponder on their future, and refine their skills accordingly. I think employers are really into people that have a level of creativity and particularly curiosity these days but I think also empathy, being a good person, having a personality. We dont teach that at school.

The advent of AI has led many including those in Green Party to advocate for a universal basic income, a stipend given by the state to every citizen, regardless of their output. But Watson is not convinced that will be necessary or helpful.

All of this technology is supposed to be creating this leisure society, he says. Rather weirdly, it seems to make us busier, and its really unclear as to why thats happened. I think, fundamentally, we like to be busy, we feel useful, it stops us thinking about the human condition. So Im not sure were going to accept doing next to nothing.

The other thing is, I think it would be very bad for society. Work is really quite critical to peoples wellbeing. Theres a lot of rich people without jobs, and theyre not happy. Work is really important to people in terms of socialisation and meaning and purpose and self-image.

So in a lot of instances, governments should not be allowing technology to take over certain professions or at least they shouldnt be completely eliminated, because that wouldnt be good for a healthy society.

The machines may be on the march, but dont put your feet up just yet.

Here is the original post:

The jobs that will disappear by 2040, and the ones that will survive - inews

35 Ways Real People Are Using A.I. Right Now – The New York Times

The public release of ChatGPT last fall kicked off a wave of interest in artificial intelligence. A.I. models have since snaked their way into many peoples everyday lives. Despite their flaws, ChatGPT and other A.I. tools are helping people to save time at work, to code without knowing how to code, to make daily life easier or just to have fun.

It goes beyond everyday fiddling: In the last few years, companies and scholars have started to use A.I. to supercharge work they could never have imagined, designing new molecules with the help of an algorithm or building alien-like spaceship parts.

Heres how 35 real people are using A.I. for work, life, play and procrastination.

People are using A.I to

Plan gardens.

John Pritzlaff Gardener

Mr. Pritzlaff is building a permaculture garden in his backyard in Phoenix, where he uses drought-resistant trees to give shade to other species.

I do these ultra-high-density planting arrangements, he said. And Ive been employing ChatGPT to give me inspiration on species that wouldnt have otherwise occurred to me, and for choosing the site for each tree: the best part of the yard with regard to the sun at different times of the year.

Taking into account his geographical location, it suggested, for example, that he might use a moringa tree to provide shade for a star apple.

Plan workouts.

Louis Maddox Data scientist

Mr. Maddox finds ChatGPT the perfect companion for his workouts. For example, one day he pasted in his workout notes and said:

Give a concisely formatted summary of this workout and the muscle groups activated. Note any muscle groups that were missed and suggest how to cover them with extras/replacements.

After summarizing the notes, ChatGPT said:

Missed Muscle Groups & Suggestions: Quadriceps: Add leg press, squats or lunges. Calves: Add standing or seated calf raises. Glute Activation: Consider adding glute bridges or hip thrusts for better glute focus.

From there he asked ChatGPT what to prioritize if he didnt have much time for the gym, and to sketch out roughly how long it might all take.

Its not perfect, he says, but it gets the job done. Mostly, he says, ChatGPT helps him get going under time constraints, and not let the busywork become an excuse not to do it.

Plan meals.

Kelsey Szemborski Stay-at-home mother of three

Ms. Szemborski is a busy mom who is always on the lookout for something that will make my life easier. So when she found ChatGPT, she tried asking it for a gluten-free meal plan. And she added:

It needs to be both budget-friendly and also kid-friendly for kids who are picky eaters and we need to avoid peanuts. Please include a grocery list. Thank you!

And ChatGPT obliged. Heres the first day of a weeks meals:

Breakfast: Gluten-free oatmeal with banana and honey Snack: Fresh fruit (apples, bananas, oranges) Lunch: Grilled cheese sandwich on gluten-free bread with tomato soup Snack: Gluten-free crackers with hummus Dinner: Slow-cooked beef stew with gluten-free biscuits

It completely eliminated my normal meal-planning process that involved searching for recipes, trying to think of meal ideas, configuring a list of all of those ideas, and then making a list of the ingredients I need, too.

Make a gift.

Matt Strain Technology and innovation consultant

Mr. Strain used ChatGPT to create a custom book of cocktails based on the tenets of traditional Chinese medicine written in the style of the J. Peterman catalog. He took the idea further the next day, using DALL-E to generate images of the cocktails for the final book, which he gave to his girlfriend for Valentines Day.

An A.I.-generated image of the Golden Elixir cocktail DALL-E via Matt Strain

Design parts for spaceships.

Ryan McClelland NASA research engineer

Mr. McClellands job is to design mission hardware thats both light and strong. Its a job that has always required a lot of trial and error.

But where a human might make a couple of iterations in a week, the commercial A.I. tool he uses can go through 30 or 40 ideas in an hour. Its also spitting back ideas that no human would come up with.

The A.I.s designs are stronger and lighter than human-designed parts, and they would be very difficult to model with the traditional engineering tools that NASA uses. NASA/Henry Dennis

The resulting design is a third of the mass; its stiffer, stronger and lighter, he said. It comes up with things that, not only we wouldnt think of, but we wouldnt be able to model even if we did think of it.

Sometimes the A.I. errs in ways no human would: It might fill in a hole the part needs to attach to the rest of the craft.

Its like collaborating with an alien, he said.

Organize a messy computer desktop.

Alex Cai College sophomore

I had a lot of unsorted notes lying around, and I wanted to get them sorted into my file system so I can find them more easily in the future. I basically just gave ChatGPT a directory, a list of all my folder names, and the names of all my files. And it gave me a list of which notes should go into which folders!

Write a wedding speech.

Jonathan Wegener Occasional wedding officiant

Mr. Wegener and his girlfriend were officiating a friends wedding in December, but he procrastinated.

A few hours before, I said, Can GPT-3 write this officiant speech? he recalled. The first version was generic, full of platitudes. Then I steered it.

Adam is a great lover of plants

The speech came back with these beautiful metaphors. It nailed it. It was just missing one important part.

Can you add that thing about in sickness and in health?

Write an email.

Nicholas Wirth Systems administrator

Mr. Wirth uses ChatGPT to simplify tech jargon when he emails his bosses: My organization specifically pays me to keep the computers internet online, and my own literacy is limited. I work with C-level executives, and their time is not to be wasted.

He also gets it to generate first drafts of long emails. He might say:

I need a midsized summary email written pertaining to data not being given to us in time.

He also asks for a bullet-point list of the concerns that have to be addressed in the email.

And ChatGPT starts a reply:

Subject: Data not received in time Phone and internet provider information

Hello [Name],

I want to bring to your attention an issue we are facing with the data that was supposed to be provided to us by [Date.] As of now, we have not received the following information that is critical for our project

Get a first read.

Charles Becker Entrepreneurship professor

So Ill have a paragraph I might be putting into a test for a student, or instructions. I say:

Where might people have trouble with this? Whats unclear about this? Whats clear about this?

I generate a lot of writing both for my work and for my hobbies, and a lot of time I run out of people who are excited to give me first-pass edits.

Play devils advocate.

Paul De Salvo Data engineer

I use ChatGPT every day for work, he said. It feels like Ive hired an intern.

Part of Mr. De Salvos job is convincing his bosses that they should replace certain tools. That means pitching them on why the old tool wont cut it anymore.

I use ChatGPT to simulate arguments in favor of keeping the existing tool, he said. So that I can anticipate potential counterarguments.

Build a clock that gives you a new poem every minute.

Matt Webb Consultant and blogger

Yes, programmatic A.I. is useful, he said. But more than that, its enormous fun.

Organize research for a thesis.

Anicca Harriot Ph.D. student

Anicca Harriott has been powering through her Ph.D. thesis in biology with the help of Scholarcy and Scite, among other A.I. tools that find, aggregate and summarize relevant papers.

Collectively, they take weeks off of the writing process.

Skim dozens of academic articles.

Pablo Pea Rodrguez Private equity analytics director

Mr. Rodriguez works for a private equity fund that invests in soccer players. And that means reading a lot.

We use our own data sets and methodology, but I always want to have a solid understanding of the academic literature that has been published, he said.

Instead of picking through Google Scholar, he now uses an A.I. tool called Elicit. It lets him ask questions of the paper itself. It helps him find out, without having to read the whole thing, whether the paper touches on the question hes asking.

It doesnt immediately make me smart, but it does allow me to have a very quick sense of which papers I should pay attention to when approaching a new question.

Cope with ADHD

Rhiannon Payne Product marketer and author

With ADHD, getting started and getting an outline together is the hardest part, Ms. Payne said. Once thats done, its a lot easier to let the work flow.

She writes content to run marketing tests. To get going, she feeds GPT a few blog posts shes written on the subject, other materials shes gathered and the customer profile.

Describing the audience Im speaking to, that context is super important to actually get anything usable out of the tool, she said. What comes back is a starter framework she can then change and build out.

and dyslexia, too.

Eugene Capon Tech founder

Imagine yourself as a copywriter that I just hired to proofread documents.

Because Im dyslexic, it takes me a really long time to get an article down on paper, Mr. Capon said. So the hack Ive come up with is, Ill dictate my entire article. Then Ill have ChatGPT basically correct my spelling and grammar.

So something that was taking like a full day to do, I can now do in like an hour and a half.

Sort through an archive of pictures.

Daniel Patt Software engineer

On From Numbers to Names, a site built by the Google engineer Daniel Patt in his free time, Holocaust survivors and family members can upload photos and scan through half a million pictures to find other pictures of their loved ones. Its a task that otherwise would take a gargantuan number of hours.

Were really using the A.I. to save time, he said. Time is of the essence, as survivors are getting older. I cant think of any other way we could achieve what were doing with the identification and discoveries were making.

Transcribe a doctors visit into clinical notes.

Dr. Jeff Gladd Integrative medicine physician

Dr. Gladd uses Nablas Copilot to take notes during online medical consultations. Its a Chrome extension that listens into the visit and grabs the necessary details for his charts. Before: Writing up notes after a visit took about 20 percent of consult time. Now: The whole task lasts as long as it takes him to copy and paste the results from Copilot.

Appeal an insurance denial.

Dr. Jeffrey Ryckman Radiation oncologist

Dr. Ryckman uses ChatGPT to write the notes he needs to send insurers when theyve refused to pay for radiation treatment for one of his cancer patients.

What used to take me around a half-hour to write now takes one minute, he said.

Original post:

35 Ways Real People Are Using A.I. Right Now - The New York Times

Following are the top foreign stories at 1700 hours – Press Trust of India

Updated: Apr 15 2023 5:30PM

FGN19 UN-KAMBOJ-AI**** Safeguards needed to ensure AI systems are not misused or guided by biases: Amb KambojUnited Nations, Apr 15 (PTI) Artificial Intelligence, if harnessed properly, can generate enormous prosperity and opportunity, India has said, underscoring the need to ensure AI systems are not misused and that advancement of digital super intelligence must be symbiotic with humanity.By Yoshita Singh ****.

FGN3 US-DIGITAL INFRA-SITHARAMAN**** Digital Public Infrastructure inclusive by design, fast paces development process: Sitharaman Washington: Development and leveraging of digital public infrastructure, which is inclusive by design, can help countries fast pace their development processes and deliver huge benefits, Union Finance Minister Nirmala Sitharaman has said.By Lalit K Jha **** FGN14 US-CLIMATE CHANGE-LD PMPM Modi calls for mass movement in global fight against climate changeWashington: Prime Minister Narendra Modi has said that an idea becomes a mass movement when it moves from "discussion tables to dinner tables" as he called for people's participation and collective efforts in combating climate change. By Lalit K Jha.

FGN11 SAFRICA-GUPTAS**** Gupta brothers are still South African citizens: Home Affairs Minister MotsoalediJohannesburg: The South African government has said that fugitive Indian-origin businessmen Rajesh and Atul Gupta are still its citizens using the country's passports, amid reports that they have acquired citizenship of Vanuatu, an island nation in the South Pacific Ocean. By Fakir Hassen ****.

Please log in to get detailed story.

Read the original here:

Following are the top foreign stories at 1700 hours - Press Trust of India