Coalition of AI leaders sees ‘societal-scale risks’ from the … – SiliconANGLE News

Posted: June 2, 2023 at 8:17 pm

A statement issued today and signed by more than 375 computer scientists, academics and business leaders warns of profound risks of artificial intelligence misuse and says the potential problems posed by the technology should be given the same urgency as pandemics and nuclear war.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," said the statement issued by the Center for AI Safety, a nonprofit organization dedicated to reducing AI risk.

The statement caps a flurry of recent calls by AI researchers and companies developing AI-based technologies to impose some form of government regulation on models to prevent them from being misused or creating unintended negative consequences.

Earlier this month, OpenAI LLC Chief Executive Sam Altman told the Senate Judiciary subcommittee that the U.S. government should consider licensing or registration requirements on AI models and that companies developing them should adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results.

Microsoft Corp. last week called for a set of regulations to be imposed on systems used in critical infrastructure as well as expanded laws clarifying the legal obligations of AI models and labels that make it clear when a computer produces an image or video.

Altman and OpenAI co-founder Ilya Sutskever were among the signatories to today's statement. Others include Demis Hassabis, chief executive of Google LLC's DeepMind; Microsoft Chief Technology Officer Kevin Scott; cybersecurity expert Bruce Schneier; and the co-founders of safe AI unicorn Anthropic PBC. Geoffrey Hinton, a Turing Award winner who earlier this month left Google over concerns about AI's potential for misuse, also signed the statement.

The Center for AI Safety cites eight principal risks inherent in AI. These include military weaponization, malicious misinformation and "enfeeblement," in which humanity loses the ability to self-govern and becomes completely dependent on machines, similar to the scenario portrayed in the film "WALL-E."

The center also expresses concerns that highly competent systems could give small groups of people too much power, exhibit unexplainable behavior and even intentionally deceive humans.

All this activity comes after the public release last November of OpenAI's ChatGPT intelligent chatbot. The uncanny humanlike interactive capabilities of the generative model have galvanized attention around AI's potential to replace human labor and given birth to a host of competitors.

However, subsequent media reports detailing the tendency of models to sometimes exhibit bizarre and hallucinatory behavior have raised concerns about the black-box nature of some AI models and sparked calls for greater transparency and accountability.

The statement drew praise from many quarters. "If large language models continue to advance, they will surpass human ability tenfold," wrote Nimrod Partush, vice president of AI and data science at cybersecurity analytics firm Cyesec Ltd., in emailed comments. "There is potential for a real existential risk for mankind. I am leaning toward seeing AI as a benevolent force for humanity, but I would still recommend extreme precautions."

"Philosophically, I believe the private sector should take care of AI governance, but that's not going to happen," said Ken Cox, president of web hosting service Hostirian LLC. "Unfortunately, I believe the government should have some regulations on AI, but they need to be minimal, and we need great leaders stepping up and educating through the process."

However, not everyone is convinced of AI's doomsday potential, and some questioned the group's motives in publicizing the statement so aggressively.

"If the top executives of the top AI companies believe AI creates a risk of human extinction, why don't they stop working on it instead of publishing press releases?" wrote software developer Dare Obasanjo on Bluesky Social.

"Their macho chest-thumping is pure marketing," wrote media pundit Jeff Jarvis.

"To the extent these risks are real, and many of them are, it's up to them, the developers and companies that own this technology and will use it, to come together and create industry standards," tweeted Yaron Brook, chairman of the board at the Ayn Rand Institute. "Stop running to government to solve your issues."

Business executives have been transfixed by the topic. A recent survey of senior executives by Gartner Inc. found that AI is the technology they believe will most significantly impact their industries over the next three years.
