{"id":169528,"date":"2024-06-03T02:39:38","date_gmt":"2024-06-03T06:39:38","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/openai-announces-new-safety-and-security-committee-as-the-ai-race-hots-up-and-concerns-grow-around-ethics-techradar\/"},"modified":"2024-08-18T12:48:30","modified_gmt":"2024-08-18T16:48:30","slug":"openai-announces-new-safety-and-security-committee-as-the-ai-race-hots-up-and-concerns-grow-around-ethics-techradar","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-general-intelligence\/openai-announces-new-safety-and-security-committee-as-the-ai-race-hots-up-and-concerns-grow-around-ethics-techradar.php","title":{"rendered":"OpenAI announces new Safety and Security Committee as the AI race hots up and concerns grow around ethics &#8211; TechRadar"},"content":{"rendered":"<p><p>OpenAI, the tech company behind ChatGPT, has announced that it's formed a Safety and Security Committee that's intended to make the firm's approach to AI more responsible and more consistent on security.<\/p>\n<p>It's no secret that OpenAI and CEO Sam Altman - who will sit on the committee - want to be the first to reach AGI (Artificial General Intelligence), broadly understood as artificial intelligence that resembles human intelligence and can teach itself. Having recently debuted GPT-4o to the public, OpenAI is already training the next-generation GPT model, which it expects to be one step closer to AGI.<\/p>\n<p>GPT-4o debuted to the public on May 13 as a next-level multimodal (capable of processing in multiple modes) generative AI model, able to take input and respond in audio, text, and images.
It was met with a generally positive reception, but discussion has since grown around its actual capabilities, its implications, and the ethics of technologies like it.<\/p>\n<p>Just over a week ago, OpenAI confirmed to Wired that its previous team responsible for overseeing the safety of its AI models had been disbanded and reabsorbed into other existing teams. This followed the notable departures of key company figures such as OpenAI co-founder and chief scientist Ilya Sutskever, and Jan Leike, co-lead of the AI safety Superalignment team. Their departures were reportedly related to concerns that OpenAI, and Altman in particular, was not doing enough to develop its technologies responsibly and was forgoing due diligence.<\/p>\n<p>This has seemingly given OpenAI a lot to reflect on, and it's formed the oversight committee in response. In the announcement post about the committee being formed, OpenAI also states that it welcomes a robust debate at this important moment. The committee's first job will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days, and then share its recommendations with the company's board.<\/p>\n<\/p>\n<p>The recommendations that are subsequently adopted will be shared publicly in a manner consistent with safety and security.<\/p>\n<p>The committee will be made up of Chairman Bret Taylor; Adam D'Angelo, CEO of Quora; and Nicole Seligman, a former Sony Entertainment executive; alongside six OpenAI employees, including Sam Altman, as mentioned, and John Schulman, an OpenAI researcher and co-founder. According to Bloomberg, OpenAI stated that it will also consult external experts as part of the process.<\/p>\n<p>I'll reserve my judgment for when OpenAI's adopted recommendations are published and I can see how they're implemented, but intuitively, I don't have the greatest confidence that OpenAI (or any major tech firm) is prioritizing safety and ethics as much as it's trying to win the AI race.<\/p>\n<p>That's a shame. Generally speaking, those striving to be the best no matter what are often slow to consider the costs and effects of their actions, and how they might impact others in a very real way - even when large numbers of people stand to be affected.<\/p>\n<p>I'll be happy to be proven wrong, and I hope I am. In an ideal world, all tech companies, whether they're in the AI race or not, would prioritize the ethics and safety of what they're doing as highly as they strive for innovation. So far in the realm of AI, that does not appear to be the case from where I'm standing, and unless there are real consequences, I don't see companies like OpenAI being swayed much to change their overall ethos or behavior.<\/p>\n<p>See the article here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/openai-announces-new-safety-and-security-committee-as-the-ai-race-hots-up-and-concerns-grow-around-ethics\" title=\"OpenAI announces new Safety and Security Committee as the AI race hots up and concerns grow around ethics - TechRadar\">OpenAI announces new Safety and Security Committee as the AI race hots up and concerns grow around ethics - TechRadar<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI, the tech company behind ChatGPT, has announced that it's formed a Safety and Security Committee that's intended to make the firm's approach to AI more responsible and more consistent on security.
It's no secret that OpenAI and CEO Sam Altman - who will sit on the committee - want to be the first to reach AGI (Artificial General Intelligence), broadly understood as artificial intelligence that resembles human intelligence and can teach itself. Having recently debuted GPT-4o to the public, OpenAI is already training the next-generation GPT model, which it expects to be one step closer to AGI <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-general-intelligence\/openai-announces-new-safety-and-security-committee-as-the-ai-race-hots-up-and-concerns-grow-around-ethics-techradar.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[1234933],"tags":[],"class_list":["post-169528","post","type-post","status-publish","format-standard","hentry","category-artificial-general-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/169528"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=169528"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/169528\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuma
n-news-blog\/wp-json\/wp\/v2\/media?parent=169528"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=169528"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=169528"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}