{"id":1116772,"date":"2023-08-02T19:10:12","date_gmt":"2023-08-02T23:10:12","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/openai-forms-specialized-team-to-align-superintelligent-ai-with-fagen-wasanni\/"},"modified":"2023-08-02T19:10:12","modified_gmt":"2023-08-02T23:10:12","slug":"openai-forms-specialized-team-to-align-superintelligent-ai-with-fagen-wasanni","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/openai-forms-specialized-team-to-align-superintelligent-ai-with-fagen-wasanni\/","title":{"rendered":"OpenAI Forms Specialized Team to Align Superintelligent AI with &#8230; &#8211; Fagen wasanni"},"content":{"rendered":"<p>OpenAI, the company responsible for the development of ChatGPT, has established a dedicated team aimed at aligning superintelligent AI with human values. Led by Ilya Sutskever and Jan Leike, the team is allocating 20 percent of OpenAI's compute power to tackle the challenges of superintelligence alignment within a span of four years.<\/p>\n<p>AI alignment refers to the process of ensuring that artificial intelligence systems adhere to human objectives, ethics, and desires. When an AI system operates in accordance with these principles, it is considered aligned, whereas a system that deviates from these intentions is classified as misaligned. This challenge has been recognized since the early days of AI, with Norbert Wiener emphasizing the importance of aligning machine-driven objectives with genuine human desires as far back as 1960. The alignment process involves overcoming two main hurdles: defining the purpose of the system (outer alignment) and ensuring that the AI robustly adheres to this specification (inner alignment).
<\/p>\n<p>OpenAI's mission is to achieve superalignment within four years, with the aim of creating a human-level automated alignment researcher. This involves not only developing a system that understands human intent, but also one that can effectively regulate advancements in AI technologies. To achieve this goal, OpenAI, under the guidance of Ilya Sutskever and Jan Leike, is assembling a team of experts in machine learning and AI, inviting those who have not previously worked on alignment to contribute their expertise.<\/p>\n<p>The establishment of this specialized team addresses one of the most crucial unsolved technical problems of our time: superintelligence alignment. OpenAI recognizes the significance and urgency of this problem and calls upon the world's top minds to unite in solving it. It is through the continued progress of AI that we gain valuable tools to understand and create, which brings about numerous opportunities. Pausing AI development to exclusively address these problems would hinder progress and make problem-solving even more challenging due to a lack of appropriate tools.<\/p>\n<p>OpenAI's previous breakthrough in understanding AI's inner workings with its GPT-4 model serves as a foundation for addressing the potential existential threat that superintelligent AI presents to humanity. Through these efforts, OpenAI aims to develop safe and comprehensible AI systems, thereby mitigating any associated risks.<\/p>\n<p>Go here to see the original:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/fagenwasanni.com\/news\/openai-forms-specialized-team-to-align-superintelligent-ai-with-human-values\/99073\/\" title=\"OpenAI Forms Specialized Team to Align Superintelligent AI with ... - Fagen wasanni\">OpenAI Forms Specialized Team to Align Superintelligent AI with ...
- Fagen wasanni<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> OpenAI, the company responsible for the development of ChatGPT, has established a dedicated team aimed at aligning superintelligent AI with human values. Led by Ilya Sutskever and Jan Leike, the team is allocating 20 percent of OpenAI's compute power to tackle the challenges of superintelligence alignment within a span of four years <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/openai-forms-specialized-team-to-align-superintelligent-ai-with-fagen-wasanni\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187765],"tags":[],"class_list":["post-1116772","post","type-post","status-publish","format-standard","hentry","category-superintelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1116772"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1116772"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1116772\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1116772"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1116772"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1116772"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}