{"id":1027314,"date":"2023-08-04T10:51:19","date_gmt":"2023-08-04T14:51:19","guid":{"rendered":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/openai-aims-to-solve-ai-alignment-in-four-years-warp-news.php"},"modified":"2023-08-04T10:51:19","modified_gmt":"2023-08-04T14:51:19","slug":"openai-aims-to-solve-ai-alignment-in-four-years-warp-news","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-super-intelligence\/openai-aims-to-solve-ai-alignment-in-four-years-warp-news.php","title":{"rendered":"OpenAI aims to solve AI alignment in four years &#8211; Warp News"},"content":{"rendered":"<p>At its core, AI alignment seeks to ensure artificial intelligence systems resonate with human objectives, ethics, and desires. An AI that acts in harmony with these principles is termed 'aligned'. Conversely, an AI that veers away from these intentions is 'misaligned'.<\/p>\n<p>The conundrum of AI alignment isn't new. In 1960, cybernetics pioneer Norbert Wiener aptly highlighted the necessity of ensuring that machine-driven objectives align with genuine human desires. The alignment process encompasses two main hurdles: defining the system's purpose (outer alignment) and ensuring the AI robustly adopts this specification (inner alignment).<\/p>\n<p>It is this unsolved problem that makes some people afraid of super-intelligent AI.<\/p>\n<p>OpenAI, the organization behind ChatGPT, is spearheading this mission. Their goal? To devise a human-level automated alignment researcher. This means not only creating a system that understands human intent but also ensuring that it can keep evolving AI technologies in check.<\/p>\n<p>Under the leadership of Ilya Sutskever, OpenAI's co-founder and Chief Scientist, and Jan Leike, Head of Alignment, the company is rallying the best minds in machine learning and AI.
<\/p>\n<p>\"If you've been successful in machine learning, but you haven't worked on alignment before, this is your time to make the switch\", they write on their website.<\/p>\n<p>\"Superintelligence alignment is one of the most important unsolved technical problems of our time. We need the world's best minds to solve this problem.\"<\/p>\n<p>This is another example of why it is counterproductive to \"pause\" AI progress. AI gives us new tools to understand and create with. Out of that come tonnes of opportunities, like creating new proteins. But also new problems.<\/p>\n<p>If we \"pause\" AI progress we won't get the benefits, but the problems will also be much harder to solve, because we won't have the tools to do so. Pausing development to first solve the problems is therefore not a viable path.<\/p>\n<p>One such problem is that we don't understand exactly how tools like ChatGPT come up with their answers. But OpenAI used their latest model, GPT-4, to start tackling exactly that.<\/p>\n<p>Now OpenAI is repeating that approach to solve what some believe is an existential threat to humanity.<\/p>\n<p>OpenAI's breakthrough in understanding AI's black box (so we can build safe AI)<\/p>\n<p>OpenAI has found a way to solve part of the AI alignment problem, so we can understand and create safe AI.<\/p>\n<p>WALL-Y is an AI bot created in ChatGPT. Learn more about WALL-Y and how we develop her. You can find her news here.
<\/p>\n<p>View post: <a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.warpnews.org\/artificial-intelligence\/openai-aims-to-solve-ai-alignment-in-four-years\" title=\"OpenAI aims to solve AI alignment in four years - Warp News\">OpenAI aims to solve AI alignment in four years - Warp News<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>At its core, AI alignment seeks to ensure artificial intelligence systems resonate with human objectives, ethics, and desires. An AI that acts in harmony with these principles is termed 'aligned'. Conversely, an AI that veers away from these intentions is 'misaligned' <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-super-intelligence\/openai-aims-to-solve-ai-alignment-in-four-years-warp-news.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[1234932],"tags":[],"class_list":["post-1027314","post","type-post","status-publish","format-standard","hentry","category-artificial-super-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1027314"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=1027314"}],"version-history":[{"c
ount":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1027314\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=1027314"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=1027314"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=1027314"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}