{"id":1075279,"date":"2023-11-16T15:06:24","date_gmt":"2023-11-16T20:06:24","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/openais-six-member-board-will-decide-when-weve-attained-agi-venturebeat\/"},"modified":"2024-08-18T12:48:23","modified_gmt":"2024-08-18T16:48:23","slug":"openais-six-member-board-will-decide-when-weve-attained-agi-venturebeat","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-general-intelligence\/openais-six-member-board-will-decide-when-weve-attained-agi-venturebeat.php","title":{"rendered":"OpenAI&#8217;s six-member board will decide &#8216;when we&#8217;ve attained AGI&#8217; &#8211; VentureBeat"},"content":{"rendered":"<p>According to OpenAI, the six members of its nonprofit board of directors will determine when the company has attained AGI, which it defines as \"a highly autonomous system that outperforms humans at most economically valuable work.\" Thanks to a for-profit arm that is legally bound to pursue the nonprofit's mission, once the board decides that AGI, or artificial general intelligence, has been reached, such a system will be excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.<\/p>\n<p>But as the very definition of artificial general intelligence is far from agreed-upon, what does it mean to have a half-dozen people deciding whether or not AGI has been reached, for OpenAI and, therefore, the world? And what will the timing and context of that possible future decision mean for its biggest investor, Microsoft?<\/p>\n<p>The information was included in a thread on X over the weekend by OpenAI developer advocate Logan Kilpatrick. 
Kilpatrick was responding to a comment by Microsoft president Brad Smith, who at a recent panel with Meta chief scientist Yann LeCun tried to frame OpenAI as more trustworthy because of its nonprofit status, even though the Wall Street Journal recently reported that OpenAI is seeking a new valuation of up to $90 billion in a sale of existing shares.<\/p>\n<p>Smith said: \"Meta is owned by shareholders. OpenAI is owned by a non-profit. Which would you have more confidence in? Getting your technology from a non-profit or a for-profit company that is entirely controlled by one human being?\"<\/p>\n<p>In his thread, Kilpatrick quoted from the \"Our structure\" page on OpenAI's website, which offers details about OpenAI's complex nonprofit\/capped-profit structure. According to the page, OpenAI's for-profit subsidiary is fully controlled by the OpenAI nonprofit (which is registered in Delaware). While the for-profit subsidiary, OpenAI Global, LLC, which appears to have shifted from the limited partnership OpenAI LP previously announced in 2019, about three years after the founding of the original OpenAI nonprofit, is permitted to make and distribute profit, it is subject to the nonprofit's mission.<\/p>\n<p>It certainly sounds like once OpenAI achieves its stated mission of reaching AGI, Microsoft will be out of the loop, even though at last week's OpenAI Dev Day, OpenAI CEO Sam Altman told Microsoft CEO Satya Nadella that \"I think we have the best partnership in tech... I'm excited for us to build AGI together.\"  
<\/p>\n<p>And in a new interview with the Financial Times, Altman said the OpenAI\/Microsoft partnership was working \"really well\" and that he expected to raise a lot more from the company over time. Asked if Microsoft would keep investing further, Altman said: \"I'd hope so... there's a long way to go, and a lot of compute to build out between here and AGI... training expenses are just huge.\"<\/p>\n<p>\"From the beginning,\" OpenAI's structure details say, \"Microsoft accepted our capped equity offer and our request to leave AGI technologies and governance for the Nonprofit and the rest of humanity.\"<\/p>\n<p>An OpenAI spokesperson told VentureBeat that \"OpenAI's mission is to build AGI that is safe and beneficial for everyone. Our board governs the company and consults diverse perspectives from outside experts and stakeholders to help inform its thinking and decisions. We nominate and appoint board members based on their skills, experience and perspective on AI technology, policy and safety.\"<\/p>\n<p>Currently, the OpenAI nonprofit board of directors is made up of chairman and president Greg Brockman, chief scientist Ilya Sutskever and CEO Sam Altman, as well as non-employees Adam D'Angelo, Tasha McCauley and Helen Toner.<\/p>\n<p>D'Angelo, who is CEO of Quora, as well as tech entrepreneur McCauley and Toner, who is director of strategy for the Center for Security and Emerging Technology at Georgetown University, have all been tied to the Effective Altruism movement, which came under fire earlier this year for its ties to Sam Bankman-Fried and FTX, as well as its dangerous take on AI safety. And OpenAI has long had its own ties to EA: For example, in March 2017, OpenAI received a grant of $30 million from Open Philanthropy, which is funded by Effective Altruists. And Jan Leike, who leads OpenAI's superalignment team, reportedly identifies with the EA movement.  
<\/p>\n<p>The OpenAI spokesperson said that \"none of our board members are effective altruists,\" adding that \"non-employee board members are not effective altruists; their interactions with the EA community are focused on topics related to AI safety or to offer the perspective of someone not closely involved in the group.\"<\/p>\n<p>Suzy Fulton, who offers outsourced general counsel and legal services to startups and emerging companies in the tech sector, told VentureBeat that while in many circumstances it would be unusual to have a board make this AGI determination, OpenAI's nonprofit board owes its fiduciary duty to supporting its mission of providing safe AGI that is broadly beneficial.<\/p>\n<p>\"They believe the nonprofit board's beneficiary is humanity, whereas the for-profit one serves its investors,\" she explained. \"Another safeguard that they are trying to build in is having the board majority independent, where the majority of the members do not have equity in OpenAI.\"<\/p>\n<p>\"Was this the right way to set up an entity structure and a board to make this critical determination? We may not know the answer until their board calls it,\" Fulton said.<\/p>\n<p>Anthony Casey, a professor at the University of Chicago Law School, agreed that having the board decide something as operationally specific as AGI is unusual, but he did not think there is any legal impediment.<\/p>\n<p>\"It should be fine to specifically identify certain issues that must be decided at the board level,\" he said. \"Indeed, if an issue is important enough, corporate law generally imposes a duty on the directors to exercise oversight on that issue, particularly mission-critical issues.\"<\/p>\n<p>Not all experts believe, however, that artificial general intelligence is coming anytime soon, and some question whether it is even possible.  
<\/p>\n<p>According to Merve Hickok, president of the Center for AI and Digital Policy, which filed a complaint with the FTC in March saying the agency should investigate OpenAI and order the company to halt the release of GPT models until necessary safeguards are established, OpenAI, as an organization, suffers from a lack of diversity of perspectives. Its focus on AGI, she explained, has ignored the current impact of AI models and tools.<\/p>\n<p>However, she disagreed with any debate about the size or diversity of the OpenAI board in the context of who gets to determine whether or not OpenAI has attained AGI, saying it distracts from discussions about whether the underlying mission and claim is even legitimate.<\/p>\n<p>\"This would shift the focus, and de facto legitimize the claims that AGI is possible,\" she said.<\/p>\n<p>But does OpenAI's lack of a clear definition of AGI, or whether there will even be one AGI, skirt the issue? For example, an OpenAI blog post from February 2023 said the first AGI will be \"just a point along the continuum of intelligence.\" And in a January 2023 LessWrong interview, CEO Sam Altman said that \"the future I would like to see is where access to AI is super democratized, where there are several AGIs in the world that can help allow for multiple viewpoints and not have anyone get too powerful.\"<\/p>\n<p>Still, it's hard to say what OpenAI's vague definition of AGI will really mean for Microsoft, especially without full details about the operating agreement between the two companies. For example, Casey said, OpenAI's structure and relationship with Microsoft could lead to a big dispute if OpenAI is sincere about its nonprofit mission.<\/p>\n<p>There are a few nonprofits that own for-profits, he pointed out, the most notable being the Hershey Trust. \"But they wholly own the for-profit. In that case, it is easy because there is no minority shareholder to object,\" he explained. \"But here Microsoft's for-profit interests could directly conflict with the non-profit interest of the controlling entity.\"<\/p>\n<p>The cap on profits is easy to implement, he added, but the hard thing is what to do if meeting the maximum profit conflicts with the mission of the nonprofit. Casey added that default rules would say that hitting the profit is the priority and the managers have to put that first (subject to broad discretion under the business judgment rule).<\/p>\n<p>\"Perhaps,\" he continued, \"Microsoft said, 'Don't worry, we are good either way. You don't owe us any duties.' That just doesn't sound like the way Microsoft would negotiate.\"<\/p>\n<p>Visit link:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/venturebeat.com\/ai\/openais-six-member-board-will-decide-when-weve-attained-agi\/\" title=\"OpenAI's six-member board will decide 'when we've attained AGI' - VentureBeat\">OpenAI's six-member board will decide 'when we've attained AGI' - VentureBeat<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. 
Learn more about the opportunities here <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-general-intelligence\/openais-six-member-board-will-decide-when-weve-attained-agi-venturebeat.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[1234933],"tags":[],"class_list":["post-1075279","post","type-post","status-publish","format-standard","hentry","category-artificial-general-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1075279"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=1075279"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1075279\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=1075279"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=1075279"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=1075279"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","t
emplated":true}]}}