{"id":1123094,"date":"2024-03-18T11:29:50","date_gmt":"2024-03-18T15:29:50","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/companies-like-morgan-stanley-are-already-making-early-versions-of-agi-observer\/"},"modified":"2024-03-18T11:29:50","modified_gmt":"2024-03-18T15:29:50","slug":"companies-like-morgan-stanley-are-already-making-early-versions-of-agi-observer","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/companies-like-morgan-stanley-are-already-making-early-versions-of-agi-observer\/","title":{"rendered":"Companies Like Morgan Stanley Are Already Making Early Versions of AGI &#8211; Observer"},"content":{"rendered":"<p><p>Companies like Morgan Stanley are already laying the groundwork for so-called organizational AGI. Maxim Tolchinskiy\/Unsplash<\/p>\n<p>Whether it's being theorized or possibly, maybe, actualized, artificial general intelligence, or AGI, has become a frequent topic of conversation in a world where people are now routinely talking with machines. But there's an inherent problem with the term AGI, one rooted in perception. For starters, assigning intelligence to a system instantly anthropomorphizes it, adding to the perception that there's the semblance of a human mind operating behind the scenes. This notion of a mind deepens the perception that there's some single entity manipulating all of this human-grade thinking.<\/p>\n<p>This problematic perception is compounded by the fact that large language models (LLMs) like ChatGPT, Bard, Claude and others make a mockery of the Turing test. They seem very human indeed, and it's not surprising that people have turned to LLMs as therapists, friends and lovers (sometimes with disastrous results). 
Does the humanness of their predictive abilities amount to some kind of general intelligence?<\/p>\n<p>By some estimates, the critical aspects of AGI have already been achieved by the LLMs mentioned above. A recent article in Noema by Blaise Agüera y Arcas (vice president and fellow at Google Research) and Peter Norvig (a computer scientist at the Stanford Institute for Human-Centered A.I.) argues that today's frontier models "perform competently even on novel tasks they were not trained for, crossing a threshold that previous generations of A.I. and supervised deep learning systems never managed. Decades from now, they will be recognized as the first true examples of AGI."<\/p>\n<p>For others, including OpenAI, AGI is still out in front of us. "We believe our research will eventually lead to artificial general intelligence," their research page proclaims, "a system that can solve human-level problems."<\/p>\n<p>Whether nascent forms of AGI are already here or are still a few years away, it's likely that businesses attempting to harness these powerful technologies might create a miniature version of AGI. Businesses need technology ecosystems that can mimic human intelligence with the cognitive flexibility to solve increasingly complex problems. This ecosystem needs to orchestrate using existing software, understand routine tasks, contextualize massive amounts of data, learn new skills, and work across a wide range of domains. LLMs on their own can only perform a fraction of this work; they seem most useful as part of a conversational interface that lets people talk to technology ecosystems. There are strategies being used right now by leading enterprise companies to move in this direction, toward something we might call organizational AGI.<\/p>\n<p>There are legitimate reasons to be wary of yet another unsolicited tidbit in the A.I. terms slush pile. 
Regardless of what we choose to call the eventual outcome of these activities, there are currently organizations using LLMs as an interface layer. They are creating ecosystems where users can converse with software through channels like rich web chat (RWC), obscuring the machinations happening behind the scenes. This is difficult work, but the payoff is huge: rather than pogo-sticking between apps to get something done on a computer, customers and employees can ask technology to run tasks for them. There's the immediate and tangible benefit of people eliminating tedious tasks from their lives. Then there's the long-term benefit of a burgeoning ecosystem where employees and customers are interacting with digital teammates that can perform automations leveraging all forms of data across an organization. This is an ecosystem that starts to take the form of a digital twin.<\/p>\n<p>McKinsey describes a digital twin as "a virtual replica of a physical object, person, or process that can be used to simulate its behavior to better understand how it works in real life." They elaborate to say that a digital twin within an ecosystem similar to what I've described can become an enterprise metaverse, "a digital and often immersive environment that replicates and connects every aspect of an organization to optimize simulations, scenario planning and decision making."<\/p>\n<p>With respect to what I said earlier about anthropomorphizing technology, the digital teammates within this kind of ecosystem are an abstraction, but I think of them as intelligent digital workers, or IDWs. IDWs are analogous to a collection of skills. These skills come from shared libraries, and skills can be adapted and reused in multitudes of ways. Skills are able to take advantage of all the information piled up inside the organization, with LLMs mining unstructured data, like emails and recorded calls. 
<\/p>\n<p>This data becomes more meaningful thanks to graph technology, which is adept at creating indexes of skills, systems and data sources. Graph goes beyond mere listing and includes how these elements relate to and interact with each other. One of the core strengths of graph technology is its ability to represent and analyze relationships. For a network of IDWs, understanding how different components are interlinked is crucial for efficient orchestration and data flow.<\/p>\n<p>Generative tools like LLMs and graph technology can work in tandem to propel the journey toward digital twinhood, or organizational AGI. Twins can encompass all aspects of the business, including events, data, assets, locations, personnel and customers. Digital twins are likely to be low-fidelity at first, offering a limited view of the organization. As more interactions and processes take place within the org, however, the fidelity of the digital twin becomes higher. An organization's technology ecosystem not only understands the current state of the organization; it can also adapt and respond to new challenges autonomously.<\/p>\n<p>In this sense, every part of an organization represents an intelligent awareness that comes together around common goals. In my mind, it mirrors the nervous system of a cephalopod. As Peter Godfrey-Smith writes in his book Other Minds (2016, Farrar, Straus and Giroux), in an octopus, "the majority of neurons are in the arms themselves, nearly twice as many in total as in the central brain. The arms have their own sensors and controllers. They have not only the sense of touch but also the capacity to sense chemicals, to smell or taste. Each sucker on an octopus's arm may have 10,000 neurons to handle taste and touch. Even an arm that has been surgically removed can perform various basic motions, such as reaching and grasping." 
<\/p>\n<p>A world teeming with self-aware brands would be quite hectic. According to Gartner, by 2025, generative A.I. will be "a workforce partner within 90 percent of companies worldwide." This doesn't mean that all of these companies will be surging toward organizational AGI, however. Generative A.I., and LLMs in particular, can't meet an organization's automation needs on its own. Giving an entire workforce access to GPTs or Copilot won't move the needle much in terms of efficiency. It might help people write better emails faster, but it takes a great deal of work to make LLMs reliable resources for user queries.<\/p>\n<p>Their hallucinations have been well documented, and training them to provide trustworthy information is a herculean effort. Jeff McMillan, chief analytics and data officer at Morgan Stanley (MS), told me it took his team nine months to train GPT-4 on more than 100,000 internal documents. This work began before the launch of ChatGPT, and Morgan Stanley had the advantage of working directly with people at OpenAI. They were able to create a personal assistant that the investment bank's advisors can chat with, tapping into a large portion of its collective knowledge. "Now you're talking about wiring it up to every system," he said with regard to creating the kinds of ecosystems required for organizational A.I. "I don't know if that's five years or three years or 20 years, but what I'm confident of is that that is where this is going."<\/p>\n<p>Companies like Morgan Stanley that are already laying the groundwork for so-called organizational AGI have a massive advantage over competitors that are still trying to decide how to integrate LLMs and adjacent technologies into their operations. So rather than a world awash in self-aware organizations, there will likely be a few market leaders in each industry. 
<\/p>\n<p>This relates to broader AGI in the sense that these intelligent organizations are going to have to interact with other intelligent organizations. It's hard to envision exactly what depth of information sharing will occur between these elite orgs, but over time, these interactions might play a role in bringing about AGI, or the singularity, as it's also called.<\/p>\n<p>Ben Goertzel, the founder of SingularityNET and the person often credited with creating the term, makes a compelling case that AGI should be decentralized, relying on open-source development as well as decentralized hosting and mechanisms for interconnected A.I. systems to learn from and teach one another.<\/p>\n<p>SingularityNET's DeAGI Manifesto states, "There is a broad desire for AGI to be ethical and beneficial for all humanity; the most straightforward way to achieve this seems to be for AGI to grow up in the context of serving and being guided by all humanity, or as good an approximation as can be mustered."<\/p>\n<p>Having AGI manifest in part from the aggressive activities of for-profit enterprises is dicey. As Goertzel pointed out, "You get into questions [about] who owns and controls these potentially spooky and configurable human-like robot assistants and to what extent is their fundamental motivation to help people as opposed to sell people stuff or brainwash people into some corporate government media advertising order."<\/p>\n<p>There's a strong case to be made that an allegiance to profit will be the undoing of the promise these technologies hold for humanity at large. Weirdly, the Skynet scenario in Terminator, where a system becomes self-aware, determines humanity is a grave threat, and exterminates all life, assumes that the system, isolated to a single company, has been programmed to have a survival instinct. 
It would have to be told that survival at all costs is its bottom line, which suggests we should be extra cautious about developing these systems within environments where profit above all else is the dictum.<\/p>\n<p>Maybe the most important thing is keeping this technology in the hands of humans and pushing forward the idea that the myriad technologies associated with A.I. should only be used in ways that are beneficial to humanity as a whole, that don't exploit marginalized groups, and that aren't propagating synthesized bias at scale.<\/p>\n<p>When I broached some of these ideas about organizational AGI to Jaron Lanier, co-creator of VR technology as we know it and Microsoft's Octopus (Office of the Chief Technology Officer Prime Unifying Scientist), he told me my vocabulary was nonsensical and that my thinking wasn't compatible with his perception of technology. Regardless, it felt like we agreed on core aspects of these technologies.<\/p>\n<p>"I don't think of A.I. as creating new entities. I think of it as a collaboration between people," Lanier said. "That's the only way to think about using it well. To me, it's all a form of collaboration. The sooner we see that, the sooner we can design useful systems. To me, there's only people."<\/p>\n<p>In that sense, AGI is yet another tool, way down the spectrum from the rocks our ancestors used to smash tree nuts. It's a manifestation of our ingenuity and our desires. Are we going to use it to smash every tree nut on the face of the earth, or are we going to use it to find ways to grow enough tree nuts for everyone to enjoy? The trajectories we set in these early moments are of grave importance.<\/p>\n<p>"We're in the Anthropocene. We're in an era where our actions are affecting everything in our biological environment," Blaise Agüera y Arcas, co-author of the Noema article, told me. 
"The Earth is finite and without the kind of solidarity where we start to think about the whole thing as our body, as it were, we're kind of screwed," he said.<\/p>\n<\/p>\n<p>Josh Tyson is the co-author of Age of Invisible Machines, a book about conversational A.I., and Director of Creative Content at OneReach.ai. He co-hosts two podcasts: Invisible Machines and N9K.<\/p>\n<p>See original here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/observer.com\/2024\/03\/organizational-artificial-general-intelligence\" title=\"Companies Like Morgan Stanley Are Already Making Early Versions of AGI - Observer\">Companies Like Morgan Stanley Are Already Making Early Versions of AGI - Observer<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Companies like Morgan Stanley are already laying the groundwork for so-called organizational AGI. Maxim Tolchinskiy\/Unsplash Whether it's being theorized or possibly, maybe, actualized, artificial general intelligence, or AGI, has become a frequent topic of conversation in a world where people are now routinely talking with machines. 
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/companies-like-morgan-stanley-are-already-making-early-versions-of-agi-observer\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1214666],"tags":[],"class_list":["post-1123094","post","type-post","status-publish","format-standard","hentry","category-artificial-general-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1123094"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1123094"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1123094\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1123094"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1123094"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1123094"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}