{"id":1067845,"date":"2024-03-02T02:39:12","date_gmt":"2024-03-02T07:39:12","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/causal-ai-ai-confesses-why-it-did-what-it-did-informationweek\/"},"modified":"2024-08-18T11:39:56","modified_gmt":"2024-08-18T15:39:56","slug":"causal-ai-ai-confesses-why-it-did-what-it-did-informationweek","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/machine-learning\/causal-ai-ai-confesses-why-it-did-what-it-did-informationweek.php","title":{"rendered":"Causal AI: AI Confesses Why It Did What It Did &#8211; InformationWeek"},"content":{"rendered":"<p>The holy grail in AI development is explainable AI, a means of revealing the decision-making process an AI model used to arrive at its output. In other words, we humans want to know why the AI did what it did before we stake our careers, lives, or businesses on its outputs.<\/p>\n<p>Causal AI requires models to explain their predictions. &#8220;In its simplest form, the explanation is a graph representing a cause-and-effect chain,&#8221; says George Williams, GSI Technology's director of ML, data science and embedded AI. &#8220;In its modern form, it's a human-understandable explanation in the form of text,&#8221; he says.<\/p>\n<p>Typically, AI models have no auditable trail of their decision-making, no self-reporting mechanisms, and no way to peer behind the cloaking curtains of increasingly complicated algorithms.<\/p>\n<p>&#8220;Traditional predictive AI can be likened to a black box where it's nearly impossible to tell what drove an individual result,&#8221; says Phil Johnson, VP of data solutions at mPulse.<\/p>\n<p>As a result, humans can trust hardly anything an AI model delivers. 
The output could be a hallucination -- a lie, fabrication, miscalculation, or a fairytale, depending on how generous you want to be in labeling such errors and what type of AI model is being used.<\/p>\n<p>&#8220;GenAI models still have the unfortunate side effect of hallucinating, or making up facts, sometimes. This means they can also hallucinate their explanations. Hallucination mitigation is a rapidly evolving area of research, and it can be difficult for organizations to keep up with the latest research\/techniques,&#8221; says Williams.<\/p>\n<p>Related: 5 Ways to Use AI You May Have Never Even Considered<\/p>\n<p>On the other hand, that same AI model could reveal a profound truth humans cannot see because their view is obscured by huge volumes of data.<\/p>\n<p>Just as the proverbial army of monkeys pounding on keyboards may one day produce a great novel, crowds of humans may one day trip across an important insight buried in ginormous stores of data. Or we can lean on the speed of AI to find a useful answer now and focus on teaching it to reveal how it came to that conclusion. The latter is far more manageable than the former.<\/p>\n<p>If one gets anything out of the experience of working with AI, it should be the rediscovery of the marvel that is the human brain. The more we fashion AI after our own brains, the more ways we find it a mere shadow of our own astounding capabilities.<\/p>\n<p>And that's not a diss on AI, which is a truly astounding invention and itself a testament to human capabilities. Nonetheless, the creators truly want to know what the creation is actually up to.<\/p>\n<p>Related: What to Know About Machine Customers<\/p>\n<p>&#8220;Most AI\/ML is correlational in nature, not causal,&#8221; explains David Guarrera, EY Americas generative AI leader. &#8220;So, you can't say much about the direction of the effect.&#8221; 
&#8220;If age and salary correlate, you don't technically know if being older CAUSES you to have more money or money CAUSES you to age,&#8221; he says.<\/p>\n<p>Most of us would intuitively agree that it's the lack of money that causes one to age, but we can't reliably depend on our intuition to evaluate the AI's output. Neither can we rely on AI to explain itself -- mostly because it wasn't designed to do so.<\/p>\n<p>&#8220;In many advanced machine learning models, such as deep learning, massive amounts of data are ingested to create a model,&#8221; says Judith Hurwitz, chief evangelist at Geminos Software and author of Causal Artificial Intelligence: The Next Step in Effective Business AI. &#8220;One of the key issues with this approach to AI is that the models created by the data cannot be easily understood by the business. They are, therefore, not explainable. In addition, it is easy to create a biased result depending on the quality of the data used to create the model,&#8221; she says.<\/p>\n<p>This issue is commonly referred to as AI's black box. Breaking into the innards of an AI model to retrieve the details of its decision-making is no small task, technically speaking.<\/p>\n<p>Related: Implementing Generative AI for Business Success<\/p>\n<p>&#8220;This involves the use of causal inference theories and graphical models, such as directed acyclic graphs (DAGs), which help in mapping out and understanding the causal relationships between variables,&#8221; says Ryan Gross, head of data and applications at Caylent. &#8220;By manipulating one variable, causal AI can observe and predict how this change affects other variables, thereby identifying cause-and-effect relationships.&#8221;<\/p>\n<p>Traditional AI models are fixed in time and understand nothing. Causal AI is a different animal entirely.<\/p>\n<p>Causal AI is dynamic, whereas comparable tools are static. 
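<\/p>\n<p>The DAG manipulation Gross describes boils down to an intervention: force one variable to a value and watch how variables downstream of it respond. Below is a minimal sketch of that idea in plain Python, with an invented two-variable graph X -> Y and invented coefficients -- an illustration of the concept, not any vendor's implementation.<\/p>\n

```python
# A two-variable causal graph X -> Y and a do()-style intervention:
# pin X to a value, then compare Y's average with and without the pin.
import random

def average_y(do_x=None, n=20000, seed=0):
    # Sample the structural equations; do_x forces X (an intervention).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = do_x if do_x is not None else rng.gauss(0, 1)
        y = 2.0 * x + rng.gauss(0, 0.1)  # structural equation: X causes Y
        total += y
    return total / n

baseline = average_y()            # X varies naturally
intervened = average_y(do_x=1.0)  # do(X = 1.0)
# The shift in Y's average is close to the causal coefficient, 2.0.
print(round(intervened - baseline, 1))
```

\n<p>Intervening on Y instead would leave X's average untouched, which is how this kind of probing identifies the direction of the arrow, not just its existence.<\/p>\n<p>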
&#8220;Causal AI represents how an event impacts the world later. Such a model can be queried to find out how things might work,&#8221; says Brent Field at Infosys Consulting. &#8220;On the other hand, traditional machine learning models build a static representation of what correlates with what. They tend not to work well when the world changes, something statisticians call nonergodicity,&#8221; he says.<\/p>\n<p>It's important to grok why this one point of nonergodicity makes such a crucial difference to almost everything we do.<\/p>\n<p>&#8220;Nonergodicity is everywhere. It's one reason why money managers generally underperform S&amp;P 500 index funds. It's why election polls are often off by many percentage points. Commercial real estate and global logistics models stopped working about March 15, 2020, because COVID caused a massive supply-side economic shock that is still reverberating through the world economy,&#8221; Field explains.<\/p>\n<p>Without knowing the cause of an event or potential outcome, the knowledge we extract from AI is largely backward facing even when it is forward predicting. Outputs based on historical data and events alone are by nature handicapped and sometimes useless. Causal AI seeks to remedy that.<\/p>\n<p>Causal models allow humans to be much more involved in and aware of the decision-making process. &#8220;Causal models are explainable and debuggable by default -- meaning humans can trust and verify results -- leading to higher trust,&#8221; says Joseph Reeve, software engineering manager at Amplitude. &#8220;Causal models also allow human expertise to be leveraged through model design when training a model, as opposed to traditional models that need to be trained from scratch, without human guidance,&#8221; he says.<\/p>\n<p>Can causal AI be applied even to GenAI models? In a word, yes. 
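<\/p>\n<p>Field's nonergodicity point can also be sketched. Assuming a toy linear world whose rule flips after a structural break (all numbers here are invented for the illustration), a model frozen on the old regime keeps predicting the old pattern:<\/p>\n

```python
# A regression fit on regime A keeps predicting regime A's pattern
# after the world changes -- the failure mode Field calls nonergodicity.
import random

rng = random.Random(42)

# Regime A: y = 3x plus noise. Fit a least-squares slope through the origin.
xs = [rng.uniform(0, 10) for _ in range(500)]
ys = [3.0 * x + rng.gauss(0, 1) for x in xs]
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Regime B: a structural break flips the relationship to y = -x.
new_xs = [rng.uniform(0, 10) for _ in range(500)]
new_ys = [-1.0 * x + rng.gauss(0, 1) for x in new_xs]

mse_a = sum((slope * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
mse_b = sum((slope * x - y) ** 2 for x, y in zip(new_xs, new_ys)) / len(new_xs)
print(mse_b > 100 * mse_a)  # True: the frozen model fails in the new regime
```

\n<p>A causal model that represented the break itself -- the shock as a cause -- could be queried about the new regime instead of blindly extrapolating the old one. 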
<\/p>\n<p>&#8220;We could use causal AI to analyze a large amount of data and pair it with GenAI to visualize the analysis using graphics or explanations,&#8221; says Mohamed Abdelsadek, EVP of data, insights, and analytics at Mastercard. &#8220;Or, on the flip side, GenAI could be engaged to identify the common analysis questions at the beginning, such as the pictures of damage caused by a natural event, and causal AI would be brought in to execute the data processing and analysis,&#8221; he says.<\/p>\n<p>There are other ways causal AI and GenAI can work together, too.<\/p>\n<p>&#8220;Generative AI can be an effective tool to support causal AI. However, keep in mind that GenAI is a tool, not a solution,&#8221; says Geminos Software's Hurwitz. &#8220;One of the emerging ways that GenAI can be hugely beneficial in causal AI is to use these tools to analyze subject matter information stored in both structured and unstructured formats. One of the essential areas in creating an effective causal AI solution is what is called causal discovery -- determining what data is needed to understand cause and effect,&#8221; she says.<\/p>\n<p>Does this mean that causal AI is a panacea for all of AI or that it is an infallible technology?<\/p>\n<p>&#8220;Causal AI is a nascent field. Because the technology is not completely developed yet, the error rates tend to be higher than expected, especially in domains that don't have sufficient training for the AI system,&#8221; says Flavio Villanustre, global chief information security officer of LexisNexis Risk Solutions. &#8220;However, you should expect this to improve significantly with time.&#8221;<\/p>\n<p>So where does causal AI stand in the scheme of things?<\/p>\n<p>In the 2022 Gartner Hype Cycle, causal AI was deemed more mature than, and ahead of, generative AI, says Ed Watal, founder and principal at Intellibus. 
&#8220;However, unlike generative AI, causal AI has not yet found the mainstream use case and adoption that tools like ChatGPT have provided for generative AI models like GPT,&#8221; he says.<\/p>\n<p>Read this article:<br \/>\n<a target=\"_blank\" href=\"https:\/\/www.informationweek.com\/machine-learning-ai\/causal-ai-ai-confesses-why-it-did-what-it-did-\" title=\"Causal AI: AI Confesses Why It Did What It Did - InformationWeek\" rel=\"noopener\">Causal AI: AI Confesses Why It Did What It Did - InformationWeek<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> The holy grail in AI development is explainable AI, which is a means to reveal the decision-making processes that the AI model used to arrive at its output. In other words, we humans want to know why the AI did what it did before we staked our careers, lives, or businesses on its outputs. Causal AI requires models to explain their prediction <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/machine-learning\/causal-ai-ai-confesses-why-it-did-what-it-did-informationweek.php\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[1231415],"tags":[],"class_list":["post-1067845","post","type-post","status-publish","format-standard","hentry","category-machine-learning"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1067845"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=1067845"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1067845\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=1067845"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=1067845"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=1067845"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}