{"id":1075281,"date":"2023-12-27T02:38:19","date_gmt":"2023-12-27T07:38:19","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/forget-dystopian-scenarios-ai-is-pervasive-today-and-the-risks-are-often-hidden-the-good-men-project\/"},"modified":"2024-08-18T12:48:24","modified_gmt":"2024-08-18T16:48:24","slug":"forget-dystopian-scenarios-ai-is-pervasive-today-and-the-risks-are-often-hidden-the-good-men-project","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-general-intelligence\/forget-dystopian-scenarios-ai-is-pervasive-today-and-the-risks-are-often-hidden-the-good-men-project.php","title":{"rendered":"Forget Dystopian Scenarios  AI Is Pervasive Today, and the Risks Are Often Hidden &#8211; The Good Men Project"},"content":{"rendered":"<p><p>    ByAnjana Susarla, Michigan State    University  <\/p>\n<p>    The turmoil at ChatGPT-maker OpenAI, bookended by the board of    directors     firing high-profile CEO Sam Altman on Nov. 17, 2023, and        rehiring him just four days later, has put a spotlight on    artificial intelligence safety and concerns about the rapid    development of artificial general intelligence, or AGI. AGI is    loosely defined as     human-level intelligence across a range of tasks.  <\/p>\n<p>    The OpenAI board stated that     Altmans termination was for lack of candor, but    speculation has centered on a rift between Altman and members    of the board over concerns that OpenAIs remarkable growth     products such as ChatGPT and Dall-E have     acquired hundreds of millions of users worldwide  has        hindered the companys ability to focus on     catastrophic risks posed by AGI.  <\/p>\n<p>    OpenAIs goal of developing AGI has become entwined with the    idea of     AI acquiring superintelligent capabilities and the need to    safeguard against the technology being misused or going rogue.    But for now, AGI and its attendant risks are speculative.    Task-specific forms of AI, meanwhile, are very real, have    become widespread and often fly under the radar.  <\/p>\n<p>    As a     researcher of information systems and responsible AI, I    study how these everyday algorithms work  and how they can    harm people.  <\/p>\n<p>    AI plays a visible part in many peoples daily lives, from face    recognition unlocking your phone to speech recognition powering    your digital assistant. It also plays roles you might be    vaguely aware of  for example, shaping your social media and    online shopping sessions, guiding your video-watching choices    and matching    you with a driver in a ride-sharing service.  <\/p>\n<p>    AI also affects your life in ways that might completely escape    your notice. If youre applying for a job,     many employers use AI in the hiring process. Your bosses    might be using it to identify employees     who are likely to quit. If youre applying for a loan, odds    are your bank is using AI to decide whether to grant it. If    youre being treated for a medical condition, your health care    providers might use it to assess your    medical images. And if you know someone caught up in the    criminal justice system, AI could well play a role in     determining the course of their life.  <\/p>\n<p>    Many of the AI systems that fly under the radar have biases    that can cause harm. For example, machine learning methods use    inductive    logic, which starts with a set of premises, to generalize    patterns from training data. 
The use of predictive methods in areas ranging from health care to child welfare could exhibit biases such as cohort bias that lead to unequal risk assessments across different groups in society. Even when legal practices prohibit discrimination based on attributes such as race and gender, for example in consumer lending, proxy discrimination can still occur. This happens when algorithmic decision-making models do not use characteristics that are legally protected, such as race, and instead use characteristics that are highly correlated or connected with the legally protected characteristic, like neighborhood. Studies have found that risk-equivalent Black and Latino borrowers pay significantly higher interest rates on government-sponsored enterprise securitized and Federal Housing Administration insured loans than white borrowers.
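Proxy discrimination can be illustrated with the same kind of toy setup, again with invented data and an assumed correlation strength. The model below never sees the protected attribute, yet a correlated feature (neighborhood) lets it recover much of the same disparity.

```python
# Sketch of proxy discrimination on synthetic data: the model is trained
# without the protected attribute, but a correlated proxy carries its signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

protected = rng.integers(0, 2, n)  # legally protected attribute (synthetic)
# Assumption for illustration: neighborhood matches the protected
# attribute 90% of the time.
neighborhood = np.where(rng.random(n) < 0.9, protected, 1 - protected)
income = rng.normal(0, 1, n)

# Historical outcomes were biased against the protected group.
logit = income - 1.2 * protected
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# "Fairness through unawareness": drop the protected column, then train.
X = np.column_stack([income, neighborhood])
model = LogisticRegression().fit(X, approved)
scores = model.predict_proba(X)[:, 1]

print(f"mean approval score, group 0: {scores[protected == 0].mean():.2f}")
print(f"mean approval score, group 1: {scores[protected == 1].mean():.2f}")  # still lower
```

This is why "we removed race from the model" is not, by itself, evidence that a lending model treats risk-equivalent borrowers equally.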
Another form of bias occurs when decision-makers use an algorithm differently from how the algorithm's designers intended. In a well-known example, a neural network learned to associate asthma with a lower risk of death from pneumonia. This was because asthmatics with pneumonia are traditionally given more aggressive treatment that lowers their mortality risk compared to the overall population. However, if the outcome from such a neural network is used in hospital bed allocation, then those with asthma who are admitted with pneumonia would be dangerously deprioritized.

Biases from algorithms can also result from complex societal feedback loops. For example, when predicting recidivism, authorities attempt to predict which people convicted of crimes are likely to commit crimes again. But the data used to train predictive algorithms is actually about who is likely to get re-arrested.

The Biden administration's recent executive order and enforcement efforts by federal agencies such as the Federal Trade Commission are the first steps in recognizing and safeguarding against algorithmic harms.

And though large language models, such as the GPT-3 that powers ChatGPT, and multimodal large language models, such as GPT-4, are steps on the road toward artificial general intelligence, they are also algorithms people are increasingly using in school, work and daily life. It's important to consider the biases that result from widespread use of large language models.

For example, these models could exhibit biases resulting from negative stereotyping involving gender, race or religion, as well as biases in representation of minorities and disabled people. As these models demonstrate the ability to outperform humans on tests such as the bar exam, I believe that they require greater scrutiny to ensure that AI-augmented work conforms to standards of transparency, accuracy and source crediting, and that stakeholders have the authority to enforce such standards.

Ultimately, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about understanding who is vulnerable when algorithmic decision-making is ubiquitous.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.