{"id":1024353,"date":"2021-08-14T01:30:41","date_gmt":"2021-08-14T05:30:41","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai-weekly-the-road-to-ethical-adoption-of-ai-venturebeat\/"},"modified":"2021-08-14T01:30:41","modified_gmt":"2021-08-14T05:30:41","slug":"ai-weekly-the-road-to-ethical-adoption-of-ai-venturebeat","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/ai-weekly-the-road-to-ethical-adoption-of-ai-venturebeat\/","title":{"rendered":"AI Weekly: The road to ethical adoption of AI &#8211; VentureBeat"},"content":{"rendered":"<p><p>The Transform Technology Summits start October 13th with Low-Code\/No Code: Enabling Enterprise Agility. Register now!<\/p>\n<p>As new principles emerge to guide the development ethical, safe, and inclusive AI, the industry faces self-inflicted challenges. Increasingly, there are many sets of guidelines  the Organization for Economic Cooperation and Developments AI repository alone hosts more than 100 documents  that are vague and high-level. And while a number of tools are available, most come without actionable guidance on how to use, customize, and troubleshoot them.<\/p>\n<p>This is cause for alarm, because as the coauthors of a recent paper write, AIs impacts are hard to assess  especially when they have second- and third-order effects. Ethics discussions tend to focus on futuristic scenarios that may not come to pass and unrealistic generalizations that make the conversations untenable. In particular, companies run the risk of engaging in ethics shopping, ethics washing, or ethics shirking, in which they ameliorate their position with customers to build trust while minimizing accountability.<\/p>\n<p>The points are salient in light of efforts by European Commissions High-level Expert Group on AI (HLEG) and the U.S. 
National Institute of Standards and Technology, among others, to create standards for building trustworthy AI. In a paper, digital ethics researcher Mark Ryan argues that AI isn't the type of thing that has the capacity to be trustworthy, because the category of trust simply doesn't apply to AI. In fact, \"AI can't have the capacity to be trusted as long as it can't be held responsible for its actions,\" he argues.<\/p>\n<p>\"Trust is separate from risk analysis that is solely based on predictions based on past behavior,\" he explains. While reliability and past experience may be used to develop, confer, or reject trust placed in the trustee, they are not the sole or defining characteristics of trust. Though we may trust people that we rely on, it is not presupposed that we do.<\/p>\n<p>Productizing AI responsibly means different things to different companies. For some, responsible implies adopting AI in a manner that's ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, responsible AI promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable, at least in theory.<\/p>\n<p>Recognizing this, organizations must overcome misaligned incentives, disciplinary divides, uneven distributions of responsibility, and other blockers to responsibly adopting AI. Doing so requires an impact assessment framework that's not only broad, flexible, iterative, possible to operationalize, and guided, but highly participatory as well, according to the paper's coauthors. They emphasize the need to shy away from anticipating only the impacts that are assumed to be important and to become more deliberate in deployment choices. 
As a way of normalizing the practice, the coauthors advocate for including these ideas in documentation the same way that topics like privacy and bias are currently covered.<\/p>\n<p>Another paper, this one from researchers at the Data & Society Research Institute and Princeton, posits algorithmic impact assessments as a tool to help AI designers analyze the benefits and potential pitfalls of algorithmic systems. Impact assessments can address the issues of transparency, fairness, and accountability by providing guardrails and accountability forums that can compel developers to make changes to AI systems.<\/p>\n<p>This is easier said than done, of course. Algorithmic impact assessments focus on the effects of AI decision-making, which doesn't necessarily measure harms and may even obscure them; real harms can be difficult to quantify. But if the assessments are implemented with accountability measures, they can perhaps foster technology that respects, rather than erodes, dignity.<\/p>\n<p>As Montreal AI ethics researcher Abhishek Gupta recently wrote in a column: Design decisions for AI systems involve value judgments and optimization choices. Some relate to technical considerations like latency and accuracy; others relate to business metrics. But each requires careful consideration, as each has consequences for the final outcome of the system. To be clear, not everything has to translate into a tradeoff. 
There are often smart reformulations of a problem so that you can meet the needs of your users and customers while also satisfying internal business considerations.<\/p>\n<p>For AI coverage, send news tips to Kyle Wiggers, and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.<\/p>\n<p>Thanks for reading,<\/p>\n<p>Kyle Wiggers<\/p>\n<p>AI Staff Writer<\/p>\n<p>Read the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/venturebeat.com\/2021\/08\/13\/ai-weekly-the-road-to-ethical-adoption-of-ai\/\" title=\"AI Weekly: The road to ethical adoption of AI - VentureBeat\">AI Weekly: The road to ethical adoption of AI - VentureBeat<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Transform Technology Summits start October 13th with Low-Code\/No Code: Enabling Enterprise Agility. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/ai-weekly-the-road-to-ethical-adoption-of-ai-venturebeat\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":9,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-1024353","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1024353"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1024353"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1024353\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1024353"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1024353"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1024353"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}