Europe has some of the most progressive, human-centric artificial intelligence governance policies in the world. Compared to the heavy-handed government oversight in China or the Wild West, anything-goes approach in the US, the EU's strategy is designed to stoke academic and corporate innovation while also protecting private citizens from harm and overreach. But that doesn't mean it's perfect.
In 2018, the European Commission launched its European AI Alliance initiative. The alliance exists so that various stakeholders can weigh in and be heard as the EU considers its ongoing policies governing the development and deployment of AI technologies.
Since 2018, more than 6,000 stakeholders have participated in the dialogue through various venues, including online forums and in-person events.
The commentary, concerns, and advice provided by those stakeholders have been considered by the EU's High-Level Expert Group on Artificial Intelligence, which ultimately created four key documents that serve as the basis for the EU's policy discussions on AI:
1. Ethics Guidelines for Trustworthy AI
2. Policy and Investment Recommendations for Trustworthy AI
3. Assessment List for Trustworthy AI
4. Sectoral Considerations on the Policy and Investment Recommendations
This article focuses on item number one: the EU's Ethics Guidelines for Trustworthy AI.
Published in 2019, this document lays out the bare-bones ethical concerns and best practices for the EU. While I wouldn't exactly call it a living document, it is supported by a continuously updated reporting system via the European AI Alliance initiative.
The Ethics Guidelines for Trustworthy AI provides a set of seven key requirements that AI systems should meet in order to be deemed trustworthy.
Per the document:
AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
Neural's rating: poor.
Human-in-the-loop, human-on-the-loop, and human-in-command are all wildly subjective approaches to AI governance that almost always rely on marketing strategies, corporate jargon, and disingenuous descriptions of how AI models work in order to appear efficacious.
Essentially, the "human-in-the-loop" myth holds that an AI system is safe as long as a human is ultimately responsible for pushing the button, or authorizing the execution of a machine learning function, that could potentially have an adverse effect on humans.
The problem: Human-in-the-loop relies on competent humans at every level of the decision-making process to ensure fairness. Unfortunately, studies show that humans are easily manipulated by machines.
Were also prone to ignore warnings whenever they become routine.
Think about it: when's the last time you read all the fine print on a website before agreeing to the terms presented? How often do you ignore the check-engine light on your car, or a "time to update" alert on software that's still functioning properly?
Automating programs or services that affect human outcomes under the pretense that having a human in the loop is enough to prevent misalignment or misuse is, in this author's opinion, a feckless approach to regulation that gives businesses carte blanche to develop harmful models as long as they tack on a human-in-the-loop requirement for usage.
As an example of what could go wrong, ProPublica's award-winning Machine Bias investigation laid bare the propensity of the human-in-the-loop paradigm to introduce additional bias by demonstrating how AI used to recommend criminal sentences can perpetuate and amplify racism.
A solution: the EU should do away with the idea of creating "proper oversight mechanisms" and instead focus on policies that regulate the use and deployment of black-box AI systems, preventing their deployment in situations where human outcomes might be affected unless there's a human authority who can be held ultimately responsible.
Per the document:
AI systems need to be resilient and secure. They need to be safe, ensuring a fall back plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that also unintentional harm can be minimized and prevented.
Neural's rating: needs work.
Without a definition of "safe," the whole statement is fluff. Furthermore, "accuracy" is a malleable term in the AI world that almost always refers to arbitrary benchmarks that don't translate beyond the laboratory.
A solution: the EU should set a bare-minimum requirement that AI models deployed in Europe with the potential to affect human outcomes must demonstrate equality. An AI model that achieves lower reliability or accuracy on tasks involving minorities should be considered neither safe nor reliable.
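To make that concrete, here is a minimal sketch of what such an equality audit could look like in code. The group labels, the parity threshold, and the function names are all illustrative assumptions, not anything specified by the EU or any regulator:

```python
# Illustrative per-group accuracy audit. Group labels and the
# max_gap threshold are hypothetical, not from any EU standard.

def group_accuracies(y_true, y_pred, groups):
    """Return accuracy broken out by demographic group."""
    totals, correct = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] = totals.get(group, 0) + 1
        if truth == pred:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def passes_parity(y_true, y_pred, groups, max_gap=0.02):
    """Fail if the best- and worst-served groups differ by more than max_gap."""
    accs = group_accuracies(y_true, y_pred, groups)
    return max(accs.values()) - min(accs.values()) <= max_gap
```

A regulator could then require that a model clear `passes_parity` on a representative evaluation set before deployment; the interesting policy question is what the acceptable gap should be, not whether it can be measured.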
Per the document:
Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
Neural's rating: good, but could be better.
Luckily, the General Data Protection Regulation (GDPR) does most of the heavy lifting here. However, the terms "quality" and "integrity" are highly subjective, as is "legitimised access."
A solution: the EU should define a standard under which data must be obtained with consent and verified by humans, ensuring that the databases used to train models contain only data that is properly labeled and used with the permission of the person or group who generated it.
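A standard like that would, in practice, mean gating training sets on consent and verification metadata. The sketch below shows the idea; the field names (`consent`, `verified_by`) are hypothetical and not drawn from the GDPR or any EU specification:

```python
# Illustrative consent-gated training-data filter. The record fields
# "consent" and "verified_by" are hypothetical, not from any EU standard.

def filter_training_records(records):
    """Keep only records with documented consent and a named human verifier."""
    return [
        r for r in records
        if r.get("consent") is True and r.get("verified_by")
    ]
```

The point of such a filter isn't technical sophistication; it's that consent and human verification become structural preconditions for a record ever reaching a training pipeline.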
Per the document:
The data, system and AI business models should be transparent. Traceability mechanisms can help achieving this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system's capabilities and limitations.
Neural's rating: this is hot garbage.
Only a small percentage of AI models lend themselves to transparency. The majority of AI models in production today are black-box systems that, by the very nature of their architecture, produce outputs using far too many steps of abstraction, deduction, or conflation for a human to parse.
In other words, a given AI system might use billions of different parameters to produce an output. In order to understand why it produced that particular outcome instead of a different one, we'd have to review each of those parameters step-by-step so that we could come to the exact same conclusion as the machine.
A solution: the EU should adopt a strict policy preventing the deployment of opaque or black-box artificial intelligence systems whose outputs could affect human outcomes, unless a designated human authority can be held fully accountable for unintended negative outcomes.
Per the document:
Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
Neural's rating: poor.
In order for AI models to involve relevant stakeholders throughout their entire life cycle, they'd need to be trained on data from diverse sources and developed by diverse teams. The reality is that STEM is dominated by white, straight, cis men, and there are myriad peer-reviewed studies demonstrating how that simple, demonstrable fact makes it almost impossible to produce many types of AI models without bias.
A solution: unless the EU has a method for solving the lack of minorities in STEM, it should instead focus on creating policies that prevent businesses and individuals from deploying AI models that produce different outcomes for minorities.
Per the document:
AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.
Neural's rating: great. No notes!
Per the document:
Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.
Neural's rating: good, but could be better.
There's currently no political consensus as to who's responsible when AI goes wrong. If the EU's airport facial recognition systems, for example, mistakenly identify a passenger, and the resulting inquiry causes them financial harm (they miss their flight and any opportunities stemming from their travel) or unnecessary mental anguish, there's nobody who can be held responsible for the mistake.
The employees following procedure based on the AI's flagging of a potential threat are just doing their jobs. And the developers who trained the systems are typically beyond reproach once their models go into production.
A solution: the EU should create a policy specifically dictating that humans must always be held accountable when an AI system causes an unintended or erroneous outcome for another human. The EU's current policy and strategy encourage a "blame the algorithm" approach that benefits corporate interests more than citizens' rights.
While the above commentary may be harsh, I believe the EU's AI strategy is a light leading the way. However, it's obvious that the EU's desire to compete with Silicon Valley in the AI sector has pushed the bar for human-centric technology a little further towards corporate interests than the union's other technology policy initiatives have.
The EU wouldn't sign off on an aircraft that was mathematically proven to crash more often when Black people, women, or queer people were passengers than when white men were onboard. It shouldn't allow AI developers to get away with deploying models that function that way either.
Read the original:
A critical review of the EU's 'Ethics Guidelines for Trustworthy AI' - TNW