New Draft Rules on the Use of Artificial Intelligence

Posted: May 18, 2021 at 4:23 am

On 21 April 2021, the European Commission published draft regulations (AI Regulations) governing the use of artificial intelligence (AI). The European Parliament and the member states have not yet adopted these proposed AI Regulations.

The proposed AI Regulations take a risk-based approach, reach beyond the EU's borders, and provide for substantial penalties for noncompliance. Each of these features is discussed below.

In more detail

The European Commission's proposed AI Regulations are the world's first attempt at creating a uniform legal framework governing the use, development and marketing of AI. They are likely to have a resounding impact on businesses that use AI for years to come.

Scope

The AI Regulations will apply to the following:

- providers that place AI systems on the market or put them into service in the EU, regardless of whether those providers are established in the EU;
- users of AI systems located in the EU; and
- providers and users of AI systems located outside the EU, where the output produced by the system is used in the EU.

Timing

The AI Regulations will become effective 20 days after publication in the Official Journal. They will then need to be implemented within 24 months, with some provisions taking effect sooner. The long implementation period increases the risk that some provisions will become irrelevant or moot because of technological developments.

Risk-based approach

In its 2020 white paper, the European Commission proposed splitting the AI ecosystem into two general categories: high risk and low risk. The European Commission's new graded system is more nuanced and likely to ensure a more targeted approach, since the level of compliance requirements matches the risk level of a specific use case.

The new AI Regulations follow a risk-based approach and differentiate between the following: (i) prohibited AI systems whose use is considered unacceptable and contravenes Union values (e.g., by violating fundamental rights); (ii) uses of AI that create a high risk; (iii) uses that create a limited risk (e.g., where there is a risk of manipulation, for instance via the use of chatbots); and (iv) uses of AI that create minimal risk.

Under the new AI Regulations, the greater the potential of an algorithmic system to cause harm, the more far-reaching the intervention. Limited-risk uses of AI face only transparency requirements, and minimal-risk uses can be developed and used without additional legal obligations; however, makers of limited- or minimal-risk AI systems will be encouraged to adopt non-legally binding codes of conduct. High-risk uses will be subject to specific regulatory requirements both before and after launch into the market (e.g., ensuring the quality of data sets used to train AI systems, applying a level of human oversight, creating records to enable compliance checks and providing relevant information to users). Some obligations may also apply to distributors, importers, users or any other third parties, thus affecting the entire AI supply chain.
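
To make the tiering concrete, the following sketch models the four risk levels and the broad consequence attached to each as a simple lookup. The tier names follow the draft; the one-line summaries are an illustrative simplification, not text from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers distinguished by the draft AI Regulations."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative simplification of the consequences per tier (not draft text).
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "use prohibited",
    RiskTier.HIGH: "pre- and post-market requirements: data quality, "
                   "human oversight, record-keeping, user information",
    RiskTier.LIMITED: "transparency requirements; voluntary codes of conduct encouraged",
    RiskTier.MINIMAL: "no additional obligations; voluntary codes of conduct encouraged",
}

def consequences_for(tier: RiskTier) -> str:
    """Look up the broad regulatory consequence for a given risk tier."""
    return CONSEQUENCES[tier]

print(consequences_for(RiskTier.LIMITED))
```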

Enforcement

Member states will be responsible for enforcing these regulations. Penalties for noncompliance are up to 6% of global annual turnover or EUR 30 million, whichever is greater.
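
As a back-of-the-envelope illustration of how the penalty cap scales with company size (a minimal sketch; the turnover figures are hypothetical, and any actual fine would be set by the enforcing member state):

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on a fine under the draft AI Regulations:
    6% of global annual turnover or EUR 30 million, whichever is greater."""
    return max(0.06 * global_annual_turnover_eur, 30_000_000)

# Hypothetical examples: the EUR 30 million floor dominates for smaller
# companies, while the 6% turnover limb dominates for larger ones.
print(max_penalty_eur(100_000_000))    # 30,000,000.0 (6% = 6m, below the floor)
print(max_penalty_eur(5_000_000_000))  # 300,000,000.0 (6% of 5bn)
```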

Criticism of exemptions for law enforcement

Overly broad exemptions for law enforcement's use of remote biometric surveillance have been the target of criticism. There are also concerns that the AI Regulations do not go far enough to address the risk of possible discrimination by AI systems.

Although the AI Regulations detail prohibited AI practices, some find there are too many problematic exceptions and caveats. For example, the AI Regulations create an exception for narrowly defined law enforcement purposes such as searching for a missing child or a wanted individual or preventing a terror attack. In response, some EU lawmakers and digital rights groups want the carve-out removed due to fears authorities may use it to justify the widespread future use of the technology, which can be intrusive and inaccurate.

Support

The AI Regulations include measures supporting innovation such as setting up regulatory sandboxes. These facilitate the development, testing and validation of innovative AI systems for a limited time before their placement on the market, under the direct supervision and guidance of the competent authorities to ensure compliance.

Database

According to the AI Regulations, the European Commission will be responsible for setting up and maintaining a database for high-risk AI practices (Article 60). The database will contain data on all stand-alone AI systems considered high-risk. To ensure transparency, all information processed in the database will be accessible to the public.

It remains to be seen whether the European Commission will extend this database to low-risk practices to increase transparency and enhance the possibility of supervision for practices that are not high-risk initially but may become so at a later stage.
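
For a feel of what an entry in such a public register might contain, here is a minimal sketch of a registration record. The field names are hypothetical illustrations chosen for this example, not the list prescribed by the regulation's annexes.

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskAIRegistration:
    """Hypothetical entry in a public register of stand-alone
    high-risk AI systems (illustrative fields only)."""
    provider_name: str
    system_trade_name: str
    intended_purpose: str
    member_states: list[str] = field(default_factory=list)
    status: str = "on the market"  # e.g., "on the market", "withdrawn"

entry = HighRiskAIRegistration(
    provider_name="ExampleCo GmbH",          # hypothetical provider
    system_trade_name="CandidateRank 2.0",   # hypothetical system
    intended_purpose="screening job applications",
    member_states=["DE", "FR"],
)
print(entry)
```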

Employment-specific observations

1. High-risk

AI practices that involve employment, worker management and access to self-employment are considered high-risk. These specifically include the following AI systems:

- systems intended to be used for the recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, and evaluating candidates in the course of interviews or tests; and
- systems intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation, and for monitoring and evaluating the performance and behavior of persons in such relationships.

2. Biases

According to the AI Regulations, training, validation and testing data sets must be subject to appropriate data governance and management practices, including in relation to possible biases. Providers of continuously learning high-risk AI systems must ensure that possibly biased outputs that may feed into future operations as input (feedback loops) are addressed with appropriate mitigation measures.

However, the AI Regulations are unclear on how AI systems will be tested for possible biases, specifically whether the benchmark will be equality in opportunity or equality in outcomes. Companies should consider how these systems might affect individuals with disabilities and individuals at the intersection of multiple social groups.
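
One way to picture the feedback-loop requirement above: before a system's own outputs are recycled as training input, they pass through a mitigation gate. The sketch below uses a simple group selection-rate disparity check as that gate; both the check and the 0.8 threshold are hypothetical illustrations, not measures prescribed by the draft.

```python
from collections import defaultdict

def selection_rates(outputs):
    """outputs: list of (group, selected) model decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outputs:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def passes_disparity_check(outputs, threshold=0.8):
    """Hypothetical gate: the ratio of the lowest to the highest group
    selection rate must meet the threshold."""
    rates = selection_rates(outputs)
    highest = max(rates.values())
    return highest > 0 and min(rates.values()) / highest >= threshold

# Recycle outputs into the training buffer only if the gate passes.
batch = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
training_buffer = []
if passes_disparity_check(batch):
    training_buffer.extend(batch)     # safe to reuse as training input
else:
    flagged_for_review = batch        # e.g., route to human review instead
```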

3. Processing of special categories of personal data to mitigate biases is permissible

The AI Regulations carve out an exception allowing AI providers to process special categories of personal data if it is strictly necessary to ensure bias monitoring, detection and correction. However, AI providers processing this personal data are still subject to appropriate safeguards for the fundamental rights and freedoms of natural persons (e.g., technical limitations on the reuse of the data, and use of state-of-the-art security and privacy-preserving measures such as pseudonymization or encryption). It remains to be seen whether individuals will sufficiently trust these systems to provide them with their sensitive personal data.
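
As an illustration of the kind of safeguard contemplated here, the sketch below pseudonymizes a record's direct identifier with a keyed hash before its special-category attribute is used for bias analysis. This is a minimal sketch of pseudonymization in general, not a technique mandated by the draft; the field names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice it would be managed separately
# from the analysis environment so pseudonyms cannot be reversed there.
SECRET_KEY = b"stored-outside-the-analysis-environment"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"employee_id": "E-1042", "ethnicity": "X", "outcome": "rejected"}

# Keep the special-category attribute for bias monitoring; drop the raw ID.
analysis_record = {
    "pseudonym": pseudonymize(record["employee_id"]),
    "ethnicity": record["ethnicity"],
    "outcome": record["outcome"],
}
print(analysis_record)
```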

4. Human oversight

High-risk AI systems must be capable of human oversight. Individuals tasked with oversight must:

- fully understand the capacities and limitations of the system and monitor its operation;
- remain aware of the tendency to rely automatically on the system's output (automation bias);
- be able to correctly interpret the system's output;
- be able to decide not to use the system or to disregard, override or reverse its output; and
- be able to intervene in the system's operation or interrupt it through a "stop" button or similar procedure.

As indicated in our recent Trust Continuum report, this will require substantial involvement from the human decision-maker (in practice, often an individual from HR), which most companies find challenging.

Beyond the EU

We have already noted the potential extra-territorial effect of the AI Regulations, but many AI systems will not be caught if they have no EU nexus. The EU is, as it often is, at the vanguard of governmental intervention in the protection of human rights: it is the first to lay down a clear marker of expectations around the use of AI. But the issue is under review in countries across the world. In the UK, the Information Commissioner has published Guidance on AI and data protection. In the US, a number of states have introduced draft legislation governing the use of AI. None has quite the grand-plan feel of the EU's AI Regulations, but there will certainly be more to follow.
