IOSCO’s proposed guidance for regulation on the use of artificial intelligence and machine learning

The International Organization of Securities Commissions (IOSCO) has recently issued a consultation report proposing guidance for its members on regulating the use of artificial intelligence (AI) and machine learning (ML) by market intermediaries and asset managers (collectively, Firms). The consultation period will end on 26 October 2020. IOSCO will issue final guidance to its members based on the consultation conclusions. It is likely that IOSCO members, including the Hong Kong Securities and Futures Commission, will follow the guidance and in due course adopt IOSCO’s suggested measures into their regulatory frameworks. The consultation report also includes, for reference purposes, a discussion of the existing regulatory frameworks for the use of these technologies in different financial markets.

The consultation report describes AI as a combination of mass data, sufficient computing resources and machine learning, and describes ML as a method of designing a sequence of actions to solve a problem that optimises automatically through experience, with or without human intervention. IOSCO noted the increasing use of AI and ML by market intermediaries in the provision of advisory and support services, risk management, client identification and monitoring, selection of trading algorithms and asset/portfolio management. For asset managers, AI and ML techniques may be deployed to optimise portfolio management, complement human investment decision-making by suggesting investment recommendations and improve internal research capabilities, as well as for back office functions.

Through its engagement with financial markets, IOSCO identified several risk areas involving the use of AI and ML and proposed six measures to regulate Firms’ use of AI and ML. For Hong Kong licensed corporations, these measures will be familiar from the electronic trading rules in Chapter 18 and Schedule 7 to the Code of Conduct for Persons Licensed by or Registered with the Securities and Futures Commission. However, the IOSCO proposals are much broader than purely electronic trading, as they relate to the use of AI and ML generally throughout a Firm’s business.

1. Governance and oversight

Firms should designate senior management responsible for the oversight of the development, testing, deployment, monitoring and controls of AI and ML. They should also document their internal governance framework, with clear lines of accountability. Senior management should designate an appropriately senior individual (or group of individuals), with the relevant skill set and knowledge, to sign off on the initial deployment and substantial updates of the technology.

2. Algorithm development, testing and ongoing monitoring

Firms should adequately test and monitor the algorithms they use, and validate the results of the AI and ML techniques on a continuous basis. Testing should be conducted in an environment segregated from the live environment prior to deployment, to ensure that the AI and ML behave as expected in stressed and unstressed market conditions and operate in a way that complies with regulatory obligations.

3. Data quality and bias

Firms should have appropriate controls in place to ensure that the data on which the performance of AI and ML is dependent is of sufficient quality to prevent biases and is sufficiently broad for a well-founded application of AI and ML.

4. Transparency and explainability

Firms should disclose meaningful information to customers and clients about their use of AI and ML where it impacts client outcomes.

5. Outsourcing

Firms should understand their reliance upon and manage their relationship with third party service providers, including ongoing oversight and monitoring of the performance of the service providers. To ensure adequate accountability, Firms should have a service level agreement and contract in place with each service provider clarifying the scope of the outsourced functions and the responsibilities of the service provider. This agreement should contain clear performance indicators and should also clearly determine sanctions for poor performance.

6. Ethical concerns

Firms should have adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over the AI and ML that they utilise. Compliance and risk management functions should be able to understand and challenge the algorithms that are produced and conduct due diligence on any third party service provider, including on the level of knowledge, expertise and experience of the service provider. This measure sets a high standard for compliance and risk management functions that may be hard to achieve, particularly where an algorithm develops organically in ways that may not have been originally anticipated.

What does this mean for financial intermediaries?

Subject to the consultation conclusions, the six suggested measures will become fundamental principles for IOSCO’s members to use in formulating their AI and ML regulations in the future. When designing or developing businesses involving the use of AI and ML, financial intermediaries are encouraged to use the suggested measures as a guide for their compliance infrastructure, in addition to complying with local regulatory requirements.
