The US Commodity Futures Trading Commission (CFTC) has issued (click here) a Request for Comment (RFC) seeking industry input on the definition and applications of AI in financial markets. The RFC covers a broad spectrum of AI use, including trading, risk management, compliance, cybersecurity, recordkeeping, data processing, analytics, and customer interactions.
The request also seeks comment on the risks of AI, including risks related to market manipulation and fraud, governance, explainability, data quality, concentration, bias, privacy and confidentiality, and customer protection.
Several AI themes within the RFC are relevant to Compliance, including:
- Books and Records / Comms Surveillance. Use of AI to proactively search for risks in records and recordings;
- Trading – directional, hedging, or speculative. Use of AI to design strategies or make decisions related to specific trades;
- KYC customer validation. Use of AI for AML and fraud monitoring purposes;
- Trade Surveillance. Use of AI to enhance monitoring of specific risks such as spoofing, wash trades, and “marking-the-close” trading (see the illustrative sketch following this list); and
- AI Third-Party Providers. How firms perform due diligence to evaluate the risks posed by third-party providers prior to adopting third-party AI technologies.
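To make the trade surveillance theme above more concrete, below is a minimal illustrative sketch of a rule-based wash trade check. The trade record fields and sample data are hypothetical and invented for demonstration; production surveillance systems combine many such rules with statistical and ML-based detection, and route every alert to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    trade_id: str
    instrument: str
    buyer_account: str
    seller_account: str
    quantity: float
    price: float

def flag_wash_trades(trades: list[Trade]) -> list[Trade]:
    """Flag trades where the same account sits on both sides.

    A deliberately simple rule; real wash trade detection also considers
    related accounts, offsetting trades within a time window, and intent.
    """
    return [t for t in trades if t.buyer_account == t.seller_account]

# Hypothetical trade blotter, for illustration only.
blotter = [
    Trade("T1", "WTI-FUT", "ACC-001", "ACC-002", 10, 78.40),
    Trade("T2", "WTI-FUT", "ACC-003", "ACC-003", 5, 78.45),  # same account both sides
]
for t in flag_wash_trades(blotter):
    print(f"Potential wash trade: {t.trade_id} in {t.instrument} by {t.buyer_account}")
```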
RegTrail Insights
Many energy and commodity firms already use AI for certain aspects of their business, including trading, risk management, record keeping, and surveillance. Whether AI is built in-house (for example, an algorithm to trade a specific asset class) or sourced from a third party (for example, machine learning techniques within a third-party communications surveillance system that identify risk behaviours in written communications), Compliance has a role in ensuring that AI is appropriately governed and that this governance can be evidenced to regulators.
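As an illustration of the machine learning style of communications surveillance described above, the sketch below trains a toy text classifier to score messages for escalation. It uses scikit-learn's TfidfVectorizer and LogisticRegression; the messages, labels, and threshold are invented for demonstration, and a real system would be trained on large curated datasets with human review of every alert.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled training examples: 1 = risky, 0 = benign.
messages = [
    "let's keep this off the recorded line",
    "can we agree the close price before the window",
    "please send over the confirmed trade recap",
    "meeting moved to 3pm, agenda attached",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score new communications; anything above an (invented) threshold would
# be escalated to a human reviewer, never actioned automatically.
THRESHOLD = 0.5
for msg in ["we should keep this off the line", "lunch on friday?"]:
    risk = model.predict_proba([msg])[0][1]
    print(f"{'ESCALATE' if risk > THRESHOLD else 'ok':8s} {risk:.2f}  {msg}")
```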
AI Governance Frameworks. In the accompanying statement to the RFC (click here), CFTC Commissioner Kristin Johnson references leading AI governance frameworks developed by the private sector, such as Salesforce's (click here), that provide guidelines for the responsible development of generative AI.
Reliance on Third-party development of AI. Commissioner Johnson noted in her statement that firms should have the skills, expertise, and experience to develop, test, deploy, monitor, and oversee controls over the AI and ML that a firm utilises. Specifically, she quotes a recent IOSCO report on the use of AI and ML (click here) as follows:
“Regulators should require firms to have the adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over the AI and ML that the firm utilises. Compliance and risk management functions should be able to understand and challenge the algorithms that are produced and conduct due diligence on any third-party provider, including on the level of knowledge, expertise and experience present.”
“Regulators should require firms to understand their reliance and manage their relationship with third-party providers, including monitoring their performance and conducting oversight. To ensure adequate accountability, firms should have a clear service level agreement and contract in place clarifying the scope of the outsourced functions and the responsibility of the service provider. This agreement should contain clear performance indicators and should also clearly determine rights and remedies for poor performance.”
When defining an AI policy, there are several aspects Compliance teams may wish to consider. ISACA, a global IT governance association, recently published an article entitled ‘Key Considerations for Developing Organizational Generative AI Policies’ (click here). Specific steps it recommends include:
- Understand generative AI;
- Assess organizational needs;
- Survey the regulatory landscape;
- Conduct a risk assessment;
- Confirm purpose and scope of the acceptable usage of generative AI;
- Examine existing IT and InfoSec policies;
- Engage stakeholders in policy development;
- Prepare for internal and external communication;
- Understand your technical environment and requirements; and
- Utilize a governance framework (see below).
Several leading AI frameworks and position papers that Compliance teams may wish to review when defining their own AI governance framework include:
- U.S. National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (AI RMF 1.0) (click here). A framework built in collaboration with the private and public sectors, intended for voluntary use to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
- OECD AI system lifecycle framework (click here). A comprehensive framework that covers the entire lifecycle of an AI system. It consists of five phases: (1) design and development, (2) testing and validation, (3) deployment and operation, (4) monitoring and maintenance, and (5) decommissioning.
- AIGA AI Governance Framework (click here). Published by the University of Turku (Finland), this is a practice-oriented framework for implementing responsible AI which enables firms to adopt a systematic approach to AI governance covering the entire process of AI system development and operations. The AI governance tasks are mapped to the OECD’s AI system lifecycle framework (see the illustrative sketch following this list).
- Singapore’s Model AI Governance Framework (click here). A framework that converts relevant ethical principles into practices that can be implemented throughout an AI deployment process.
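To illustrate how governance tasks can be mapped to the OECD lifecycle phases, as the AIGA framework does, the sketch below models the mapping as a simple task register. The example tasks are invented placeholders, not the frameworks' actual task catalogues; a real register would cite the specific control and an accountable owner for each task.

```python
from enum import Enum

class LifecyclePhase(Enum):
    """The five phases of the OECD AI system lifecycle framework."""
    DESIGN_AND_DEVELOPMENT = "design and development"
    TESTING_AND_VALIDATION = "testing and validation"
    DEPLOYMENT_AND_OPERATION = "deployment and operation"
    MONITORING_AND_MAINTENANCE = "monitoring and maintenance"
    DECOMMISSIONING = "decommissioning"

# Hypothetical governance tasks per phase, invented for illustration.
GOVERNANCE_TASKS = {
    LifecyclePhase.DESIGN_AND_DEVELOPMENT: ["document intended use and data sources"],
    LifecyclePhase.TESTING_AND_VALIDATION: ["record bias and performance test results"],
    LifecyclePhase.DEPLOYMENT_AND_OPERATION: ["obtain model risk committee approval for go-live"],
    LifecyclePhase.MONITORING_AND_MAINTENANCE: ["review drift metrics and alert volumes"],
    LifecyclePhase.DECOMMISSIONING: ["archive model artefacts and decision logs"],
}

for phase, tasks in GOVERNANCE_TASKS.items():
    print(f"{phase.value}: {'; '.join(tasks)}")
```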
More broadly, AI regulation is currently nascent and uneven across the USA, Europe, and Asia. Regulatory developments across each region include:
- Europe. In December 2023, the EU reached a provisional agreement on the Artificial Intelligence Act (click here).
- USA. In October 2023, the US President issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (click here), building on the ‘Blueprint for an AI Bill of Rights’ published in October 2022.
- China. In March 2022, China’s first AI regulation, the ‘Algorithm Recommendation Regulation’, came into effect. More recently, in August 2023, its ‘Generative AI Regulation’ came into force (click here for a full summary of China’s AI regulations).
While AI continues to be an emerging topic, regulators globally expect firms to govern and document how they oversee the development and deployment of AI within their organisations. Firms currently using AI are strongly urged to review their current governance frameworks and, where appropriate, benchmark against the above frameworks alongside meeting region-specific regulatory requirements.
We provide a summary of the key questions included in the CFTC RFC below. Compliance teams are invited to review these questions and, where appropriate, benchmark the underlying concepts against their current governance frameworks. Those firms operating in the USA under CFTC jurisdiction may also wish to participate and respond to the RFC.