The CFTC Consults on the Use of Artificial Intelligence

What Is It About

The Commodity Futures Trading Commission (CFTC) in the US has issued a Request for Comment (RFC) seeking industry input on the definition and applications of AI in financial markets. The RFC covers a broad spectrum of AI use, including trading, risk management, compliance, cybersecurity, recordkeeping, data processing, analytics, and customer interactions.

Why It's Important

Many energy and commodity firms already use AI for certain aspects of their business, including trading, risk management, record keeping, and surveillance. Whether AI is built in-house or depends on a third party, Compliance has a critical role in ensuring that AI is appropriately governed and that this governance can be evidenced to regulators. This RFC provides important guidance and perspectives for Compliance.

Key Takeaways

While AI continues to be an emerging topic, regulators globally expect firms to govern and document how they oversee the development and deployment of AI within their organisations. Firms currently using AI are strongly urged to review their current governance frameworks and, where appropriate, benchmark against the leading frameworks referenced in this briefing alongside meeting region-specific regulatory requirements.

Introduction

The US Commodity Futures Trading Commission (CFTC) issued a Request for Comment (RFC) (click here) seeking industry input on the definition and applications of AI in financial markets. The RFC covers a broad spectrum of AI use, including trading, risk management, compliance, cybersecurity, recordkeeping, data processing, analytics, and customer interactions.

The request also seeks comment on the risks of AI, including risks related to market manipulation and fraud, governance, explainability, data quality, concentration, bias, privacy and confidentiality, and customer protection.

Several AI themes relevant to Compliance within the RFC include:

  1. Books and Records / Communications Surveillance. Use of AI to proactively search for risks in records and recordings;
  2. Trading – directional, hedging, or speculative. Use of AI to design strategies or make decisions related to specific trades;
  3. KYC customer validation. Use of AI for AML and fraud monitoring purposes;
  4. Trade Surveillance. Use of AI to enhance monitoring of specific risks such as spoofing, wash trades, and “marking-the-close” trading (see the illustrative sketch after this list); and
  5. AI third-party providers. How firms perform due diligence to evaluate the risks posed by third-party providers prior to adopting their AI technologies.
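
To make theme 4 concrete, the sketch below shows one simple, rule-based flag a surveillance team might prototype before layering on machine learning: it highlights traders with an unusually high order-cancellation ratio, one classic indicator associated with spoofing. This is a minimal sketch; the `orders` data, column names, and threshold are illustrative assumptions, not CFTC guidance.

```python
import pandas as pd

# Illustrative order log; the column names are assumptions for this sketch.
orders = pd.DataFrame({
    "trader_id": ["A", "A", "A", "A", "B", "B"],
    "status": ["cancelled", "cancelled", "cancelled", "filled",
               "filled", "cancelled"],
})

# Cancellation ratio per trader: cancelled orders / total orders placed.
stats = (
    orders.assign(cancelled=orders["status"].eq("cancelled"))
          .groupby("trader_id")["cancelled"]
          .agg(total="size", cancel_ratio="mean")
)

# Flag traders above an illustrative threshold for analyst review. A real
# surveillance model would also weigh order size, price proximity, and the
# timing of cancellations relative to executions on the opposite side.
CANCEL_RATIO_THRESHOLD = 0.7  # assumption, not a regulatory figure
alerts = stats[stats["cancel_ratio"] >= CANCEL_RATIO_THRESHOLD]
print(alerts)
```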

RegTrail Insights

Many energy and commodity firms already use AI for certain aspects of their business, including trading, risk management, record keeping, and surveillance. Whether AI is built in-house (for example, an algorithm to trade a specific asset class) or depends on a third party (for example, machine learning techniques within a third-party communication surveillance system that identify risk behaviours in written communications), Compliance has a role in ensuring that AI is appropriately governed and that this governance can be evidenced to regulators.
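
To illustrate the second case, the hedged sketch below shows the kind of machine-learning step a third-party communication surveillance tool might apply: a text classifier that scores messages for escalation. The tiny training set, labels, and threshold idea are invented for illustration; a production system would rely on far richer data, features, and validation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: 1 = risk behaviour, 0 = benign.
messages = [
    "let's keep this off the recorded line",
    "confirming the hedge for tomorrow's delivery",
    "delete that chat before compliance sees it",
    "please send the monthly risk report",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Score a new message; Compliance would review anything above a
# governance-approved probability threshold.
score = model.predict_proba(["move this to my personal phone"])[0, 1]
print(f"risk score: {score:.2f}")
```

Even for a toy model like this, the governance questions raised in the RFC apply: who validates the training data, who owns the escalation threshold, and how model drift is monitored over time.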

AI Governance Frameworks. In the accompanying statement to the RFC (click here), CFTC Commissioner Kristin Johnson references leading AI governance frameworks developed by the private sector, such as Salesforce's (click here), that provide guidelines for the responsible development of generative AI.

Reliance on Third-party development of AI. Commissioner Johnson noted in her statement that firms should have the skills, expertise, and experience to develop, test, deploy, monitor, and oversee controls over the AI and ML that a firm utilises. Specifically, she quotes a recent IOSCO report on the use of AI and ML (click here) as follows: 

“Regulators should require firms to have the adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over the AI and ML that the firm utilises. Compliance and risk management functions should be able to understand and challenge the algorithms that are produced and conduct due diligence on any third-party provider, including on the level of knowledge, expertise and experience present.”

“Regulators should require firms to understand their reliance and manage their relationship with third-party providers, including monitoring their performance and conducting oversight. To ensure adequate accountability, firms should have a clear service level agreement and contract in place clarifying the scope of the outsourced functions and the responsibility of the service provider. This agreement should contain clear performance indicators and should also clearly determine rights and remedies for poor performance.”

When defining an AI policy, there are several aspects Compliance teams may wish to consider. ISACA, a global IT governance association, recently published an article entitled ‘Key Considerations for Developing Organizational Generative AI Policies’ (click here). The specific steps it recommends include the following (an illustrative register sketch follows the list):

  • Understand generative AI;
  • Assess organizational needs;
  • Survey the regulatory landscape;
  • Conduct a risk assessment;
  • Confirm purpose and scope of the acceptable usage of generative AI;
  • Examine existing IT and InfoSec policies;
  • Engage stakeholders in policy development;
  • Prepare for internal and external communication;
  • Understand your technical environment and requirements; and
  • Utilize a governance framework (see below). 
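
Several of these steps (confirming purpose and scope, risk assessment, and governance) presuppose a current inventory of AI uses. The sketch below shows one minimal, machine-readable form such a register might take; the field names, risk tiers, and vendor name are illustrative assumptions, not drawn from ISACA or any framework.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIUseCase:
    name: str
    business_owner: str
    purpose: str
    third_party_provider: Optional[str]  # None if built in-house
    risk_tier: str                       # e.g. "low" / "medium" / "high"
    last_reviewed: str                   # ISO date of last governance review

register = [
    AIUseCase(
        name="Comms surveillance NLP",
        business_owner="Head of Surveillance",
        purpose="Identify risk behaviours in written communications",
        third_party_provider="VendorX",  # hypothetical vendor name
        risk_tier="high",
        last_reviewed="2024-01-15",
    ),
]

# A simple oversight query: which high-risk, third-party-dependent
# uses exist, and when were they last reviewed?
for uc in register:
    if uc.risk_tier == "high" and uc.third_party_provider:
        print(f"{uc.name}: vendor={uc.third_party_provider}, "
              f"last reviewed {uc.last_reviewed}")
```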

Leading AI frameworks and position papers that Compliance teams may wish to review when defining their own AI governance framework include:

  1. U.S. National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (AI RMF 1.0) (click here). A framework built in collaboration with the private and public sectors, intended for voluntary use to improve firms’ ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
  2. OECD AI system lifecycle framework (click here). A comprehensive framework that covers the entire lifecycle of an AI system. It consists of five phases: (1) design and development, (2) testing and validation, (3) deployment and operation, (4) monitoring and maintenance, and (5) decommissioning.
  3. AIGA AI Governance Framework (click here). Published by the University of Turku (Finland), this is a practice-oriented framework for implementing responsible AI which enables firms to adopt a systematic approach for AI governance that covers the entire process of AI system development and operations. The AI governance tasks are mapped to the OECD’s AI system lifecycle framework.
  4. Singapore’s Model AI Governance Framework (click here). A framework that converts relevant ethical principles into practices that can be implemented throughout an AI deployment process.

More broadly, AI Regulation is currently nascent and asymmetrical across the USA, Europe, and Asia. Regulatory developments across each region include: 

  • Europe. In December 2023, the EU reached a provisional agreement on the Artificial Intelligence Act (click here).
  • USA. In October 2023, the US President issued an executive order on the safe, secure, and trustworthy development and use of AI (click here); this followed the ‘Blueprint for an AI Bill of Rights’ published in October 2022.
  • China. In March 2022, China issued its first AI regulation, the ‘Algorithm Recommendation Regulation’. More recently, in August 2023, its ‘Generative AI Regulation’ came into force (click here for a full summary of China’s AI regulations).

While AI continues to be an emerging topic, regulators globally expect firms to govern and document how they oversee the development and deployment of AI within their organisations. Firms currently using AI are strongly urged to review their current governance frameworks and, where appropriate, benchmark against the above frameworks alongside meeting region-specific regulatory requirements.

We provide a summary of the key questions included in the CFTC RFC below. Compliance teams are invited to review these questions and, where appropriate, benchmark the underlying concepts against their current governance frameworks. Those firms operating in the USA under CFTC jurisdiction may also wish to participate and respond to the RFC. 


Compliance Considerations

Below is a summarised extract of the CFTC’s questions from the AI RFC, covering the following themes:

  • Question 2. General Uses;
  • Question 6. AI and third-party service providers;
  • Question 7. Governance of AI Uses;
  • Question 9. Governance;
  • Question 13. Market Manipulation and Fraud; and
  • Question 18. Third-party service providers.

While some questions are drafted to provoke industry conversation on emerging AI that is still at an exploratory stage, many are targeted at AI that is already implemented and live.

Firms should reasonably expect that, if a regulator is requesting this type of information through an RFC, these topics represent areas of heightened interest. Where relevant, these areas should be considered and addressed within an appropriate policy framework; in other words, firms should consider taking a proactive policy approach to this topic.

Question 2. General Uses.

a. Trading.
  • In addition to market intelligence, analytics, data processing, and risk evaluation, is AI being used to design strategies or make decisions related to specific trades (directional, hedging or speculative)?
  • How much autonomy is given to AI to identify a trade and place it in the market, with or without human supervision?
  • Is AI being used to mitigate human error in the trading process, or to otherwise “quality control” or validate trading?
  • How does this differ from traditional trading algorithms?
  • How often are AI-driven trading strategies updated?
  • Is use of AI more prevalent for the trading of certain products or markets or by certain types of entities? If so, why?
  • What are the measures for evaluating success when using AI?
b. Data Processing and Analytics.
  • What data processing and analytic tasks have been supported by AI?
  • To what extent do AI-driven analytics inform or supplant human action?
  • Have training and use protocols been developed and/or applied in conjunction with the application of AI analytics?
  • How is AI used to monitor for anomalies or issues with data quality?
  • If analytical errors are discovered, what steps are taken to evaluate and cure those errors?
  • What monitoring is in place to identify data processing errors made by an AI-based system?

c. Compliance.

Compliance is broadly interpreted here and includes, but is not limited to, know-your-customer (KYC customer validation), anti-money laundering, anti-fraud, trade documentation and regulatory reporting.

  • How is AI being used in compliance?
  • Are CFTC-regulated entities using AI to comply with specific CFTC requirements, such as in the context of swap dealer business conduct standards?
  • An additional and important subset of compliance is surveillance, which would include identifying market manipulation, including, but not limited to, spoofing, wash trades, and “marking-the-close” trading. For self-regulatory organizations (“SROs”), please explain any ways in which AI is being adopted as a part of surveillance and oversight of members.

d. Books and records.

CFTC-regulated entities are required to maintain in a readily producible fashion a variety of records, including trade histories, audio recordings, and digital communications.

  • Is AI being used to organize, validate or search required records?
  • Is AI being used to proactively search for risks in records and recordings?
  • Alternatively, is AI being used to search for gaps in records or broken records for compliance or other purposes?

e. Systems development.

AI-based tools are being increasingly used by software developers to enhance productivity, particularly for manual and repetitive tasks.

  • Is AI being used by software developers in CFTC-regulated markets to assist in the development of internal applications and services?
  • Is AI being used to assist in quality assurance?

f. Cybersecurity and resilience.

  • How, if at all, is AI being used to assess the cyber vulnerabilities of systems or data?
  • To the extent that firms have outsourced activities or data management to third party service providers, has AI been employed to evaluate the cybersecurity and resilience of these systems as well?


Question 6. AI and third-party service providers. 

  • To what extent are third-party service providers relied upon for the provision of AI services that support the uses described in question 2, above?
  • To the extent that AI supports the activities described in question 2, which of them tend to be performed by in-house staff rather than third-party service providers, and why?
  • Are AI technologies being developed within CFTC-regulated entities as proprietary technology? If not, are CFTC-regulated entities acquiring technologies from third-party service providers?
  • What specific third-party AI-based software are participants in CFTC-regulated markets adopting?
  • What challenges may CFTC-regulated entities face when attempting to manage, update, or deconstruct the decisions or analysis made by third-party created software or technology?


Question 7. Governance of AI Uses. 

  • How are firms tracking the uses being made of AI, both by in-house operations and by third-party service providers relied upon by firms?
  • How is accountability for AI use assigned?
  • Is the use of AI audited for accuracy and safety?
  • How frequently are AI systems updated?


Question 9. Governance. 

Given the unique challenges associated with identifying and managing AI risks, concerns have been raised regarding firms’ ability to manage such challenges through existing governance processes. 

  • Have CFTC-regulated entities modified their governance structures to specifically address AI? If so, how?
  • Do these changes include having designated senior management responsible for the oversight of the development, testing, deployment, monitoring, and controls of AI?
  • Do structures appoint a human to be “in the loop” to prevent cascading failures driven by AI?
  • Is any particular AI-specific risk management framework, such as that published by NIST, being used to guide such changes? In the event that the AI tool is procured or operated by a third-party, what additional challenges to governance have been identified, and are they capable of being fully addressed through in-house governance measures?


Question 13. Market Manipulation and Fraud

  • For firms that integrate AI into trading decision-making, describe the policies and practices adopted to prevent the use of AI-driven strategies in schemes designed to manipulate the market.
  • Describe efforts to use AI-based market supervisory technologies to detect market manipulation or fraud.


Question 18. Third-party service providers

  • Are there any risks specifically associated with using AI technologies created by third party providers?
  • What efforts are users of third-party AI technology taking to understand and mitigate these risks?
  • What due diligence procedures are in place to evaluate the risks posed by third-party providers prior to adopting third-party AI technologies?
  • What disclosures should be required regarding a firm’s use of third-party providers for AI services?
