Lynn Strongin Dodds looks at how the CFTC is exploring the best ways to leverage the opportunities, and mitigate the risks, that AI presents to the derivatives markets now and in the future.
It is no surprise that artificial intelligence is such a hot topic in derivatives circles. The benefits are not in doubt. Derivatives data visualisation that leverages generative AI can help investors and traders understand complex market information, identify connections between disparate factors in the financial markets and assess potential risks. The question is whether, and how, the technology should be regulated.
CFTC and AI
This issue is being probed by the Commodity Futures Trading Commission (CFTC), which in January issued a request for comment (RFC) on the different facets and impacts of the technology. It included 20 questions, and comments were due on April 24. At the time, Chairman Rostin Behnam explained that the process was to help the regulator “understand current and potential AI use cases and the associated potential risks to its jurisdictional markets and the larger financial system.”
Overall, market participants were of the view that further legislation was not necessary because the regulator already had all the tools in its box to deal with any ramifications. For example, in its response to the RFC, the Futures Industry Association (FIA) urged the CFTC to adopt a technology-neutral stance, since the status quo already provided “the controls and oversight needed for the CFTC to promote and protect the integrity and resilience of our markets. We believe that the CFTC’s risk-based approach to developing a regulatory framework around outcomes and use cases, rather than the underlying technology, likely means that the CFTC’s existing rule sets already address perceived risks.”
The Securities Industry and Financial Markets Association (SIFMA) echoed these sentiments in its response. It argued that the use of AI in financial services was not a new concept and that the technology had been deployed for decades to improve efficiency, accuracy, and analysis in many areas, including trading, fraud detection, and investment analysis. “Market participants have risk-management frameworks that account for this, as they are built upon existing laws and regulations and are continuously uplifted to cover emerging technologies, including AI,” it added.
Risks
However, a report – Responsible Artificial Intelligence In Financial Markets: Opportunities, Risks & Recommendations – released in May by the CFTC’s Technology Advisory Committee (TAC) argued there was much more work to be done to ensure the technology was used securely and effectively. The report acknowledged that “AI represents a potentially valuable tool for CFTC internal and external stakeholders,” and that the technology might “improve automated processes governing core functions, including risk management, surveillance, fraud detection, and the identification, execution, and back-testing of trading strategies.”
It warned, though, that stakeholders must also consider and respond to issues such as “responsible development, the quality of training data, the extent of involvement of humans in autonomous trading models, data privacy, auditing and oversight, and the breadth of internal talent” competent to oversee the implementation of AI. It noted that such talent might currently be in short supply.
The report did not only highlight the problems, though, but also provided several use cases that could help derivatives market participants get a better grip on the technology. The first centred on trading and investment. It has been well documented that AI can be deployed to analyse data, identify trade execution strategies, predict asset prices, and engage in high-frequency trading. The flipside, it noted, is data poisoning in the form of a cyber-attack, or overfitting, whereby a model aligns too closely to its training data and cannot be used for predictive purposes.
The second focused on AI’s ability to facilitate marketing, customer acquisition and service, as well as provide customised investing advice. The negatives are so-called AI hallucinations, where an AI-generated response containing false or misleading information is presented as fact. Moreover, there may be privacy, explainability and transparency risks, as well as biased and discriminatory treatment of customers.
Next on the list was risk management. AI can go a long way in bolstering this function, particularly in margin model optimisation as well as in developing and executing hedging strategies. In addition, it can monitor and adjust excess funds requirements, analyse publicly available data for key developments related to depositories or counterparties, and engage in collateral and liquidity optimisation. The dangers include poor data quality, AI hallucinations, critical infrastructure dependency, bias and discrimination, and a lack of explainability.
Last but not least was regulatory compliance. AI can be advantageous for market surveillance, recordkeeping, know-your-customer compliance, and anti-money laundering/countering the financing of terrorism transaction monitoring. The problem areas are the same as in the other use cases: critical infrastructure dependence, AI hallucinations, bias and discrimination, explainability, and data privacy.
Recommendations
To rectify these issues and build a more robust structure, TAC proposed changes to CFTC structure and policy. These included that the watchdog solicit opinions from stakeholders in the derivatives market on the “business functions and types of AI technologies most prevalent within the sector.” It also advised that this information be used to help define and adopt an AI Risk Management Framework (RMF) for the sector, aligned with the guidelines and governance aspects of the National Institute of Standards and Technology (NIST).
The report also called on the CFTC to create an inventory of existing regulations that would impact the use of AI in the derivatives market. This could help “develop a gap analysis of the potential risks associated with AI systems to determine compliance relative to further opportunities for dialogue, and potential clarifying staff guidance or potential rulemaking,” it added.
In addition, TAC said the CFTC needed to forge interagency cooperation and alignment on AI issues with different authorities including the Securities and Exchange Commission, the Department of the Treasury, and “other agencies interested in the financial stability of markets.”
The final recommendation was to build a pipeline of AI experts to ensure the requisite resources for responsible engagement by internal and external stakeholders. This means involving staff as both observers and potential participants in ongoing domestic and international dialogues around AI, and, “where possible, establish budget supplements to develop the internal capacity of agency professionals around necessary technical expertise to support the agency’s endeavours in emerging and evolving technologies,” it added.