The April 2020 spotlight review, Emerging themes and challenges in algorithmic trading and machine learning, from the FMSB (the private-sector body that sprang from the 2015 Fair and Effective Markets Review, ‘FEMR’), highlights, among other things, some of the risk management concerns and regulatory challenges that face market participants.
Author Rupak Ghose, a corporate strategy, technology and data veteran formerly of Credit Suisse and ICAP/Nex, and an adviser to the FMSB, pins down with readable clarity some of the key issues that buy-side and sell-side algo and ML users need to be thinking about. The first section enumerates eight key factors arising from the need for model risk management, an area that has received less attention than the regulatory requirements and guidance on algo trading itself.
- Regulation – current and increasing regulatory supervision focuses mainly on operational and conduct risk. While this does not cover all potential risks, mitigation levels are reasonably robust.
- Model risk – experience of using algos is valuable in spotting risk in new and more complex models. However, as algos are employed in less liquid asset classes, risks may increase, for instance in pricing from sparse data. Mr. Ghose cites the Federal Reserve Board of Governors’ supervisory guidance SR 11-7 as a useful touchstone along the path to tighter model risk understanding and control.
- Model uniqueness – because particular algo models are unique, a validation approach that works for one model may not work so well for another. In addition, combinations of algos in a trading system may require a more complex validation process. This poses a challenge to existing model review processes.
- Data risk – data quality and quantity risks are always potentially present. “Poor data quality and governance,” the report says, “can create operational risks and conflicts of interest from inappropriate use of private client data and incorrect or inadequate interpretation of data sources.” This risk can again arise from applying algos to new asset classes that lack the data quality of large, liquid and data-heavy markets such as spot FX major currency pairs.
- Benchmarking – peer comparisons of proprietary algos and models are difficult. This points to the importance of performance monitoring. If you can’t examine the inner workings of algos, you must, at the very least, examine carefully what their outputs are, the report seems to be saying.
- Validation approaches – consequently, the report says, “Given the differences between pricing or risk and algorithmic trading models, different model validation approaches may need to be developed, where the control framework should be considered in deciding the model risk rating and any subsequent validation and testing requirements.”
- Algo combinations – combinations of algos may produce unintended consequences beyond the results produced by their individual components. The author underscores the difficulty of developing testing methodologies despite existing guidance and experience.
- Expertise – the eighth key factor raises the need for technical expertise to be resident in the second line of model risk assessment and defence. This may work to the advantage of large, well-staffed firms over smaller ones.
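The benchmarking point in the list above (if you cannot inspect an algo's internals, monitor its outputs) can be sketched as a simple anomaly check on execution quality. Everything below, including the slippage metric, the trailing window and the z-score threshold, is an illustrative assumption rather than anything the report prescribes.

```python
from statistics import mean, stdev

def flag_output_anomalies(slippage_bps, window=20, z_threshold=3.0):
    """Flag fills whose slippage deviates sharply from the recent trailing window.

    slippage_bps: per-fill slippage versus an arrival-price benchmark, in
    basis points. Returns the indices of fills breaching the z-score
    threshold (all parameters here are illustrative placeholders).
    """
    flagged = []
    for i in range(window, len(slippage_bps)):
        trailing = slippage_bps[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma > 0 and abs(slippage_bps[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Example: one unusually poor fill stands out against a quiet history.
history = [1.0, 1.2, 0.8, 1.1, 0.9] * 4 + [9.5]
print(flag_output_anomalies(history))  # → [20]
```

A production monitor would of course use richer metrics and asset-class-specific benchmarks; the point is simply that output monitoring is possible even when the model itself is a black box.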
New asset classes
The report devotes a section to the spread of algos to new asset classes. Noting the transaction data limitations for many OTC derivatives, corporate and emerging market bonds, Ghose is supportive of the opportunities for developing algos that may be built upon artificial data sets. The use of unstructured data can also play a part though there could be governance challenges in maintaining data flow.
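Where transaction data is sparse, the artificial data sets Ghose mentions can be as simple as simulated price paths used to exercise an algo in testing. The sketch below is a generic geometric Brownian motion generator; its parameters are placeholders, not calibrated to any real market, and the function name is my own.

```python
import math
import random

def synthetic_price_path(s0=100.0, mu=0.0, sigma=0.01, steps=250, seed=7):
    """Generate one geometric-Brownian-motion price path (illustrative only).

    s0: starting price; mu and sigma: per-step drift and volatility; a fixed
    seed makes the path reproducible, so tests run against it are
    deterministic.
    """
    rng = random.Random(seed)
    path = [s0]
    for _ in range(steps):
        shock = mu - 0.5 * sigma ** 2 + sigma * rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp(shock))
    return path

# A reproducible 250-step path to feed into an algo under test.
prices = synthetic_price_path()
```

The governance point in the text still applies: synthetic data only helps if its generation and use are themselves documented and controlled.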
Accessing data in critical market moments can also create unintended and unwelcome results, as some markets have experienced, for example in flash crashes. In this context the report draws attention to the importance of the availability of public reference prices for market, and indeed model, stability.
This section of the report concludes by pointing up the market concentration risks in less liquid markets, as well as those connected with operational features such as hold times.
The section of the report covering machine learning begins by saying, “When trading engines are powered by machine learning, the relationship between data inputs and price outputs is much more obscure.” Worryingly, it goes on to add, “The difficulty of tracing how decisions have been made by the machine make it very difficult to prevent in advance, or to correct afterwards, undesirable model outcomes.” It is these transparency concerns that give rise to the regulatory and governance focus on “explainability,” model risk management and software validation.
The report considers the challenges of model drift, bias, market concentration and correlation and the lack of “expert programmers, data scientists and risk managers who can safely develop, test and implement machine learning in financial markets.”
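Model drift of the kind the report flags is often monitored with simple distribution-shift statistics. One common industry choice (a convention, not something the report specifies) is the population stability index, sketched here with the usual rule-of-thumb thresholds.

```python
import math
from collections import Counter

def population_stability_index(expected, actual, n_bins=10):
    """PSI between a baseline sample and a recent sample of a model input.

    A common rule of thumb (illustrative, not from the report): PSI < 0.1 is
    stable, 0.1 to 0.25 warrants investigation, above 0.25 suggests material
    drift in the input distribution.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / n_bins or 1.0

    def bin_fractions(sample):
        counts = Counter(min(int((x - lo) / width), n_bins - 1) for x in sample)
        # Smooth empty bins slightly so the log term stays defined.
        return [(counts.get(b, 0) + 1e-6) / len(sample) for b in range(n_bins)]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Identical samples score zero; a shifted sample scores well above 0.25.
baseline = [i / 100 for i in range(100)]
shifted = [x + 0.5 for x in baseline]
```

A check like this says nothing about why a distribution has moved, which is exactly the explainability gap the report is concerned with; it only raises the alarm.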
Speaking to us shortly after the publication of the review, Rupak Ghose explained how important it is for financial firms to be thinking about these issues now.
“Machine Learning is in its infancy but there’s a lot that’s going to happen there. So we think model risk is an area that will be more important going forward with increasing adoption. There is official guidance around model risk generally, for example the Fed’s SR11-7 and around algo-trading in general. There is also detailed bilateral governance around how models apply in algo trading, but there is an opportunity for the industry to take a lead in this area, including through formalising existing bilateral conversations and creating market standards. As a market-led organisation, we [the FMSB] spend a lot of time with both sides, with regulators and market participants.”
A key point of reference throughout the report is the importance of “sandbox” conditions for testing algos, experimenting with machine learning and understanding model risk. It is in the market’s interest to ensure that its participants have skilled and able staff to test models thoroughly before unleashing them on the market. Doing so will likely satisfy regulators. Mistakes made and markets compromised will invite greater regulatory scrutiny, which is unlikely to favour innovation and development over safety and the protection of the integrity of markets and their participants.
The review notes the growing use of execution algos, principally to reduce cost and market impact. In addition, the Markets in Financial Instruments Directive (MiFID) II has obliged buy-side firms to be able to prove best execution and algos have an important role to play here.
Buy-side firms faced with choosing the most appropriate algo for their purposes may use algo wheels to help them.
Ghose points out however that it is important for the buy-side to know how they operate and what data inputs they are privy to. “With the proliferation in the number of algorithms being offered, having a clear view on execution strategy and which algorithm is most suited to delivering on these goals is important, as these vary depending on asset class and product liquidity.”
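An algo wheel of this kind can be sketched as a weighted random router that allocates orders across brokers' algos in proportion to their measured performance. The class, the weighting scheme and the broker names below are illustrative assumptions, not a description of any specific product.

```python
import random

class AlgoWheel:
    """Route orders across candidate algos, weighting by realised performance.

    Scores here are a crude 'lower average slippage is better' measure; a
    real wheel would use richer, asset-class-specific execution-quality
    metrics, as the article notes.
    """

    def __init__(self, algos):
        self.slippage = {name: [] for name in algos}  # per-algo fill history, bps

    def record_fill(self, algo, slippage_bps):
        self.slippage[algo].append(slippage_bps)

    def choose(self, rng=random):
        # Weight each algo by the inverse of its average slippage, so
        # better-performing algos receive proportionally more flow while
        # laggards still get occasional orders, keeping the comparison alive.
        weights = []
        for history in self.slippage.values():
            avg = sum(history) / len(history) if history else 1.0
            weights.append(1.0 / max(avg, 0.1))
        return rng.choices(list(self.slippage), weights=weights, k=1)[0]

wheel = AlgoWheel(["broker_a_vwap", "broker_b_pov"])
wheel.record_fill("broker_a_vwap", 1.0)  # tight fills
wheel.record_fill("broker_b_pov", 4.0)   # worse fills
# broker_a_vwap is now chosen roughly four times as often as broker_b_pov.
```

The caveat emptor point that follows applies directly: a buy-side firm using such a wheel still needs to understand the data inputs behind each score it feeds in.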
The overriding message in relation to execution algos is caveat emptor, and that requires buy-side firms to establish risk management procedures to understand what they are buying into. Not all buy-side firms are equal, however, when it comes to knowledge and experience of execution algos. Some are sophisticated users, some are less so. The advice the report offers to sell-side firms is that, “…disclosures need to be easy to understand for end-user clients of varying degrees of sophistication, so that they can match their individual execution requirements with the most appropriate execution algorithm.”
The report concludes by advocating the desirability of introducing guidelines for algo model validation. The implication is that the market has a great deal of knowledge and pooling it is desirable in order to establish standards and reduce risk.
Mr. Ghose notes that the use of algos has spread to FX and on to other FICC markets from markets where their use has been common for many years, such as cash equities. The skills and technology required are similar, and this contributes to making new markets more automated and, assuming the quality of the data is good enough, fairer and, hopefully, more transparent.
Couple this knowledge with machine learning and sell-siders and buy-siders will have increasingly powerful tools to work with. ML is still a relatively new world however, which means that testing and model risk management understanding will need to pedal hard to keep up. So the key, over-arching message highlighted by this FMSB review is that there is an area of risk here that needs to be addressed, and soon, given the pace and growing complexity with which the use of algos is evolving.
The document concludes by saying, “Areas of such rapid technology change are also often best addressed by market practitioners with deep domain expertise who can develop solutions that are clear, practical and proactive in managing risks.” Where such expertise can be found or developed and at what cost are open questions that markets also need to address.