For those who build algo products, the safeguards are just as important as the functionality. We don’t want a client to have even a single unexpected outcome, let alone to carry the label of having caused a flash crash.
Algo execution is becoming more sophisticated (and riskier) as AI and machine learning become more prevalent, and new algo tools are frequently being brought to market. The cornerstone that allows algos to continue behaving well is the set of safeguards and controls they have in place. Although MiFID II’s scope did not extend to spot FX, more recent expectations published by regulators, such as the PRA (Supervisory Statement SS5/18, Algorithmic Trading, June 2018) and the FCA (Algorithmic Trading Compliance in Wholesale Markets, February 2018), indicate that regulators’ expectations for FX algos are similar.
In this article we look at governance and controls that could be in place to safeguard algo execution.
Governance
An algo trading governance framework provides the necessary structures for oversight of algo trading in a bank. Central to this framework is the algorithmic trading oversight committee. This committee ensures regulatory compliance and considers best practices by bringing together compliance, risk, technology and trading functions with a mandate to oversee algo trading within the firm. This enables second-line and other non-trading stakeholders to have input into safeguarding algo platforms.
Part of the oversight committee’s function is to determine a minimum set of controls for different algo flows. These will change over time as the firm’s algo practitioners pass information, perhaps from lessons learnt, up to the committee, which can then discuss it and pass it down to all algo teams as a new control or operating guideline. In this manner the firm’s algos will collectively continue to evolve and improve their safeguards.
An algo oversight committee has multiple roles including ensuring:
- Firmwide regulatory compliance and best practice,
- A bank’s inventory of algos is complete,
- Controls for each algo are sufficient and evidenced,
- Independent validation and calibration of controls,
- Approval of new algos,
- Change processes are adequate,
- Tracking of algo incidents and remediation.
Of course, for a full understanding of a firm’s risk with respect to algo execution, an algo inventory should include every algorithm used within the firm, including those built by external providers. Most algo providers will have material available on the controls of their platform, and that should be a good starting point.
Layers of controls
The algos in the inventory will be subject to a minimum standard of risk controls. To be effective, each risk should be mitigated at multiple points within the technology stack rather than in a single application. For example, a control in the algo engine provides no protection if an issue occurs in the routing layer and that layer has no controls of its own. To analyse the overall effectiveness of controls, each component of the infrastructure should be considered and the question asked: if component X has an issue, what safeguards are in place and how do the other components react? The mere existence of a control does not mean the risk has been adequately mitigated, even if a box has been ticked. Any compliance or risk officer in a bank therefore needs knowledge of the infrastructure stack when considering whether controls are adequate.
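To make the layering idea concrete, the minimal sketch below (purely illustrative; the class names, limit value and order structure are hypothetical rather than taken from any particular platform) shows the same notional limit enforced independently in an algo engine layer and in a routing layer, so a failure or misconfiguration in one layer does not leave the risk unmitigated.

```python
# Purely illustrative sketch: the same notional limit enforced at two
# points in the stack. Class names, limit and order format are hypothetical.

MAX_ORDER_NOTIONAL = 25_000_000  # example limit; in practice calibrated per pair


class OrderRouter:
    """Routing layer: re-validates independently of the algo engine."""

    def route(self, order: dict) -> None:
        if order["notional"] > MAX_ORDER_NOTIONAL:
            raise ValueError("router notional limit breached")
        # ... forward the order to the venue / ECN gateway ...


class AlgoEngine:
    """Algo engine layer: validates before handing the order to the router."""

    def __init__(self, router: OrderRouter) -> None:
        self.router = router

    def send(self, order: dict) -> None:
        if order["notional"] > MAX_ORDER_NOTIONAL:
            raise ValueError("engine notional limit breached")
        self.router.route(order)


# Even if one layer's check is removed or misconfigured, the other still
# blocks an oversized order.
AlgoEngine(OrderRouter()).send({"pair": "EURUSD", "notional": 10_000_000})
```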
Calibration and validation
If parameters are not correctly calibrated, then even well-constructed risk mitigation may have reduced relevance in practice. Parameters should be calibrated to levels that effectively capture the relevant risk. Consider an inverted EUR/USD order book in normal market conditions: a one pip inversion could be based on genuine prices and be tradable, whereas a 10 pip inversion could be a market data issue which needs to be investigated.
The calibration of this control would therefore be incorrect at 1 pip and more relevant at 10 pips. Calibration that is too loose may miss issues that should be captured, whereas calibration that is too tight can create other problems by triggering too often. False positives create noise which reduces the relevance of the control.
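As a simple illustration of the calibration point (a hedged sketch only; the threshold level and function name are hypothetical examples), an inverted-book check for EUR/USD might look like this:

```python
# Illustrative calibration of an inverted-book check for EUR/USD.
# The threshold and function name are hypothetical examples.

PIP = 0.0001                     # EUR/USD pip size
INVERSION_THRESHOLD = 10 * PIP   # calibrated level; 1 pip would trigger on genuine prices


def market_data_suspect(best_bid: float, best_ask: float) -> bool:
    """Flag the book for investigation only when the inversion is deep
    enough to suggest a market data issue rather than a genuine price."""
    inversion = best_bid - best_ask  # positive when the book is inverted
    return inversion > INVERSION_THRESHOLD


# A 1 pip inversion passes (could be genuine and tradable)...
assert not market_data_suspect(best_bid=1.08430, best_ask=1.08420)
# ...whereas a 12 pip inversion is flagged for investigation.
assert market_data_suspect(best_bid=1.08540, best_ask=1.08420)
```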
It is also necessary to regularly validate that controls work and to conduct parameter reviews so that the effectiveness of controls does not decay over time. A good mechanism for this is regular independent assessment of the control framework.
Some basic controls
It is not possible to show an extensive list of algo controls here; however, the PRA (SS5/18, section 3.4) sets out a few of its priority controls as follows:
At a minimum, the PRA expects there to be risk controls that limit exposure to a counterparty, order attribution, message rate, frequency of orders, stale data, and order and position size (including in relation to market liquidity).
These controls are important and form a subset of a wider group of risk controls needed to ensure safe trading of algo products. Some examples are:
- Throttling: Restricting the number of orders sent to market over a given time frame (a sketch of this and the price validation check follows this list).
- Price validations: Ensuring any order sent to market or received by an algo engine is within some tolerance of the current market price; especially applicable to aggressive orders.
- Notional limits: These encompass notional limits on orders sent to market and also on orders received from clients, and should be configurable by currency pair.
- Stale data: Ensuring accurate and up-to-date market data is being used. A number of implementations can be used here, including time-stamp checks, price checks and submerged order checks.
- Hard Limits: A price the algo will not trade beyond. In a flash crash scenario this price limit could be activated.
- Kill Switch: A mechanism to manually shut down the algo engine in times of market stress or to prevent the algo from trading in the market.
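To make a couple of these controls concrete, the sketch below (illustrative only; the limits, function names and data structures are hypothetical and would be calibrated per flow in practice) shows a simple order-rate throttle and a price validation against a reference mid:

```python
# Illustrative sketches of throttling and price validation. Limits,
# names and data structures are hypothetical examples.
import time
from collections import deque

MAX_ORDERS_PER_SECOND = 20   # example throttle calibration
PRICE_TOLERANCE = 0.0010     # max deviation from reference mid (10 pips here)

_sent_timestamps = deque()


def throttle_ok(now=None):
    """Throttling: allow an order only if fewer than the limit have been
    sent within the last second."""
    now = time.time() if now is None else now
    while _sent_timestamps and now - _sent_timestamps[0] > 1.0:
        _sent_timestamps.popleft()
    if len(_sent_timestamps) >= MAX_ORDERS_PER_SECOND:
        return False
    _sent_timestamps.append(now)
    return True


def price_ok(order_price, reference_mid):
    """Price validation: reject orders priced too far from the current
    market, which matters most for aggressive orders."""
    return abs(order_price - reference_mid) <= PRICE_TOLERANCE


def pre_trade_checks(order_price, reference_mid):
    # The order is released only if every configured check passes.
    return throttle_ok() and price_ok(order_price, reference_mid)
```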
Controls can also cover a) how the algo is built, for example, a control might be to ensure risk parameters are always adequately tested before a change is made to an algo; b) how the algo handles specific situations, for example, overfills from downstream systems; c) how the algo is supported, for example, alerting mechanisms and server and log monitoring; and d) how the algo handles extreme market events, for example, minimum throughput thresholds in active markets.
Conclusion
Quantitative performance of algos has always been an important consideration, but it is increasingly relevant for an investment firm to consider whether proper governance and safeguards are in place. Many buyside firms already ask about algo governance when doing due diligence on banks’ algo platforms, and with recent papers from the PRA and FCA outlining expectations, this is likely to become a bigger focus. It is worth asking what your algo provider’s hard limits are, or which controls would trigger during a flash crash. You can feel more comfortable if your algo provider has multiple controls and good governance practices.