Will good practice make for better FX algos?


Algorithms have enjoyed persistent growth in the FX markets in recent years. And all the signs are for continued expansion, with more buy-side firms appreciating their ability to reduce trading costs and improve execution quality across an increasingly fragmented FX universe.

In a recent Greenwich Associates survey, almost 60% of buy-side institutions reported that algos had lowered their FX trading costs. This suggests the widening of algo use from sophisticated active traders and hedgers to the passive mainstream is well underway. Greenwich expects algo execution to rise from its current 10% share of all client-to-dealer FX volumes over the next two to three years, driven by regulatory initiatives and rising best execution expectations. “We anticipate meaningfully more volume will be executed via algos in the near term,” the report concludes.
As in the equities markets, growing FX algo use has been accompanied by concerns over systemic risk, and efforts by regulators to ensure algos are being used safely and effectively. With human-controlled trading giving way to faster and more automated methods, the regulatory framework is adjusting to account for new risks, prompted partly by events.
The FX market has not suffered a shock comparable to the US equities ‘flash crash’ of 2010. But sterling’s sudden 9% fall against the US dollar in October 2016 and the derailing of many algos by the unpegging of the Swiss franc in January 2015 have stiffened the resolve of regulators to strengthen safeguards.

Numerous initiatives – some voluntary, others mandatory – are shaping algo development and deployment, but perhaps the most significant is the UK Financial Conduct Authority’s (FCA) recent guidance statement, which both clarifies and builds on MiFID II.

FCA good practice guidelines

In February, UK regulators outlined their expectations of supervised firms in relation to algorithmic trading. The FCA’s ‘Algorithmic Trading Compliance in Wholesale Markets’ report identified areas where current practice falls short of firms’ obligations, specifically under MiFID II. In parallel, the Prudential Regulation Authority (PRA), responsible for prudential supervision of the UK financial sector, published for consultation proposals for the governance and risk management of algorithmic trading.

Taken together, the papers underline UK regulators’ intense supervisory focus on the potential systemic risks of increased algorithmic trading. Based on reviews of firms’ current arrangements, the FCA says firms must address failings in five areas: definition of algorithmic trading; development and testing processes; risk controls; governance and oversight; and market conduct (these priorities are largely reflected in the PRA’s governance framework).

Notably, the PRA’s draft supervisory statement will apply to all algorithmic trading activities “including in respect of unregulated financial instruments such as spot FX”. Whereas MiFID II does not apply explicitly to FX spot, many firms have decided to include all algorithms within projects to overhaul their algorithmic trading operations, in part because FX spot forms a reference input to instruments covered by MiFID II.
Algo users must address the gaps between reality and required practice quickly. PRA-regulated firms in scope of the Capital Requirements Regulation and UK branches of third-country firms will need to comply with the proposed PRA standards from 30 June 2018. Firms regulated by the FCA (including some not defined as investment firms under MiFID II) should not expect an extended period of grace, given the regulator’s practice of enforcing rules rigorously once it has specified best practice.

Underlining the need to minimise risks and avoid threats to market integrity, the FCA explicitly notes that the report’s examples of good practice are not the only ways to comply, but warns that the examples of poor practice show where firms “now need to do further work”. “The FCA document moves on from MiFID II by emphasising the importance of trading in a fashion that does not threaten market integrity. This is in line with IOSCO’s direction of travel and has started to be reflected in other major jurisdictions. In the long run, firms won’t be able to avoid their responsibility to preserve market integrity by leaving Europe,” says Nick Idelson, technical director and co-founder of specialist vendor and consultancy TraderServe.

ALGORITHM INVENTORY

Firms need to establish and maintain a comprehensive inventory of algorithmic trading strategies and systems across the firm. 

Good practice

Firms who retain a detailed inventory across the business, with documentation which clearly sets out:

  • the different types of algorithms, trading strategies and systems, including relevant operational objectives, parameters, and behavioural characteristics
  • a breakdown of the various components/algorithms contained within the strategy or system
  • the owner and those approved to operate the strategy or system
  • policies on the completion of development, validation and testing procedures, along with appropriate sign off from senior management and other relevant control functions
  • technical details of the coding protocols used during the development process and the overall system architecture
  • relevant market information, including regulatory and venue requirements
  • a comprehensive list of all the risk controls (including kill functionality) which apply to each strategy or system, including overall risk limits and those set within the individual components/algorithms

Poor practice

Firms who don’t have clearly defined inventories in place and are only able to provide generic high-level descriptions for their algorithms, strategies and systems.
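
The documentation points above amount, in effect, to a structured record per algorithm. As an illustration only, the sketch below shows one way such an inventory entry might be captured in code; the AlgoInventoryEntry structure, field names and example values are assumptions for this article, not part of the FCA guidance.

```python
# Illustrative only: a minimal, hypothetical structure for an algorithm
# inventory record covering the documentation points listed above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskControl:
    name: str          # e.g. "max_order_size", "price_collar", "kill_switch"
    limit: str         # human-readable limit or threshold
    scope: str         # "component" or "strategy-wide"

@dataclass
class AlgoInventoryEntry:
    algo_id: str                       # unique identifier
    name: str
    algo_type: str                     # e.g. "execution", "market-making"
    objective: str                     # operational objective and behaviour
    components: List[str]              # sub-algorithms within the strategy
    owner: str                         # accountable owner
    approved_operators: List[str]      # staff approved to run it
    sign_offs: List[str]               # senior management / control functions
    coding_protocols: str              # reference to technical documentation
    venues: List[str]                  # venue and regulatory requirements
    risk_controls: List[RiskControl] = field(default_factory=list)

# Hypothetical example entry
entry = AlgoInventoryEntry(
    algo_id="FX-TWAP-001",
    name="FX TWAP",
    algo_type="execution",
    objective="Slice parent order evenly over a defined interval",
    components=["scheduler", "child-order router"],
    owner="FX eTrading desk head",
    approved_operators=["trader_a", "trader_b"],
    sign_offs=["Head of eTrading", "Risk", "Compliance"],
    coding_protocols="doc://architecture/fx-twap",
    venues=["PrimaryECN", "BankStream"],
    risk_controls=[RiskControl("kill_switch", "manual + automatic", "strategy-wide"),
                   RiskControl("max_order_size", "USD 50m notional", "component")],
)
```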

DEVELOPMENT & TESTING FRAMEWORK

All firms engaged in algorithmic trading need to maintain an appropriate development and testing framework, which is consistently applied across all relevant aspects of the business.

Good practice

Firms who maintain a robust development and testing process supported by:

  • appointing a project lead with responsibility to oversee the entire development and testing process and ensure consistency
  • breaking down the development process into separate phases in which firms are able to establish independent checks and balances at each stage, particularly where subjectivity is used
  • ensuring thorough due-diligence is completed at the start (and at key milestones) of the process, to ensure that any conduct risks are effectively assessed and suitable risk control thresholds are established
  • encouraging a culture of open communication between different business units, while maintaining a clear separation of roles and independent reviews; this is often achieved by having a separate team verify and check the output and quality of code
  • ensuring all tests are recorded in the development plan and included in the information provided to the appropriate decision makers for formal approval and sign-off

Poor practice

Firms who don’t consistently apply their development and testing process across all aspects of their business. For example, different trading desks and/or business lines use different methodologies.

Defining and documenting algorithmic trading processes within a coherent governance framework is a considerable challenge

MiFID II: Beyond best execution

MiFID II’s best execution rules (RTS 27 and 28) are already having a significant impact on buy-side trading practices, especially in markets such as FX where execution performance has traditionally been subject to less intense scrutiny. Increasingly, executions conducted by institutional trading desks are being accorded the same high level of attention regardless of asset class. As such, tools and techniques long established in the equities markets, such as algos and transaction cost analysis (TCA), are becoming more commonplace in FX and fixed income as firms look to demonstrate compliance with best execution principles.

“As well as regulatory factors, pressure from stakeholders is also driving greater buy-side adoption of algorithms and analytics in pursuit of best execution in the FX markets. Activist shareholders and senior managers expect trading desks to minimise performance drag through data-driven decision-making. Together with increasing liquidity fragmentation, these factors will only increase use of FX algos,” says Ollie Jerome, co-founder of BestX, an analytics and technology provider.

But MiFID II’s RTS 6 and 7 are the key sections when it comes to how firms should ensure their algorithms are sufficiently well designed, tested and documented for use in live market conditions. These build on ESMA’s 2012 guidelines on systems and controls in an automated trading environment, as well as the Market Abuse Regulation, which took effect in 2016. To ensure algos do not cause disorderly markets (and can be stopped quickly if they do), RTS 6 and 7 stipulate appropriate systems and controls to support safe interaction with trading venues, as part of a formalised governance framework. Within that framework, compliance managers and senior staff must have the understanding and operational ability to supervise algorithmic trading operations effectively, including use of automated surveillance systems, pre- and post-trade controls, and real-time monitoring. If all these checks and monitoring activities fail, firms must be able to deploy ‘kill functionality’ and cancel unexecuted orders in the event of market disorder. All these arrangements are subject to annual self-assessment and publication of a validation report.
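
As a rough illustration of the kind of pre-trade controls and kill functionality described above, the sketch below checks each child order against simple limits and cancels all resting orders if trading must be halted. It is a minimal sketch under assumed limit names and a hypothetical venue interface, not a reference implementation of RTS 6.

```python
# A minimal, hypothetical sketch of pre-trade controls plus kill functionality.
# The limits, Order fields and the venue_session interface are assumptions.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str        # "buy" or "sell"
    quantity: float  # base-currency amount
    price: float

class PreTradeControls:
    def __init__(self, max_order_qty: float, max_notional: float,
                 price_band_pct: float, max_msgs_per_sec: int):
        self.max_order_qty = max_order_qty
        self.max_notional = max_notional
        self.price_band_pct = price_band_pct
        self.max_msgs_per_sec = max_msgs_per_sec
        self._msgs_this_second = 0

    def check(self, order: Order, reference_price: float) -> bool:
        """Return True only if the order passes all pre-trade checks."""
        if order.quantity > self.max_order_qty:
            return False
        if order.quantity * order.price > self.max_notional:
            return False
        # Reject orders priced outside a band around the reference price
        band = reference_price * self.price_band_pct
        if abs(order.price - reference_price) > band:
            return False
        # Crude message-rate throttle (counter reset elsewhere once per second)
        self._msgs_this_second += 1
        if self._msgs_this_second > self.max_msgs_per_sec:
            return False
        return True

def kill(venue_session, live_order_ids):
    """Kill functionality: cancel all unexecuted orders and stop new flow."""
    for order_id in live_order_ids:
        venue_session.cancel(order_id)   # hypothetical venue API
    venue_session.disable_new_orders()   # hypothetical venue API
```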

In addition, change management procedures must be in place to fully document any material changes to how algorithms work, with separate testing and production environments, as well as the ability of the compliance team to monitor algo performance in real time, separately from the algorithmic trading department. Much of this is current practice, in theory. But the combined effect of raising the bar and adding new requirements poses some stiff challenges, including the following:

Defining / documenting – According to Charles Mo, GreySpark’s head of trading solutions and infrastructure, defining and documenting algorithmic trading processes within a coherent governance framework is a considerable challenge, despite overlaps between equities and FX algos. At base level, firms have been gradually filling in the gaps between policies drafted centrally to meet regulatory requirements and the procedures and controls created and implemented locally.

“Particularly where equities was an established business, from an algo perspective, many banks already had deep if incomplete levels of documentation, but the approach was not necessarily scalable across other asset classes. In many cases, the existing documentation has not provided the head of risk with sufficient transparency and confidence that controls implemented on the ground are fully aligned with the firm’s policies. Those gaps have to be filled,” says Mo.

MiFID II requires firms to certify and categorise different types of algorithms, because a wide range of applications and programmes in the trading process falls under its broad definition, with each category requiring different levels of documentation. “Defining and classifying algos is a big piece of work,” says Dan Simpson, head of research at regulatory consultants JWG. “Even though MiFID II’s initial definition of an algo was refined, it still encompasses a wide range of automated tools. Firms have to document every algorithm within scope and even explain why they believe other tools are outside of scope.”

Governance framework – As the text of MiFID II and the FCA report attempt to emphasise, compliance is not a one-time push. Regulated firms must ensure algorithms are developed and deployed in a safe and compliant manner on an ongoing basis. One approach, advocated by GreySpark’s Mo, is to institute an ‘electronic trading oversight council’ with a charter signed off by board members. This not only ensures standardisation of approach to documentation, testing and deployment across asset classes and geographies, but also supports senior level visibility and sign-off on new releases and material changes, as well as bedding in the cultural shifts needed to address market conduct obligations and cement new processes in the long term.

“Firms should focus less on point-in-time MiFID II compliance than on ensuring that behavioural change is supported through implementation of appropriate principles, procedures and policies, including on remuneration. Accountability and disclosure must become intrinsic to the BAU activities of the firm. Only then can the risks be truly mitigated and the very real threat of jail avoided,” says Mo.

JWG’s Simpson says MiFID II’s ongoing monitoring requirements – effectively necessitating the establishment of a separate team within the compliance function to track live algo performance – could be particularly onerous. “It’s not just a matter of cost. Firms are finding it difficult to recruit staff with the necessary skills: you need intimate knowledge of the firm’s algorithms and experience of market conditions,” he says.

There is potential for the structured governance framework around algo trading demanded by MiFID II to constrain innovation and customisation. A precedent was set in Hong Kong when the Securities and Futures Commission obliged users and providers to attest to their competence in automated trading. This quickly led to a more structured approach to customisation, with client-side tweaks prioritised according to urgency. Simpson suggests MiFID II may further cramp customisation. “MiFID II requires that every algo has its own individual ID, with a record of all material changes, including testing and deployments. As such, firms may limit the frequency of tweaks as the process will be burdensome,” he says.
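
A simple illustration of what such a per-algo change record might look like is sketched below; the structure, field names and example values are assumptions for this article, not a MiFID II specification.

```python
# Hypothetical sketch of an append-only change log keyed by algo ID,
# recording material changes together with their testing and deployment details.
from dataclasses import dataclass
from datetime import date
from typing import Dict, List

@dataclass
class ChangeRecord:
    change_date: date
    description: str      # what materially changed
    tested_in: str        # test environment used
    test_result: str
    approved_by: str
    deployed_on: date

change_log: Dict[str, List[ChangeRecord]] = {}

def record_change(algo_id: str, record: ChangeRecord) -> None:
    """Append a material change for the given algo ID; never overwrite history."""
    change_log.setdefault(algo_id, []).append(record)

# Hypothetical example
record_change("FX-TWAP-001", ChangeRecord(
    change_date=date(2018, 5, 1),
    description="Tightened child-order price collar",
    tested_in="UAT market-disorder replay",
    test_result="pass",
    approved_by="Head of eTrading",
    deployed_on=date(2018, 5, 8),
))
```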

Testing – Whilst MiFID II demands thorough and separate algo testing, the requirement to conduct and self-certify market disorder testing is more challenging still. Testing functionality and conformance with venues has typically been done via replay facilities. But these do not create the dynamic trading environment, nor the stressed market conditions (including latency fluctuations), needed to test how an algorithm responds to extremely volatile trading conditions.

The FCA cites as good practice the use of dynamic testing environments that examine not only how an algo behaves in existing market disorder circumstances but also whether it further contributes to market disorder, including in combination with other market participants. Further, it specifies that “market conduct considerations need to be a vital part of the algorithm development process”, specifically noting that “development procedures which predominantly focus on operational effectiveness” are insufficient. The FCA states that it is poor practice for firms to be unable to demonstrate the potential impact of their algorithmic trading strategies on market integrity. “It’s no defence to say that you didn’t think your algo could cause or contribute to market disorder. Making sure it won’t is a considerable undertaking,” says TraderServe’s Idelson.

The supply of such facilities is extremely limited, meaning the market as a whole has not got to grips with this requirement. Many banks are backtesting algorithms and setting up controls to keep them stable but reserving the ability to deploy kill functionality if necessary. Because killing an order can be extremely disruptive, Mo advocates the development of mechanisms to intelligently control algorithms, detecting their closeness to a breach to avoid having to take the ultimate sanction. “Before long, we’ll move to a new paradigm in which the algorithm will be coded to include its own control engine, using a standard library set that can be applied across the whole suite,” he says.
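
The idea of detecting an algorithm’s closeness to a breach before resorting to a kill switch could look something like the sketch below, which throttles and then pauses an algo as its exposure approaches a limit. The thresholds and action names are assumptions used purely for illustration.

```python
# Hypothetical sketch of a limit-proximity controller: scale back trading as
# exposure approaches a risk limit, and only kill as a last resort.
def control_action(exposure: float, limit: float,
                   soft_threshold: float = 0.8,
                   hard_threshold: float = 0.95) -> str:
    """Return the action to take based on how close exposure is to its limit."""
    utilisation = abs(exposure) / limit
    if utilisation >= 1.0:
        return "kill"            # breach: cancel unexecuted orders, stop the algo
    if utilisation >= hard_threshold:
        return "pause"           # stop placing new child orders, keep working fills
    if utilisation >= soft_threshold:
        return "throttle"        # reduce participation rate / order sizes
    return "normal"

# Example: 92% of the limit used, so throttle rather than kill
print(control_action(exposure=46_000_000, limit=50_000_000))  # "throttle"
```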

JWG’s Simpson expects the FCA to take a relatively pragmatic line too, at least at first. “The FCA document lays the groundwork for a future thematic review to provide recommendations and clarifications on its expectations. Depending on how requirements are being met, the FCA might provide further guidance within the next year or so, but I would not expect any fines just yet.”

SENIOR MANAGEMENT

Senior management should be able to articulate the standards set for development, testing and on-going monitoring of their algorithmic trading strategies.

Good practice

Firms where algorithmic trading is fully understood by senior management, who play a key role in providing challenge across the business.

For example, where senior management are involved throughout the development and testing process and actively seek to understand the potential market conduct implications.

Poor practice

Firms where senior managers are not able to demonstrate the required knowledge to be able to provide effective challenge to front line algorithmic trading operations.

Source: FCA – Algorithmic Trading Compliance in Wholesale Markets

Regulated firms must ensure algorithms are developed and deployed in a safe and compliant manner on an ongoing basis.

Toward transparency

Once the necessary policies, processes and controls have been put in place for the development, testing and deployment of FX algorithms, GreySpark’s Mo says sell-side firms should look to aggregate, leverage and share the resultant data and documentation between internal and external stakeholders. A streamlined central repository for algorithmic trading may support real-time monitoring of algorithm performance, including risk management data such as warnings of proximity to limits. It might also flag when an algo is due for periodic testing, documenting its development and testing history as part of the change management log required under MiFID II.
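
One simple way such a repository might flag algos due for periodic re-testing, or drifting towards their risk limits, is sketched below; the record structure, review interval and proximity threshold are assumptions for illustration, not a description of any vendor’s product.

```python
# Hypothetical sketch of a central repository check: flag algos whose periodic
# test is overdue or whose current risk utilisation is close to its limit.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List

@dataclass
class RepoRecord:
    algo_id: str
    last_tested: date
    risk_utilisation: float   # current exposure as a fraction of its limit

def repository_alerts(records: List[RepoRecord],
                      today: date,
                      review_interval: timedelta = timedelta(days=365),
                      proximity_threshold: float = 0.9) -> List[str]:
    alerts = []
    for rec in records:
        if today - rec.last_tested > review_interval:
            alerts.append(f"{rec.algo_id}: periodic re-testing overdue")
        if rec.risk_utilisation >= proximity_threshold:
            alerts.append(f"{rec.algo_id}: within {1 - rec.risk_utilisation:.0%} of risk limit")
    return alerts

# Hypothetical example: one algo, last tested 18 months ago, near its limit
print(repository_alerts(
    [RepoRecord("FX-TWAP-001", date(2017, 1, 15), 0.95)],
    today=date(2018, 6, 30),
))
```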

Such a facility might also help algo providers to meet their obligations under the voluntary FX Global Code of Conduct, which includes guidance on the supply of algorithmic trading services among its principles of good practice. Specifically, the Code calls for service providers to supply clients with sufficient data to understand FX algos and evaluate their appropriateness to the user’s execution strategy.

“The global code has had wide adoption already and we expect all major sell-side firms to have signed up by the summer. It sets a pretty high level for disclosure of information necessary for clients to evaluate algo performance, including on fees, and whether the provider is acting as agent or principal,” explains James Sinclair, executive chairman of FX platform operator MarketFactory, and co-chair of the Code’s Examples Working Group.