Build versus Buy: Exploring the relative merits of FX algo trading frameworks

August 2023 in Algo Tech

Exploring the relative merits of algorithmic FX trading infrastructure frameworks

By Martin Zinkin and Jeff Leal, Co-Founders of QubeAlgo

The market penetration of electronic trading has increased to the point that most capital market participants now require at least some electronic trading capability. The trend is likely to continue, and the level of sophistication and automation needed is also likely to increase. Buy-side participants’ use of electronic trading typically focuses on execution (particularly improving execution strategies and measuring execution performance) and automation of alpha strategies. Sell-side participants need to provide liquidity to their clients across an increasing range of products, risk manage and analyse resulting flows, and offer client services such as algorithmic execution and portfolio trading. Both need to use technology to analyse trading and market data, to offer trading and data tools to voice traders, and to increase automation levels across the business to improve efficiency and reduce costs. Where a business is successful in this endeavour, it further increases pressure on competitors to follow suit.

Organisations are faced with a strategic decision – should they attempt to build the technology they need in house, buy it from one or more vendors, or pursue a mixed strategy? In this article we discuss some of the issues with each approach – some obvious, some less so – and highlight factors that should inform the decision.

For algo providers, technology is often a key differentiator

For buy-side HFT firms and the largest sell-side liquidity and algo providers, technology is often a key differentiator and most or all of it is usually built in house – both to safeguard IP and because the technology itself constitutes an important part of the value proposition. For most participants, however, technology is an enabler: an important part of a wider value proposition, rather than a unique differentiator. These businesses can make pragmatic decisions about what to buy and what to build.

Sophisticated electronic trading systems are not easy to build. They are fundamentally event-driven; the data is typically high frequency, asynchronous, and disparate; and data volumes are high. Managing the complex internal state of models and algorithms, while handling race conditions and threading, is unavoidable but difficult and error-prone. Low latency, high throughput, and high availability are frequent requirements. Finally, errors in systems able to commit capital without intervention – whether by executing orders on external markets or by trading bilaterally – can result in very large financial losses, and, because of the features above, it is often very difficult to reason about the correctness of models and algorithms. This means that where there is a need to iterate and innovate, safety and testability are critical.
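As a minimal illustration of that state-management problem – a sketch under assumed, simplified event types, not a description of any production design – the pattern below funnels all inbound events through a single queue consumed by one thread, so the algo's internal state never needs a lock:

```python
import queue
from dataclasses import dataclass

# Hypothetical event types for illustration; real systems carry far more detail.
@dataclass
class Quote:
    symbol: str
    bid: float
    ask: float

@dataclass
class Fill:
    symbol: str
    qty: float    # signed: positive means bought
    price: float

class SerializedAlgo:
    """All events funnel through one queue and are handled on a single
    thread, so the mutable state below never needs locking."""

    def __init__(self):
        self._events = queue.Queue()
        self._position = {}    # symbol -> signed quantity
        self._last_quote = {}  # symbol -> most recent Quote

    def on_event(self, event):
        # Safe to call from any feed or gateway thread: it only enqueues.
        self._events.put(event)

    def run(self):
        # The single consumer thread: deterministic, lock-free state updates.
        while True:
            event = self._events.get()
            if event is None:  # sentinel shuts the loop down
                break
            if isinstance(event, Quote):
                self._last_quote[event.symbol] = event
            elif isinstance(event, Fill):
                self._position[event.symbol] = (
                    self._position.get(event.symbol, 0.0) + event.qty)
            # ...decision logic would run here, in a well-defined order
```

Serialising events in this way also makes behaviour reproducible, which is what makes the replay-based testing discussed later practical.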

The obvious attractions of buying software are faster time-to-market, a likely lower initial investment, and a lower likelihood of complete project failure, since in most cases the software is already in production with other clients with similar needs. Vendor technology also requires less specialised internal personnel to support than an in-house build, and many vendors offer cloud deployment of their systems, which can further reduce internal requirements for both hardware and personnel. The vendor will usually manage connectivity to external systems and upgrade interfaces when needed.

The downsides are primarily loss of flexibility, reduced ability to innovate, dependence on the vendor’s development and release cycle, long-run costs, and vendor lock-in. Often a vendor selected on the basis of fit to immediate business needs in one area – for example, trader execution tools – is a poor match for later needs in other areas or asset classes, for example client liquidity provision. It may then be necessary to add further vendors to the mix, increasing the cost base, requiring additional internal expertise, and limiting the ability to develop internal synergies across business areas and asset classes.

‘Open’ vendor systems – those that allow clients to integrate their own components – offer a middle ground, and are a better fit where business needs are likely to evolve significantly over time. The issues with these systems are primarily cost and vendor lock-in: having spent significant time and resources developing components that integrate tightly with a particular vendor, any later move away is likely to be difficult, time-consuming, and expensive. Furthermore, while these systems do make deployment of components built in house less difficult and resource intensive, they typically do not help in the development of those components.

Management of the complex internal state of models and algorithms is difficult and error-prone

Building most functionality in house can avoid these issues, and offers the prospect of a system focused precisely on business needs, with the ability to rapidly develop new functionality while maintaining complete control over the IP and code base, and avoiding dependence on any key vendor. However, it presents significant challenges. Sophisticated systems are not easy to build, so an experienced team is necessary – and even then time and cost overruns are common, the resulting systems are often less flexible and performant than anticipated, and project failure is a real risk. Most developers lack deep cross-asset and cross-functional experience, and development is often driven by business users in a single asset class, so even when an attempt is made to design for the future, the outcome is often less general than anticipated. An in-house build also requires a long-term commitment: unless the system continues to evolve, key developers will drift away and system quality will gradually deteriorate, with key-man risk becoming a real issue.

Key decisions

In deciding how best to meet electronic trading technology needs, the first requirement is a realistic assessment of immediate and future business needs – both in terms of asset class coverage and functionality – and an understanding of the business’s source of added value. If the initial project specification is too focused on the immediate need in a single asset class, a system may be selected that is too narrow – leading to selection of a vendor or an in-house build that cannot satisfy future needs.

Conversely, if future business needs are overestimated, a system may be specified that is suboptimal for the actual business. For example, for a business focused on servicing clients in vanilla products, the ability to rapidly respond to client demands for new product liquidity, execution capabilities, or analysis tools may be more important than ultra-low latency performance or extensibility to OTC derivatives.

Where the business need is well understood and future extensibility is less important, a complete vendor solution is likely a good option – and for businesses with limited in-house technology and quant resources, a vendor may well be the only feasible choice. In these cases a straightforward selection based on functionality, performance, and cost works well. For larger businesses, or where future extensibility is important, the choice is likely to involve a mix of vendors and in-house build.

In most cases, market connectivity (that is, adaptors to exchanges and ECNs for market data and trading) is likely best sourced from a vendor. Except for ultra-low latency funds, a vendor connectivity solution is likely to outperform an in-house build in cost, performance, and reliability. Businesses should simply ensure that they interface with their connectivity vendors in a way that avoids lock-in, and take care to avoid contracts that permit large fee increases or limit the flexibility to add and remove vendor components and functionality at reasonable cost. Where a business opts for an open vendor system with additional self-built components, efforts should be made to retain the option to migrate to an alternate vendor or to an in-house build, for example by introducing abstraction layers between in-house components and vendor middleware and data services.
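As a minimal sketch of such an abstraction layer – the vendor SDK, session object, and method names below are all hypothetical and do not refer to any real product – algo code can be written against an in-house interface, with all vendor-specific calls confined to a thin adaptor:

```python
from typing import Callable, Protocol

class MarketDataFeed(Protocol):
    """In-house interface: algo and pricing code depend only on this,
    never on a vendor SDK directly."""
    def subscribe(self, symbol: str,
                  on_quote: Callable[[str, float, float], None]) -> None: ...

class VendorXFeedAdaptor:
    """Thin adaptor around a hypothetical vendor session object.
    Migrating vendors means rewriting this class, not the algo code."""

    def __init__(self, vendor_session):
        # 'vendor_session' and its 'subscribe_quotes' method are assumed
        # purely for illustration.
        self._session = vendor_session

    def subscribe(self, symbol: str,
                  on_quote: Callable[[str, float, float], None]) -> None:
        # Translate the vendor's callback shape into the in-house one.
        def bridge(tick):
            on_quote(symbol, tick.bid, tick.ask)
        self._session.subscribe_quotes(symbol, bridge)
```

Because trading logic accepts any MarketDataFeed, a second adaptor – or an in-house feed handler – can later be substituted without touching the algos themselves.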

Except for ultra-low latency funds, a vendor connectivity solution is likely to outperform an in-house build in cost, performance, and reliability

Where a business opts for either an in-house build or an open vendor system because there is a need to develop at least models and algorithms internally, thought needs to be given to how those components will be developed and tested. An effective framework for the development, deployment, and monitoring of algorithmic applications should provide the flexibility to adapt, encourage collaboration, and instil confidence that solutions will behave as expected in a production setting. Development and visualisation tools, clear interfaces to live and historical data, and auditability are among the features needed to let users build sophisticated algorithms; they also play an important role in providing the transparency and control necessary for risk management and compliance. Ultimately, the path towards an effective algorithmic trading framework doesn’t end with the initial decision: it entails the ongoing commitment to adapt and improve its capabilities alongside evolving business goals and market conditions. The importance of this flexibility should not be underestimated.
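One concrete testability technique – offered as an illustrative sketch rather than a description of any particular framework, with all names hypothetical – is deterministic replay: recorded events are driven through the exact code path used in production, so behaviour can be asserted in automated tests.

```python
class ReplayableAlgo:
    """Toy algo that consumes all inputs through a single on_event method,
    so a test can replay a recorded stream through the production path."""

    def __init__(self):
        self.position = 0.0  # signed position, schema is illustrative

    def on_event(self, event):
        kind, qty = event  # e.g. ("fill", 1_000_000.0)
        if kind == "fill":
            self.position += qty

def test_replay_is_deterministic():
    # In practice this stream would come from a production capture file.
    recorded = [("fill", 1_000_000.0), ("fill", -250_000.0)]
    algo = ReplayableAlgo()
    for event in recorded:
        algo.on_event(event)  # identical code path to production
    assert algo.position == 750_000.0
```

Because the algo touches the outside world only through events, the same harness extends naturally to regression tests against full recorded trading sessions.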

Our product at QubeAlgo is targeted at these issues – which we plan to cover in a later article.

Ultimately, the path towards an effective algorithmic trading framework entails the ongoing commitment to adapt and improve its capabilities