Innovation in execution technology has been rapid, but until recently the tools required to deploy and manage these new capabilities successfully have lagged far behind. Algo strategies that may be considered quite simple in their approach, "commoditised" even, are in fact highly sophisticated execution tools that can vary significantly between providers. Moreover, the choice of strategy is only one of a number of key decisions that need to be made as part of the broader execution process. In this article we'll review some of these factors and consider ways in which they may be addressed.
It is well understood that different participants have different motivations for starting to use algos: reducing transaction costs in comparison to trading in full on a "risk transfer" price; achieving a representative average over an interval; or perhaps taking a directional view and seeking to average into (or out of) a position.
This informs the benchmarks that are chosen to be measured against, and consequently the strategies appropriate to achieving or outperforming those benchmarks. This seems a simple exercise, but we should consider what the implications of this benchmark selection may be. Consider the TWAP, perhaps the most widely used algo strategy. A TWAP is designed to replicate a simple average of market prices over the trading interval, effectively seeking to minimise tracking error against that average. If we achieve a good result against the TWAP benchmark, we might assume we have delivered a positive outcome.
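To make the benchmark concrete, the short Python sketch below shows one way TWAP performance might be measured, assuming evenly spaced mid-price observations over the interval and a simple list of our own fills. The function names and figures are illustrative only, not any particular provider's methodology.

```python
# Minimal sketch: measuring an execution against a TWAP benchmark.
# Assumes evenly spaced mid-price observations over the trading interval
# and (fill_price, fill_quantity) tuples for our own child orders.
# All names and numbers are illustrative, not a provider's actual method.

def twap_benchmark(mid_prices):
    """Simple average of observed mid prices over the interval."""
    return sum(mid_prices) / len(mid_prices)

def average_fill_price(fills):
    """Quantity-weighted average price actually achieved."""
    total_qty = sum(qty for _, qty in fills)
    return sum(price * qty for price, qty in fills) / total_qty

def slippage_bps(fills, mid_prices, side="buy"):
    """Performance versus the TWAP benchmark in basis points.
    Positive means we did better than the benchmark."""
    benchmark = twap_benchmark(mid_prices)
    achieved = average_fill_price(fills)
    signed = benchmark - achieved if side == "buy" else achieved - benchmark
    return 1e4 * signed / benchmark

# Hypothetical example: buying EUR/USD over a short interval
mids = [1.1000, 1.1002, 1.1005, 1.1003, 1.1006, 1.1004]
fills = [(1.1001, 2_000_000), (1.1004, 2_000_000), (1.1005, 1_000_000)]
print(f"TWAP benchmark: {twap_benchmark(mids):.5f}")
print(f"Achieved:       {average_fill_price(fills):.5f}")
print(f"Performance:    {slippage_bps(fills, mids):+.2f} bps")
```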
What would we say if we were to discover that the TWAP created significant market impact during the interval? Superficially our performance might look good, but ultimately we are measuring ourselves against our own activity in the market. For a trader seeking to exploit a favourable move this would be a very poor outcome, as the algo has reduced her opportunity to benefit from the averaging effect; and even for a trader merely seeking to track "the market", can we honestly say we have achieved a good outcome for our clients if we see significant market impact, albeit alongside strong benchmark performance?
Some key decisions need to be made
So we can see there are subtleties here that we should be aware of, which influence a number of key decisions we must then make. Which providers should we consider using? What kind of liquidity is suitable for our trading objectives: should we be seeking to access external market liquidity, or are we better off trading with a provider that can demonstrate an ability to internalise? How much should we be involved in the liquidity management process? Should we rotate our panel of providers and strategies, and if so how? And perhaps most importantly of all, how should our traders interact with these strategies? Returning to our TWAP example, its superficial simplicity again belies some complex decisions, particularly around timing. Setting an appropriate interval for a TWAP algo is not an easy task: too short and we risk creating unnecessary market impact; too long and we take on unnecessary market risk.
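The timing trade-off can be seen in a simple sketch of an evenly sliced schedule. The helper function and the 50 million parent order below are hypothetical; the point is only that compressing the same order into a shorter window raises the participation rate, while stretching it out lengthens our exposure to market moves.

```python
# Illustrative only: slicing a parent order into equal child orders across
# a chosen interval. The figures are invented for the sake of the example.

from datetime import datetime, timedelta

def twap_schedule(parent_qty, start, interval_minutes, n_slices):
    """Return (timestamp, child_qty) pairs for an evenly sliced TWAP."""
    child_qty = parent_qty / n_slices
    step = timedelta(minutes=interval_minutes / n_slices)
    return [(start + i * step, child_qty) for i in range(n_slices)]

start = datetime(2024, 1, 15, 9, 0)

# The same 50m parent order over two different intervals
short_window = twap_schedule(50_000_000, start, interval_minutes=10, n_slices=10)
long_window = twap_schedule(50_000_000, start, interval_minutes=60, n_slices=10)

# In the 10-minute version a 5m child order hits the market every minute;
# in the 60-minute version the same child orders arrive only every 6 minutes,
# lowering the participation rate but extending exposure to market risk.
for ts, qty in short_window[:3]:
    print(ts.strftime("%H:%M"), f"{qty:,.0f}")
```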
As algos become more complex and parameter-driven, this task becomes progressively harder. Many algos have an urgency setting which the user can increase or decrease depending on their tolerance for both market impact and market risk. Unsurprisingly these settings differ between providers and are calibrated by individual currency pair. We may now be faced with the unenviable task of trying to compare provider A's "medium" setting against provider B's "number 3" urgency. We have to make choices about which algo to use, and at what speed. We may also have instructions not to trade beyond a certain level, and perhaps even to opportunistically consume liquidity if the market moves in our favour by a certain degree.
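One pragmatic way to keep such comparisons consistent is to record every order against a common, normalised urgency scale. The mapping below is entirely hypothetical and would need to be calibrated from each provider's documentation and observed behaviour; it simply illustrates the bookkeeping involved.

```python
# Hypothetical sketch: normalising provider-specific urgency scales onto a
# common 0-1 scale so settings can be compared and recorded consistently.
# The provider labels and numbers are invented, not real calibrations.

URGENCY_MAPS = {
    "provider_a": {"low": 0.2, "medium": 0.5, "high": 0.8},   # named scale
    "provider_b": {1: 0.1, 2: 0.3, 3: 0.5, 4: 0.7, 5: 0.9},   # numbered scale
}

def normalised_urgency(provider, setting):
    """Map a provider-specific urgency setting onto the common scale."""
    try:
        return URGENCY_MAPS[provider][setting]
    except KeyError:
        raise ValueError(f"Unknown urgency {setting!r} for {provider!r}")

# Under this (invented) calibration, provider A's "medium" and
# provider B's "number 3" land on the same point of the scale.
print(normalised_urgency("provider_a", "medium"))  # 0.5
print(normalised_urgency("provider_b", 3))         # 0.5
```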
Addressing these questions
Suddenly what appears to be a simple, seemingly automated process has become a very complex one with many moving parts. How do we even begin to address these myriad questions? This is where analytical tools, powered by rich, high-quality data, come into their own. A number of innovative companies have emerged in recent years that are helping their clients start to address some of these issues. It's important to recognise that this goes far beyond what one might think of when hearing the phrase "TCA"; satisfying one's regulatory obligations is of course of paramount importance, but meaningful data analysis extends much further than that: it is a commercial exercise as much as a regulatory one.
These tools are designed to assist throughout the algo process – post-trade analysis can be used to assess and compare strategies, providers, liquidity and trader involvement, while pre-trade capabilities can provide important decision support tools to traders before they interact with the market. They can give us a better understanding of how algos perform and how we can best utilise them.
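As a simple illustration of the post-trade side, the sketch below groups completed orders by provider and strategy and compares average slippage against a chosen benchmark. The records and figures are invented, and a real analysis would also control for currency pair, order size, time of day and prevailing market conditions.

```python
# Illustrative post-trade comparison: average benchmark slippage by
# provider and strategy. All records below are invented examples.

from collections import defaultdict
from statistics import mean

# Each record: (provider, strategy, slippage in bps vs. the chosen benchmark)
orders = [
    ("provider_a", "twap", +0.4),
    ("provider_a", "twap", -0.2),
    ("provider_b", "twap", +0.1),
    ("provider_b", "adaptive", +0.9),
    ("provider_a", "adaptive", +0.3),
]

by_group = defaultdict(list)
for provider, strategy, slippage in orders:
    by_group[(provider, strategy)].append(slippage)

for (provider, strategy), slips in sorted(by_group.items()):
    print(f"{provider:11s} {strategy:9s} "
          f"avg {mean(slips):+.2f} bps over {len(slips)} orders")
```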
In any conversation about algos these days it is only a matter of time before one hears the words "machine learning" and "artificial intelligence". Without question these play a hugely important role, in particular for providers of algos, who rely on them to optimise both their execution strategies and the liquidity with which they interact. I would, however, view the recent development of "algo wheels" (the automated selection and calibration of different strategies) with some caution.
Before trying to automate anything we should first be confident that we have a solid understanding of the tools we are using and how they may behave – otherwise we risk trying to come up with an answer to a question we don’t yet fully comprehend.