Anna Reitman

Taking a more systematic approach to FX algo selection

June 2023

As firms increasingly turn to electronic execution in FX, making the right decisions about selecting and deploying algorithms is, as Anna Reitman discovers, a combination of becoming knowledgeable about market technologies and tapping into the lessons of traditional trading wisdom.

For those interested in becoming familiar with the world and language of FX algos, there are lessons to be learned from other asset classes that are ahead of the curve.

Mark Goodman, head of Electronic Execution Services in UBS’ FRC division, said: “Algo usage becomes a wave that everybody rides, and if we look at the equity world you see some slight pull back from that at some point, which suggests that people get on board and overuse a new solution.”

Developing an algo usage, or non-usage, strategy is ultimately a trade-off between time risk and market impact, Goodman explained, pointing to events like Brexit or the recent US election as examples. In such volatile conditions, clients may favour immediate risk transfer over “wearing” the time risk.

There’s also the maturity of the algorithm market itself to consider, he added: “Algos are still relatively new, so in times of stress people go back to the old ways of trading.”

But the market is maturing, and along with it a swathe of performance measurement techniques that can help firms decide whether an algorithm is the best route to best execution.

UBS analysis of the performance of the bank’s own algorithms is focused on comparisons to the available risk price at point of trade. In other words: would a client have got a better result using risk transfer or an algorithm?

“That is a client’s first stage of selection: should they use an algorithm or not,” said Goodman.

If the answer is “yes”, then his next recommendation to clients is to engage with the broker, because deciding which algorithm to deploy based on its underlying behaviour can be confusing.

Goodman noted that an average client uses two or three algorithms, out of a dozen available. And it’s a choice that depends on the “spectrum of market impact versus time risk” those clients are trying to achieve.

A beta-focused, long-only manager is unlikely to take a directional market view and therefore has no need to pay market impact; in that case they will spread the trade out over time using a TWAP or low-participation algorithm. Where a directional view is taken, however, the client will want to reduce the potential to lose alpha over time and will therefore choose a more aggressive algorithm, even though it will have market impact.

“Trying to find the optimal point on that curve is actually what you are doing when you are setting the algorithm and the parameters around it,” Goodman said.
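The impact-versus-time-risk spectrum Goodman describes can be sketched in code. The following is a minimal, hypothetical illustration, not any provider's actual logic: a TWAP-style schedule slices the parent order evenly, wearing time risk to limit impact, while a front-loaded schedule pays more impact up front to reduce time risk. The function names and the decay parameter are assumptions for illustration.

```python
# Illustrative only: two ways to slice a parent order into child orders,
# showing the market-impact vs time-risk trade-off discussed above.

def twap_slices(parent_qty: float, n_slices: int) -> list[float]:
    """Spread the order evenly over time: low impact, high time risk."""
    return [parent_qty / n_slices] * n_slices

def front_loaded_slices(parent_qty: float, n_slices: int,
                        decay: float = 0.5) -> list[float]:
    """Execute most of the order early: more impact, less time risk.
    Each slice is `decay` times the size of the previous one."""
    weights = [decay ** i for i in range(n_slices)]
    total = sum(weights)
    return [parent_qty * w / total for w in weights]

if __name__ == "__main__":
    print(twap_slices(100e6, 5))          # five equal child orders
    print(front_loaded_slices(100e6, 5))  # largest child orders first
```

Choosing the parameters of a real algo amounts to picking a point between these two extremes, as Goodman notes below.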

He added that understanding which algos are best to use has little to do with how they are named, particularly when that name is reflected in a benchmark to measure performance. TWAP, for example, is a fairly common benchmark, but as an algorithm it can be used to target other benchmarks as well, such as arrival price.


Mark Goodman

“…the better informed the client is by the broker about how the algos behave, what different parameters will do to the decision, the better outcome the client will have.”

Compared to other asset classes, choosing benchmarks in FX is somewhat more complicated. Writing for e-Forex, Pete Eggleston, co-founder of BestX, a firm that provides independent transaction cost analytics and technology, explained that the OTC nature of the FX market has resulted in a dearth of independent benchmarks that are standardised and recognised across the industry.

And the FX market does not have a “National Best Bid and Offer” concept as in the equity markets. As such, classifying the currency transaction as “alpha” or “beta” is a key first step in benchmark selection, and hence subsequent algo selection.

For trade types that can be classified as alpha, typical benchmarks include arrival price and risk transfer price, with typical usage by investors such as CTAs, currency funds, and global macro funds. For trade types falling into the beta category, typical benchmarks include TWAP, VWAP, WMR and the ECB Fix, typically used by corporates, mutual funds and pension funds.
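As a rough sketch of that first classification step, the mapping described above can be expressed as a simple lookup. The structure and names here are illustrative assumptions, not an industry standard.

```python
# Illustrative mapping from the alpha/beta trade classification to the
# typical benchmarks named in the text.
BENCHMARKS_BY_STYLE = {
    # Directional trades (CTAs, currency funds, global macro funds):
    # measured against the price available when the order arrived.
    "alpha": ["arrival price", "risk transfer price"],
    # Hedging/rebalancing trades (corporates, mutual funds, pensions):
    # measured against an average or a fixing rate.
    "beta": ["TWAP", "VWAP", "WMR", "ECB fix"],
}

def candidate_benchmarks(trade_style: str) -> list[str]:
    """Return the benchmarks typically used for an alpha or beta trade."""
    return BENCHMARKS_BY_STYLE.get(trade_style.lower(), [])
```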

Alex Shterenberg, head of eFX trading, North America and global head of FX algos at Bank of America Merrill Lynch, said that of the common benchmarks used by clients, WMR may be a traditional choice but also a potentially poor one because it doesn’t optimise trade transaction costs in terms of price or slippage.

In the post-FX fix scandal world, the benchmarks clients used to take for granted are evolving to reflect a more analytical approach to performance measurement.

The three most important benchmarks Shterenberg points to are arrival price, average price, and market reversal. The most frequently used is arrival price, which compares the results the algo achieves to the risk transfer price at arrival for the size and currency pair a client is trading.

Average price compares the price actually achieved by a selected algo to the average price in the market for the duration of time the algo was executing. The logic follows that if the actual price achieved was worse than the average price in the market, the algo did not pick the right times to execute, and vice versa.

Market reversal analyses what happens to the market after the algo has finished executing, and can be measured for both parent and child orders. If a client is buying and the market isn’t going higher, but is instead sitting at the same level and then dropping immediately once the algo is finished, then the algo wasn’t performing well: it could have backed off, allowed the market to trade lower, and improved its average rate.
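The three benchmarks described above boil down to simple slippage calculations. The following is an illustrative outline with an assumed sign convention (positive = unfavourable to the client), not any bank's actual methodology.

```python
# Illustrative benchmark calculations; positive values are unfavourable.

def _sign(side: str) -> float:
    """+1 for a buy, -1 for a sell, so higher fill prices hurt buyers."""
    return 1.0 if side == "buy" else -1.0

def arrival_slippage(avg_fill: float, arrival_price: float, side: str) -> float:
    """Slippage vs the risk transfer price at order arrival."""
    return _sign(side) * (avg_fill - arrival_price)

def vs_market_average(avg_fill: float, market_avg: float, side: str) -> float:
    """Fill vs the market's average price over the execution window;
    positive means the algo picked worse moments to execute than average."""
    return _sign(side) * (avg_fill - market_avg)

def post_trade_reversal(last_fill: float, price_after: float, side: str) -> float:
    """Reversion after completion; a large positive value suggests the
    order's own impact pushed the price, which then fell back."""
    return _sign(side) * (last_fill - price_after)
```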

Shterenberg explained that clients should put benchmarks to work to balance two competing goals: executing the order as quickly as possible, while minimising market impact and avoiding signalling the trade to the market.

He advised using a TCA tool to first identify the parameters and time of day to execute the order, and then pick the right strategy framework.

Although the strategy depends on what is offered by a client’s provider, good providers should be able to estimate transaction costs for any strategy they offer.

At that point, it’s a matter of: picking the most suitable strategy for the conditions of the market on a particular day; executing and collecting the transaction cost analysis and all the benchmarks; and comparing the pre-trade estimate of transaction costs to what was actually achieved, for every single trade.

“If clients do this consistently, and collect all the results and compare them to an estimation over time, they can work out the most effective way to execute according to requirements,” Shterenberg said.
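The estimate-versus-outcome loop Shterenberg describes amounts to recording, for every trade, the pre-trade cost estimate alongside the realised cost, then comparing the two over time. A minimal sketch, with assumed field names and units:

```python
# Illustrative feedback loop: store each trade's pre-trade TCA estimate
# next to its realised cost, then measure the model's average error.
from dataclasses import dataclass

@dataclass
class TradeRecord:
    trade_id: str
    estimated_cost_bps: float   # pre-trade TCA estimate, in basis points
    realised_cost_bps: float    # post-trade measured slippage, in basis points

def estimation_error(records: list[TradeRecord]) -> float:
    """Average (realised - estimated) cost in bps. A persistently positive
    value means trades cost more than the pre-trade model predicted,
    suggesting the strategy or its parameters should be revisited."""
    if not records:
        return 0.0
    return sum(r.realised_cost_bps - r.estimated_cost_bps
               for r in records) / len(records)
```

Done consistently per strategy, currency pair, and time of day, this kind of running comparison is what lets a desk work out which execution route fits its requirements.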


Alex Shterenberg

“Algos are one of the most transparent ways to execute an order, because every time there is a transaction cost analysis report, which shows exactly where each of the trades generated was done on which venue.”

Such advice comes at a time when treasuries and execution desks are up to their necks in regulatory requirements to demonstrate best execution, a pressure that has boosted algo adoption in FX markets because algos provide an electronic audit trail of decision-making and therefore some measure of transparency.

“Algos are one of the most transparent ways to execute an order, because every time there is a transaction cost analysis report, which shows exactly where each of the trades generated was done on which venue. It tells clients exactly what they paid for each fill, exactly what the forward points were that they trade up towards. They can receive this report and always produce it to whoever approved that execution,” said Shterenberg.

“Some clients have fiduciary responsibilities to have a process in place to guide execution practices, so that’s an important consideration, of course, but I think the most important thing they consider when they choose algos is anonymity, especially for trades of large sizes,” he states.


The trade-offs between time and risk that users of algos make depend greatly on market factors, with the three most prevalent being liquidity, volatility, and price action.

With ample liquidity, the market has more room to absorb volume and order flow with minimal market impact. The less liquidity, the wider the spread.

Wider spreads can also indicate volatility. In a volatile market, an algo with no limit price or “no worse than” price can result in execution that is “all over the place”, said Soren Haagensen, managing director of Integral Development, a provider of end-to-end electronic FX trading platforms.

Meanwhile, an algo that spreads execution over a period of time risks the price moving against you, which is why price action is an important determinant in the scheduling of execution.

These three market factors are key inputs to any decision on whether to use algos, and what kind of algos to use, explained Haagensen. “If you have very low liquidity, then an aggressive algo can do more damage than good, as you scare away the market. On the other hand, when your algo takes a very long time, then you run the risk of the market moving naturally away from you. So it really depends on how your performance will be measured, against arrival price? Against average price?”

“You will also have to look at what are your reasons for the transaction. If you are looking to execute a larger hedge-based order, then you probably do not want to leave a market imprint, so an algo with a more passive approach, for instance, resting for periods of time in unlit venues combined with quietly taking liquidity, will in most cases be correct.”

David Ullrich

“The market is really still evolving heavily into trading more dynamically-informed algorithms.”

Aside from liquidity, volatility and price action, other factors that should be considered include spreads and momentum, and even the choice of passive or aggressive algo execution, said David Ullrich, SVP of Execution Strategies at FlexTrade, a broker-neutral E/OMS provider across FX, fixed income, equities and derivatives.

“The reality is that the better TCA systems with trade evaluation processes incorporate all of these factors,” Ullrich said.

For example, trying to execute $100 million of USD/MXN at 7:00pm in Boston hardly needs to take volatility into account, because there is no market liquidity at that time.

“The problem is that there’s so much seasonality in the FX market place that time of day, choice of execution flow, and even more so, choice of execution provider is paramount.”

A good execution provider, Ullrich added, is one that has flows that are not correlated with the flow being executed. “You can have lots of volatility, but if you are trading with a counterpart that has to sell $100 million and I need to buy $100 million, we can get a better execution off of that. So volatility can be high but I can still get a fair execution where there are flows counter to what the direction of the trade is,” he said.

“All of these factors and having information across all of these factors affect the execution process, and it’s incumbent on the new EMSs and their TCAs to help present that information in a usable, real-time functionality from the trader’s perspective.”

It’s worth a reminder, he added, that algos themselves don’t necessarily add liquidity to the market. Ideally, they are a tool to control information leakage and reduce the execution “footprint”.

“Algos themselves can’t always make a better liquidity environment, and the reason is they tend to be directional. You need to buy 100 euros or you need to sell 100 euros, so if the market’s in total agreement that means you probably have a potential spike in volatility and a one-way market place is not going to improve running an algo. Your execution parameter in the market is simply gapping away from you,” states Ullrich.

The presence and persistence of momentum in the market at the time of execution, he noted, should dictate the decision for how passive or aggressive an algo should be.

Firms need a rigorous approach for measuring execution quality


Guy Hopkins

“Firms need to find ways of assessing the algos at a detailed level and tease out the underlying drivers for performance.”

The decision over aggression level is usually not made by the algo but is an elective decision by the user, ultimately bound up with their perception of risk. Algo providers therefore give clients a dial to choose how passive or aggressive strategies will be, said Guy Hopkins, head of MFX Vector Sales at MahiFX.

The same applies to setting a limit price; it is fundamentally a decision made by the user of the algo rather than the algo itself.

The important questions to ask, Hopkins added, are: how well does any strategy aggress? How good is that strategy at being passive? If you choose passive, what does that actually mean in terms of the algo’s behaviour and how well it performs?

MahiFX has been working with clients on quantitative assessments of passive and aggressive algo execution with some illuminating results, he noted.

“People are always keen to be seen to be passive, but there is a wide divergence in the quality of passive execution. It’s not a binary thing,” he said. “Passive execution does not necessarily equate to good execution.”

Meanwhile, defining “aggressive” is also no easy task. Does it mean improving the near-side of the market? If so, better bid or better offered? Does it mean going through mid, or crossing the spread to the top of the book, or even taking out multiple levels of the far-side?

“This is really important to know, because different ‘aggressive’ algos behave differently with very different outcomes,” said Hopkins.

All this makes understanding and comparing strategies a real challenge, and therefore it’s difficult to quantify the impact of different strategies to calibrate appropriate use.

“Users of algos need to remember that they are participating directly in an environment that will respond to their activity over the long term, even if they are one step removed from the venues in question,” he said. “Just because they are using an algo provider, it doesn’t absolve them of the responsibility to nurture the liquidity available to them.”

That means if algo adopters are too aggressive in their use over the long term, they will potentially damage that algo’s liquidity in the same way they would damage their own liquidity if they were trading directly with a panel of liquidity providers on, for example, FXALL.


Howard Tai

“What experience tells me, and tells most veterans who have been around, is when markets dislocate you cancel anything that you have that is machine executable, kill all the algo orders…”

As with any advance of market technologies, there are pitfalls.

UBS’ Mark Goodman noted that the advance of new pre-trade analytics that then help clients quantify the outcome of trades is a relatively new development in the FX market, and one that is mired in a discussion over whether brokers or third-party providers should provide the new tools.

The problem with those tools, he said, is that at the point of trade clients may not have ample time to plug the necessary information into another system and carefully analyse the output of various numbers to measure expected outcomes.

“That’s not how it works on a trading desk, so I think those tools have a place but I think in terms of practical usage, the most important thing is: the better informed the client is by the broker about how the algos behave, what different parameters will do to the decision, the better outcome the client will have.”

Across the board, industry experts are warning not to lose track of wisdom gained from decades of experience in a trading environment.

Developing a systematic approach to selecting and deploying algos means tapping into that experience and judgement to create a set of default rules, said Howard Tai, a senior analyst with Aite Group, with expertise in derivatives products, multi-asset class investment and risk management strategies, as well as the role of electronic trading in currency, derivatives, and equities.

“Why are you trying to make things happen if it’s not the right time to execute?” said Tai. “The sophistication level of industry participants on proper algo trading methods is not where it should be. The message is starting to go out but I don’t think enough people have heard it, or understood.”

BAML’s Shterenberg noted that the most common mistake he sees is an algo that is too aggressive: “If you’re a portfolio manager and you want to do $200 million USD/SEK, and you get the order in New York in the afternoon, the liquidity isn’t very good and you haven’t done an analysis beforehand, it drives the market too much and the result is not as good as it could be.”

In order to address that, he recommended pre-trade TCA tools, but also being knowledgeable about the various liquidity providers: “Different banks have different strengths in different currency pairs with different liquidity. Other banks can’t take advantage of the internal liquidity when they are executing the algo, which is almost always better at the time because it results in lower market impact.”

Still, it’s likely to remain a repetitive pattern in today’s markets: people forcing things to happen during times of illiquidity. It also leaves the door open for predatory behaviour, said Aite Group’s Tai.

“It’s possible that predatory-type players will trade on the ignorance of market participants who have a tendency to leave nonsensical orders, like a stop loss with no limit on it, and say: ‘I am going to try to flush out those guys that really don’t understand market illiquidity and make a quick buck on them’. That is why you have exaggerated moves in certain market conditions and time zones that should not have happened,” says Tai.

When the market is free-falling, for example, generic algorithmic strategies like VWAP or TWAP “don’t do any good”, he noted.

“What experience tells me, and tells most veterans who have been around, is when markets dislocate you cancel anything that you have that is machine executable, kill all the algo orders, go manual, do blocking and tackling by inserting selective limit orders to buy or to sell, and try not to use market orders because you never know where the market is. It may gap one or two percent in the blink of an eye and you won’t have a chance to react. That, I think, is the biggest message that people forgot.”

So, as more computer-driven strategies get deployed, whether algo execution strategies or high-frequency market-making manoeuvres, market participants need to carefully consider how to alter trading strategies to avoid pitfalls during times of illiquidity, he said.


Soren Haagensen

“If you have very low liquidity, then an aggressive algo can do more damage than good, as you scare away the market.”

Industry leaders do, however, see increasing sophistication in decision-making as firms grapple with the daunting task of being more systematic in choosing and using algos.

Mark Goodman from UBS said clients are becoming more systematic in choosing algo providers, going beyond asking what is available to questions about how the provider is organised: who runs the algos versus who provides the liquidity, and are those separate groups or the same one?

In addition, clients are more sensitive to what the algo does in terms of their specific objectives. “We see clients more focused on outcomes, and then questioning how they achieve that outcome rather than bells and whistles and what the features are,” he said. “The landscape is moving quite quickly, with new providers coming in. We encourage clients to think regularly about: once you have your list, how do you make sure you are able to try a new firm on a regular basis to make sure you are getting the best provider out there?”

The FX algo market, Goodman added, is entering a new phase. First, brokers and clients recognised they need algos, now clients are learning the nuances of using them. The question Goodman gets asked the most is: when should I use an algorithm?

Other advice for algo adopters focuses on establishing a policy and process for best execution.

“Regulators will tell firms what to pay attention to, but they need to decide what the benchmarks are, what they are judged on, and what is important to them,” said BAML’s Shterenberg. “They need to create this policy and make sure everybody knows it. They should always look at the results of the execution and go back to what the root analysis was and what the goals were and compare the results to the predictions and keep a record,” he said.

And for firms executing those policies, there are many new tools to try out: pre-trade TCA, post-trade TCA, algo consultants, banks and brokers.

“Algo trading has increased dramatically over the last year, year-and-a-half. We see a lot of clients who previously didn’t think about algo execution asking questions, asking about the tools, trying things out. So, it’s definitely a growth area and the rate of growth is picking up as well.”

David Ullrich of FlexTrade is optimistic about the market’s evolution as algo functionality meets enhanced execution: “The market is really still evolving heavily into trading more dynamically-informed algorithms. Those are going to be instrumental for any trader to use as one of a number of tools within their arsenal to achieve a best execution relative to whatever benchmark they are looking to achieve.”

His advice for firms is to: establish an explicit benchmark for each individual fund prior to execution; use more dynamic pre-trade informed algorithms that are able to shift gears when market conditions change during execution; and work with a real-time TCA product that measures execution and market conditions in-flight to provide a real-time feedback loop on the execution process.

MahiFX’s Hopkins said firms need a rigorous approach to measuring execution quality, one that takes all the elements of cost into account, not just the usage fee of the algo.

“Firms need to find ways of assessing the algos at a detailed level and tease out the underlying drivers for performance,” he said. “Improving performance against your chosen benchmark is of course the ultimate goal, but as a metric on its own it actually provides little guidance into how well the algo performed and why – it’s not the end of the story.”

Identifying those underlying drivers and applying a measurement methodology across providers makes it possible to compare and choose the best strategy given a set of liquidity conditions, he noted: “It allows our customers to work with their providers on how best to fine tune their usage of their algos to maximise performance, so it facilitates an informed, facts-driven discussion between our clients and their algo providers.”