The FX market is undergoing rapid, transformational change. A number of factors are driving this change, including regulatory developments, fiduciary responsibilities, an increased focus on performance and cost, and a general demand for improved transparency. This confluence of factors is creating an increasingly complex marketplace for participants to navigate, for example:
1. Increased variety of execution methods (voice vs electronic, RFQ vs streaming, principal vs hybrid vs agency, etc.)
2. Increasing fragmentation of market liquidity
3. Proliferation of algorithmic execution products
This article explores the increasingly difficult execution decision-making process, including algo product selection, that market participants face. This decision-making process has a number of determining factors. For example, the underlying purpose of the FX transaction in the first place is a key driver of the execution choice. FX is a unique marketplace in that currencies are perceived both as an ‘asset class’, traded with profit maximisation as the objective (‘alpha’ trades), and as a ‘utility’, i.e. required to fund an underlying international transaction (‘beta’ trades). This underlying purpose can aid the selection of an appropriate execution benchmark, which is a key component of the algo selection decision.
For example, if you are a USD-denominated equity fund manager looking to buy JPY and sell USD in order to fund the purchase of Japanese stocks, it may be reasonable to benchmark the FX transaction to a form of average price computed over the hours that the Japanese equity market is open. If, therefore, the chosen benchmark is a time-weighted average price (TWAP) computed over several hours, it may not make sense to then select an aggressive algo type that would typically complete the order in several minutes.
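To make the benchmark concrete, the sketch below shows one way such a TWAP benchmark and the resulting slippage might be computed; the sampling interval, prices and fill levels are purely hypothetical and not any particular provider’s methodology.

```python
# Illustrative sketch only: a TWAP benchmark over the Tokyo equity session and
# the slippage of an execution against it. All timestamps and prices are hypothetical.
from datetime import datetime

def twap(samples):
    """Time-weighted average of equally spaced mid-price samples."""
    return sum(price for _, price in samples) / len(samples)

# Hypothetical USDJPY mid-price samples taken at regular intervals (times in UTC)
# while the Japanese equity market is open.
mid_samples = [
    (datetime(2015, 9, 1, 0, 0), 121.30),
    (datetime(2015, 9, 1, 1, 0), 121.42),
    (datetime(2015, 9, 1, 2, 0), 121.35),
    (datetime(2015, 9, 1, 3, 0), 121.28),
]

benchmark = twap(mid_samples)          # the TWAP benchmark rate

# Average USDJPY rate actually achieved by the fills over the same window.
# The manager is selling USDJPY (buying JPY), so fills below the benchmark are a cost.
executed_avg = 121.31
slippage_pips = (benchmark - executed_avg) * 100   # 1 pip = 0.01 in USDJPY

print(f"TWAP benchmark: {benchmark:.3f}, slippage: {slippage_pips:+.1f} pips")
```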
Figure 1 provides a stylised flow chart for the decision-making process, the components of which are explored in more detail within this article. This is not to suggest that such a decision-making process is followed explicitly for each transaction! The process is divided into two sections: strategic and tactical. The strategic element would form part of an institution’s execution policy, whereas the tactical element is where value can be added on a day-to-day basis through informed, real-time decision making. Appropriate algo selection at this point can add significant value to the process, and through rigorous post-trade TCA it is possible to quantify and verify this value-add whilst also providing insights into how to continually refine the process.
EXECUTION BENCHMARKS
The selection of appropriate execution benchmarks for currency transactions has become a key issue within the industry. Unlike the equity markets, where there are defined market opens/closes and a central tape on which every transaction is printed, allowing the construction of accurate VWAPs (volume-weighted average prices), the OTC nature of the FX market has resulted in a dearth of independent benchmarks that are standardised and recognised across the industry. In addition, because of this OTC characteristic, the FX market has no ‘National Best Bid and Offer’ concept as exists in the equity markets.
Classifying the currency transaction as ‘alpha’ or ‘beta’ is a key first step in benchmark selection, and hence subsequent algo selection. The table below summarises typical benchmark types per category:
Benchmark selection is critical and, in the case of asset management, it is essential that the selection is appropriate for the portfolio mandate, and serves the purposes of the asset owner. Further information on benchmarks can be found in the FSB Working Group paper[1], and in the QSI White Paper from March 2014[2].
TRADING MANDATE, OBJECTIVES & CONSTRAINTS
Once the trade purpose and benchmark are determined, it is then time to consider other objectives or constraints that may be imposed on the execution process. For example, it may be that technology or process constraints within an asset manager’s order management systems do not allow full details of the currency trades to arrive in sufficient time to initiate a TWAP trade to coincide with the underlying securities transaction. Real-world practicalities may require deviation away from theoretical best practice.
In addition, the mandate of the execution desk or Treasury function obviously needs to be taken into account. When determining the mandate, the following types of questions need to be resolved:
1. Does the execution policy allow the desk to run tracking error from a given benchmark?
2. Does the desk have a mandate to run market risk and seek to add value to the trading process?
3. Is the number one objective of the execution process to minimise market impact, or footprint, thereby reducing the signalling risk from the activity?
4. Does the trading desk have discretion in the timing of execution?
5. Does the desk have the mandate to interact with the interbank market directly?
Once the strategic framework is understood, it is imperative to then take into account the prevailing market conditions, i.e. overlay tactical considerations. This tactical element has become even more critical over the last 12 months given the increasingly challenging liquidity conditions within the FX market.
MARKET CONDITIONS
It is clear that the market has transitioned into a new liquidity regime in 2015, exemplified by higher volatility and reduced liquidity supply, resulting in the widely cited liquidity ‘air pockets’. Figure 2 illustrates this for EURUSD, comparing daily volumes vs volatility for the periods 2013-2014 and 2015. The QSI White Paper published in May this year[3] discusses the new regime in more depth and introduces a framework for better understanding the prevailing liquidity environment.
There are a number of key market factors that can add value to the execution decision process, and to the selection of a suitable algo type. Price action, order book depth, order book imbalance, short-term volatility and liquidity are all components that can inform the trading decision. Accessing such data in real time in a fragmented, OTC market such as FX is obviously far from straightforward, but when it comes to algo selection one could argue that the prevailing liquidity conditions, and volatility, are probably the most critical elements.
Traditionally, liquidity has been measured by traded volumes, or where the data is unavailable, some form of proxy via tick data. Traded volumes, however, are only one part of the liquidity story and really just provide a measure of the demand for liquidity. It is also valuable to take into account the supply side of liquidity, and a useful metric to proxy this is the ‘sweep-to-fill’ (STF) cost of the aggregated market order book. STF cost is calculated by observing the limit order book and, assuming 100% fill ratios, ‘sweeping’ through each level and computing a VWAP for a given size. Subtracting the current market mid from this VWAP provides the STF cost, a metric that is directly observable and takes into account the current depth of the book together with market-maker risk appetite. When there is an increasing concern of the market gapping, for example leading into an economic data release, the order book thins out and STF costs will typically widen as risk aversion increases. Therefore, combining STF cost with traded volumes to provide a view of liquidity in two dimensions can be especially valuable when making a trading decision.
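As a rough sketch of the STF calculation just described, assuming a single snapshot of the aggregated book and 100% fill ratios at each level (the levels and order size below are hypothetical):

```python
# Minimal sketch of the sweep-to-fill (STF) cost of an aggregated limit order book.
# Book levels and order size are hypothetical.

def sweep_to_fill_cost(levels, size, mid):
    """VWAP of 'sweeping' the book for `size`, minus the current mid.

    levels: list of (price, available_quantity) on the relevant side of the
            book, sorted from best to worst price.
    """
    remaining = size
    cash = 0.0
    for price, qty in levels:
        take = min(remaining, qty)
        cash += take * price
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("Order size exceeds visible book depth")
    vwap = cash / size
    return vwap - mid

# Hypothetical EURUSD offer side for a buy order of 25 million.
offers = [(1.11805, 5e6), (1.11810, 10e6), (1.11820, 15e6)]
mid = 1.11800

stf = sweep_to_fill_cost(offers, 25e6, mid)
print(f"STF cost: {stf * 1e4:.2f} pips for 25m")  # widens as the book thins out
```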
Figure 3 provides an example of such a 2-dimensional approach, with volumes plotted on the y-axis, and STF cost on the x-axis. The chart is then divided into 4 regimes, with the current state of the market shown by the blue dot. The other dots show the liquidity conditions at the same time of the day over the last 3 months, which helps put today’s market into context.
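The sketch below illustrates how such a 2-dimensional view might be turned into a simple regime classification by comparing the current observation with the same-time-of-day history. Splitting the quadrants at the historical medians, and the regime labels attached to each quadrant, are illustrative assumptions consistent with the discussion in this article rather than the methodology behind Figure 3.

```python
# Hypothetical sketch: bucket the current market into one of four liquidity
# regimes by comparing today's traded volume and STF cost with the distribution
# observed at the same time of day over a trailing window.
from statistics import median

def classify_regime(volume, stf_cost, hist_volumes, hist_stf_costs):
    vol_high = volume >= median(hist_volumes)
    stf_high = stf_cost >= median(hist_stf_costs)
    if vol_high and not stf_high:
        return "green"   # healthy demand, cheap liquidity supply
    if vol_high and stf_high:
        return "amber"   # busy but expensive to sweep - elevated market risk
    if not vol_high and not stf_high:
        return "amber"   # quiet but the book is still deep
    return "red"         # low volumes and wide STF cost - risk aversion

# Hypothetical same-time-of-day history over the last 3 months.
hist_volumes = [820, 760, 910, 650, 880, 700]      # millions
hist_stf_costs = [0.8, 1.1, 0.7, 1.4, 0.9, 1.0]    # pips for a reference size

print(classify_regime(volume=600, stf_cost=1.6,
                      hist_volumes=hist_volumes, hist_stf_costs=hist_stf_costs))
# -> "red": consistent with the point below that risk transfer, or waiting, may be preferable
```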
The 4 regimes are summarised in Table 2. Unsurprisingly, the realised volatility experienced in each of these regimes also tends to be distinct, and this information can all add significant value when deciding how best to execute.
For example, if the market is within the ‘red’ regime at the time of execution, and historically the market has typically been in the ‘green’ regime at that time of day, the most appropriate execution may not be via an algo at all. It may be beneficial either to transact via risk transfer given the unusually low volumes and heightened risk aversion, or, if there is discretion to do so, to wait for conditions to improve.
Assuming the market is in one of the ‘green’ or ‘amber’ regimes, then an algo may be an appropriate execution choice. The question then becomes: which algo type to select? For simplicity, let’s assume that a trader is benchmarked to mid-market arrival price and is therefore looking to select an algo designed to minimise the slippage to this benchmark. A key variable to choose is how passive or aggressive the algo should be, as there is clearly a trade-off here. Choosing a more passive strategy increases the chance of spread-earning, which is important given the mid benchmark. However, a more aggressive strategy would complete faster and therefore take less market risk, reducing the risk that the market moves away from the price at inception whilst the algo is in flight.
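One way to see this trade-off is through a stylised cost model in which slower execution earns more spread but accumulates market risk with the square root of time. The functional form and parameter values below are illustrative assumptions only, not a production pre-trade model.

```python
# Stylised illustration of the passive vs aggressive trade-off for an
# arrival-price (mid) benchmark. Parameters are hypothetical.
import math

def expected_cost_and_risk(duration_min, spread_pips=1.0, vol_pips_per_sqrt_min=0.6):
    """Return (expected cost, market risk) in pips for a given execution horizon.

    Faster execution: crosses more spread, but spends less time in the market.
    Slower execution: earns more spread passively, but is exposed to market moves for longer.
    """
    # Assume spread capture improves with duration but saturates (illustrative form).
    spread_capture = spread_pips * (1 - math.exp(-duration_min / 30))
    expected_cost = spread_pips / 2 - spread_capture        # negative = expected gain vs mid
    market_risk = vol_pips_per_sqrt_min * math.sqrt(duration_min)
    return expected_cost, market_risk

for minutes in (5, 30, 120):
    cost, risk = expected_cost_and_risk(minutes)
    print(f"{minutes:>3} min horizon: expected cost {cost:+.2f} pips, risk +/- {risk:.2f} pips")
```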
Pre-trade and real-time execution analytics help traders make this type of decision in a more informed manner. If the prevailing liquidity regime at the time of order inception is in the top-right ‘amber/orange’ zone, with relatively high volumes but also relatively high STF cost, then it may be appropriate to choose a more aggressive setting for the algo given the increased market risk. Such a choice may result in increased market impact given the relatively low liquidity supply, but this may be more than offset by the reduced market risk. However, if the regime is ‘green’, then a more passive style may be better suited to take advantage of the market conditions, i.e. trade a little slower and attempt to earn more spread given the improved supply and lower market risk. As previously discussed, such choices in terms of prioritising impact vs risk are also informed by other factors, such as the execution desk mandate (e.g. some desks have a specific tolerance of market impact that they are comfortable taking, whereas for others the objective may be to minimise impact).
Given the array of factors influencing the decision-making process, and the wide range of execution methods and products now available, a systematic approach is advocated to determine what works for which pairs, trade sizes, times of day and market conditions. Storing transaction details, performance metrics and data on the prevailing liquidity and volatility conditions allows for future empirical analysis, once the sample size is large enough, to help refine the execution process.
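As an indication of the kind of record that could be stored for each transaction to support this analysis (the field names and structure are an illustrative assumption, not a standard schema):

```python
# One possible shape for a per-transaction record; fields are hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class ExecutionRecord:
    timestamp: datetime
    currency_pair: str
    side: str                 # "buy" or "sell" of the base currency
    size: float               # in base-currency units
    algo_type: str            # e.g. "passive", "aggressive", "TWAP"
    benchmark: str            # e.g. "arrival mid", "TWAP"
    slippage_pips: float      # performance vs the chosen benchmark
    liquidity_regime: str     # prevailing regime at inception
    stf_cost_pips: float      # sweep-to-fill cost at inception
    realised_vol: float       # short-term volatility at inception

record = ExecutionRecord(datetime.utcnow(), "EURUSD", "buy", 25e6,
                         "passive", "arrival mid", -0.4, "green", 0.9, 6.5)
print(asdict(record))  # persist these rows and analyse once the sample is large enough
```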
Market professionals make these types of decisions all day, every day, through experience and intuition. However, using analytics within the decision-making process can help justify the intuition and provide a structure for generating a reproducible and systematic execution process that adds demonstrable value and helps satisfy best execution responsibilities.
The views and opinions expressed in this article are solely those of the author.
[1] “Foreign Exchange Benchmarks”, Financial Stability Board Working Group, 30th September 2014
[2] P. Wikstrom, J. Chen & S. Tiong, “Benchmarking FX Execution – A Discussion of Different Benchmark Styles and Approaches”, Morgan Stanley QSI White Paper, 24th March 2014
[3] J. Chen, P. Oudshoorn & P. Eggleston, “Measuring Liquidity in Two Dimensions”, Morgan Stanley QSI White Paper, 26th May 2015