The liquidity hunter: a conversation with Patrick Fleur

August 2023 in Buyside Interviews

When you talk about sophisticated traditional buy-side players using algos, one name stands out in the foreign exchange space. Patrick Fleur, head of trading and execution for Dutch asset manager PGGM, has close relationships with virtually every major bank producing algos for FX trading. He is constantly testing and evaluating them, and he has dozens of algos to choose from. So what does Patrick think about the evolution of FX algos, and what does he see as the key issues now? Adam Cox reports.

AC: How would you describe your job?

PF: What my job should be about is finding liquidity for the firm. That’s basically in one sentence what my job should be. That’s actually the challenge with a market which is so fragmented and specialised between the different asset categories. We’re running trading across all assets, which means commodities, credits, fixed income, foreign exchange, equity and real estate. What we’ve found is that every asset class tends to treat their category rather differently. Wherever we do see the overlap, my ambition is to create one kind of order flow, one deal flow, and also to create similar liquidity pools across all assets. Internally, we have 18 investment teams: portfolio managers who are basically my direct customers. And then we have pension fund clients who invest in those 18 funds or in segregated mandates also run by one of those 18 teams.

AC: So this is all about execution rather than alpha?

PF: There are two sorts of alpha. The reason why my title says head of trading and execution is because we also run a trading desk. In our case we run all foreign exchange as an internal book, so every portfolio manager who wants liquidity in foreign exchange does get a firm internal price which we’re warehousing. We think it’s a unique approach, originally coming from the fact that FX wasn’t traded as an asset. My predecessors didn’t have a proper way to solve this operationally and therefore they chose to have an internal book against it.

AC: What sort of flows are we talking about?

PF: Within FX we hedge the top 14 currencies for our clients. All funds internally are 100% hedged, so the client gets a euro-based performance, and what we do in the external world is only hedge the top 14. The dollar is our number one currency, and the Korean won and the shekel are the least liquid, or the smallest we hedge. It basically gives us a spread between emerging and non-emerging currencies. The mandate we run, though, is across all currencies. In other words, we can take a position in every currency, including the emerging currencies, as long as they’re not restricted. The spectrum of where we can invest, where we can create alpha, is the full spectrum of currencies.

AC: How does algo trading fit within that framework?

PF: The way we see foreign exchange, it’s separated into two disciplines: one is our spot or outright risk, and the second is the interest rate component coming from our roll. We have a hedge percentage per currency which we roll on a three-monthly basis, and I do think it’s a very different discipline in terms of market fragmentation and liquidity. So when we talk about algos, it’s predominantly in the spot market. We are dying to see a market structure where electronic and algorithmic trading in forward markets is also possible. But mainly because primary liquidity is very scarce, the main liquidity providers do not have enough tools to stream such electronic algo prices in the forward market. It goes pretty well up to a week in most currencies and up to a month for some others, but three months is too far out. There’s no electronic liquidity.

We really split the book into two parts, a spot book and a forward book. It’s very similar to what a bank tends to do so we really treat, let’s say, a swap order very differently to a spot order. Swaps are also all internally priced against a benchmark. That’s basically a quantitative exercise where all expirations coming from the different teams will be rolled automatically against our book and we roll the bulk on a manual basis. In other words, there is no liquidity in the three month space in our size. We’re talking about a 180 billion euro fund. The top 14 is hedged and that’s roughly 65% of our total assets.

What an algo is supposed to do is support us in finding the right liquidity in the easiest and most transparent way possible. We think an algo should replace the task of a human being; in other words, it should replicate what a trader tends to do. And if you look at the evolution of algos, the first one we did was in 2001, which was a very simple un-weighted average price algo. Basically, every few seconds or every few minutes, it just drip-fed an equal amount into the market. That’s really the first algo we ever did.
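That drip-feed approach can be sketched in a few lines. This is a toy illustration of the equal-slice, time-weighted idea only, not any bank’s actual product, and the function and parameter names are invented:

```python
def twap_slices(total_qty: float, duration_s: int, interval_s: int):
    """Split an order into equal child orders sent at fixed intervals,
    the first-generation 'drip-feed' approach described above."""
    n = max(1, duration_s // interval_s)
    child = total_qty / n
    # (seconds from start, child quantity) pairs
    return [(i * interval_s, child) for i in range(n)]

# 100 million over 10 minutes in 30-second clips: 20 equal children of 5m
schedule = twap_slices(100_000_000, 600, 30)
```

Because the size and timing are perfectly regular, anyone watching the flow can extrapolate the rest of the order, which is exactly the signaling problem with this first generation.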

AC: That’s the sort of algo that right now would just get picked off.

PF: This is 13 years down the line; the moment you implemented such an algo, the signaling risk would be 100% and everybody would front-run it. This explains why you see a new generation of algos. The one just described is first generation, which is very much linear… not something you could use anymore.

Then you end up with the second-generation algos, which try to become more advanced by deviating from pure time weighting. You get volume-weighted or tick-weighted algos, which are still very much passive. In other words, they’re not biased towards trends, momentum or anything – they just do what they should do on a more weighted approach. And then you get the more advanced ones, which do exactly the same, but on top of that they try to capture the bid-ask spread. They have developed strategies which can capture bid-offer spreads and thereby reduce my costs.

And then you get a completely new tier of algos, which try to get rid of front-running and signaling risk by using randomisers – in terms of which principals they use, how much of the given liquidity they take off the table, and which source codes they stream those algos from. In other words, you have a unique identifier which everybody can see when you get into a liquidity pool. By using different IDs, you also prevent signaling.
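A minimal sketch of the randomiser idea, assuming only that child-order sizes are drawn from a range rather than fixed so the footprint is harder to read (the names and the jitter scheme are hypothetical):

```python
import random

def randomized_slices(total_qty: float, n_children: int,
                      jitter: float = 0.5, seed=None):
    """Randomise child-order sizes to reduce signaling risk.
    Each child deviates up to +/- jitter from the average size;
    the last child absorbs the remainder so the total is preserved."""
    rng = random.Random(seed)
    avg = total_qty / n_children
    sizes = [avg * (1 + rng.uniform(-jitter, jitter))
             for _ in range(n_children - 1)]
    sizes.append(total_qty - sum(sizes))  # keep the total exact
    return sizes

sizes = randomized_slices(100_000_000, 10, seed=7)
```

A real implementation would also randomise timing, venue and the stream identifier, per the interview; this sketch shows only the size dimension.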

AC: So these are much smarter algos?

PF: We do think these algos are kind of unique to foreign exchange, and people always say that the equity market is more advanced in terms of algorithmic trading. These kinds of algos are, we think, leading the industry; they’re more advanced than anything we’ve seen in the category so far.

We have very close relationships with probably the top 15 banks on e-trading and algo trading. They all know us and we know them, both in Europe and the US. We work closely together and given the fact that we have an active currency book, I’m one of the few people on the planet actually who can test out the algos for banks. There are only four big clients worldwide who have such a book and who can take active positions.

PGGM is a cooperative pension fund service provider, based in Zeist, the Netherlands, operating in the Dutch market. It offers asset management, pension fund management, policy advice and management support.

AC: How would you describe the benefits you’re getting from these algos?

PF: That part is still, we think, the missing piece of the puzzle. The reason why algos are still not dominating the currency market yet is the decision-making process. If, let’s say, you get a big internal order, you have to decide when you want to do it and how you want to do it. This is the part where the human factor is still dominant, and in my opinion too dominant. You have to decide first with whom you want to execute, and then what the timeline is and what kind of risk you’re willing to allow to get this implemented. Secondly, which algo is the best to create that performance? And this we think is the missing part in the algo space. We call it a smart order routing process. If you want to implement a certain order, what’s the best execution policy? Given the history of benchmark manipulation we’ve seen over the last year, it becomes quite interesting to see what the best execution is on behalf of our clients. Is it by using a long-running algo, giving a replication of the average price to date? In other words, it has no skew to the upside or downside, and is actually trying to replicate the average price of the day. That could be a strategy in itself, where, in other words, one doesn’t have an opinion.

It’s very difficult to say when you add value and when not. We think that the direct benefit is that we can just leave orders and walk away. And those orders are in most cases not visible to risk-takers. In other words, they sit waiting to execute when the level or the time is right. We think that’s the most direct benefit: it saves us time and money.

There is a direct cost involved. The big challenge is the correlation between the cost of one particular algo and the direct benefit. Currently it’s not consistent. Think of a fee structure which is fair. There is no exercise which says that if you take 15 bucks a million, it’s because you save 30, for example. The pricing has more to do with what the bank spent developing the algo than with how much it saves us. So the cost side is pretty much skewed one way, and the benefits are not clear at all times.
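One way to make that comparison concrete is to put the fee and the saving in the same unit. For a USD-quoted pair with a four-decimal pip, one pip is worth about $100 per million traded, so a dollars-per-million fee converts to pips directly. The figures below are illustrative only, not PGGM’s actual fees:

```python
def fee_in_pips(fee_usd_per_million: float,
                pip_value_usd_per_million: float = 100.0) -> float:
    """Express an algo fee quoted in $ per million traded as pips,
    so it can be netted directly against the spread the algo saves."""
    return fee_usd_per_million / pip_value_usd_per_million

# A $15-per-million fee costs 0.15 pips; it only pays off if the
# algo saves more than that versus hitting the risk price.
fee = fee_in_pips(15.0)
```

The "15 bucks versus 30" exercise the interviewee describes would then reduce to checking whether the measured spread saving exceeds this converted fee.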

What we’ve tried to do is, the moment someone launches a new algo, the first question we ask is: how much are we willing to pay for it? I think that’s a good approach. It’s more a supply and demand factor. As long as algos get priced fairly, the market will consume them. That’s more or less the philosophy of most banks. Doing this comparative exercise becomes very difficult. First of all, you’d have to split every order into more components to give you some comparison, and then you’d have to ignore the fact that you might have signaling risks. That’s where we think it’s too challenging. But it is part of what we do think is the next step, which is going back to the smart order routing principle, because right now one has to make too many choices, which is all too manual.

PF: Basically when we receive an order, we can execute at market, or we can use an algo. And before we can make the decision we should know two things. One, what is the bid/offer spread in this currency for this particular size? So for 100 million euro/sterling let’s say it’s 2.7 pips. Two, how does this 2.7 translate into a time factor? In other words, how much will sterling move in a second, a minute, five minutes, before this bid/offer spread is equalised by the time component? If we don’t do anything for 10 minutes, how much risk do we actually add in terms of expected market movement? Let’s say this 2.7 pips is similar to 3.45 minutes, so basically our decision-making process says we can do two things: hit the risk price, or use an algo which cannot run longer than 3.45 minutes, because beyond that we’re actually going to add risk which is not being compensated by a premium. At least that’s what the forecasting model will say. This gives us one factor: what does an algo bring? It will try to save us a bit on spreads, if we keep to the principle, which is: do not extend the duration of the algo beyond a certain point, because then you are actually adding risk. This is in essence what an algo should do: get us liquidity at a lower cost. And there comes the relationship with the cost involved in using algos, the premium. So that should be deducted from the initial spread.
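The 2.7-pip / 3.45-minute trade-off can be modelled with the standard random-walk assumption that expected movement grows with the square root of time. The volatility figure below is purely illustrative, chosen only so the numbers roughly match the example in the text; the interview does not specify the forecasting model:

```python
def max_algo_duration(spread_pips: float,
                      vol_pips_per_sqrt_min: float) -> float:
    """Longest run time (in minutes) before the expected market move
    (vol * sqrt(t), a random-walk assumption) eats the spread saving:
    solve vol * sqrt(t) = spread for t."""
    return (spread_pips / vol_pips_per_sqrt_min) ** 2

# With a 2.7 pip spread and ~1.45 pips per sqrt-minute of short-term
# volatility, the break-even is about 3.5 minutes; running the algo
# longer adds risk the spread saving no longer compensates.
t_star = max_algo_duration(2.7, 1.45)
```

The decision rule in the text then follows: hit the risk price, or run an algo no longer than `t_star` minutes.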

In your end-of-month or end-of-year performance you should see that reflected: how much risk did you add versus how much revenue did it generate? This is an exercise we think not a lot of people do. Most banks ask us to do their test work and then come up with modifications to improve their service. Because we are engaged in this development process right from the start, we get tailor-made solutions from them. That is an important benefit for our clients.