Can you tell us a bit about the history of how you got into trading and the formation of STCM?
I’m an engineer by training, and I worked mostly on environmental engineering and water science projects, designing things like dams and water treatment facilities and modelling rainfall-runoff and flood dynamics. That gave me a quantitative background. Around 2010 I got really interested in trading and started meeting people who were trading professionally. Through a contact who worked at a fund, I got a role doing coding, analytics, and systems automation for its trading desk.
From there I moved on to freelance data science consulting for financial services, where I had all sorts of clients, from large funds to fintech startups, which gave me a lot of useful insight into how these businesses worked. I did a machine learning project for a proprietary trading group (PTG) in Sydney, which went very well and culminated in them offering me a full-time role and a stake in the business. That was really my big break: wonderful job, wonderful people, and I learnt an enormous amount. As it was a small team, I got to see all parts of the business, from research and execution software to data engineering and portfolio management, plus some business admin.
During this time I was also running a blog called Robot Wealth (RW) that I’d started in 2015, where I was sharing some data science and quant-focused trading material, which took up most of my spare time. Eventually, for reasons that had nothing to do with my job at the Sydney PTG (which I dearly loved) and everything to do with personal circumstances, it made sense to leave my job and run RW full time. Through this I met my business partner in 2019 and we subsequently started trading seriously under the RW umbrella in 2020. In 2021, we made a sharp pivot into crypto, as that was where we saw the best opportunities, and spun up a separate PTG entity (STCM) through which we’d do all our prop trading going forward.
How does the Robot Wealth group fit alongside STCM’s business?
The idea with RW was to explore how data science can inform trading, plus provide practical code examples in R and C. It morphed into a group of independent traders that now has about 500 members all around the world. It’s not a group in the sense of a proprietary trading group, more a community of traders collaborating on ideas, development and practicalities. My business partner and I run the group as administrators and run formal(ish) courses on a wide range of subjects, which, with broad community engagement, produce tradable ideas and (where appropriate) the necessary code and tools. In addition, people within the group often form their own smaller groups to explore other ideas. The output and code from some of these sub-projects might also be shared with the main group.
Most of the things that we explore with the group fall under three broad categories:
- Edges or trading ideas that are big enough that the community can participate in them without getting in each other’s way
- Edges that can be realistically managed by someone who is committed to trading, but who also has a job, family and the usual commitments. Think liquid markets requiring little to no execution finesse (e.g. market-on-close orders).
- Tools and techniques such as data engineering, data analysis, trader smarts.
We trade many of the things we’ve explored with the group in STCM, but also things that don’t fall into the categories above (e.g. high frequency crypto latency arb).
What asset classes and strategies is STCM trading?
- Simple, low-frequency strategies in liquid assets that can be managed without a lot of screen time, which is also what we specialise in with our RW membership group
- A portfolio of low-frequency FX alpha trades
- Equity pairs trading
- Trading volatility products – in particular, timing the volatility risk premium
- Seasonality effects and noisily predictable rebalance flows in liquid ETFs
- Crypto trend and momentum
- Some higher frequency arbitrage strategies in crypto
How did FX come to be part of this mix?
Way back when I first got interested in trading, I was doing a certain amount of what was effectively retail FX trading, but I got some valuable insights from my professional trading in Sydney. Our PTG had a CME seat, so we were using DMA for FX futures, and we were active on a number of futures exchanges around the world. We were doing cross-exchange spreading between various futures, and would sometimes include an FX futures leg, either as a hedge or as part of the spread itself.
What’s the process for developing new FX alpha algorithms?
Occasionally, the starting point may be an inefficiency that’s appeared in an academic paper, but most of the time it’s something that has cropped up in conversation with somebody else or something we’ve already observed or hypothesised about that we want to explore further.
I think the key is turning initial observations into potential trading ideas. For instance, we were wondering if there was an exploitable reversion opportunity between groups of currency types. We used clustering of FX pairs to explore this in R. We ended up with some European currencies in one group, commodity currencies in another and Japanese yen all on its own. That prompted us to look at the idea of building a mean reverting model between baskets of commodity and European currencies.
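As a sketch of that clustering step: the interview describes doing this in R, but the idea is language-agnostic, so here is an illustrative Python version on synthetic data. The pair names, factor structure and cluster count are all assumptions made for the example, not the actual data or code used.

```python
# Illustrative sketch (not the authors' code): cluster FX pairs by return
# correlation. Synthetic returns where "European" pairs share one factor,
# "commodity" pairs share another, and JPY is idiosyncratic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
n = 500  # number of daily returns

eur_factor = rng.normal(0, 1, n)
cmd_factor = rng.normal(0, 1, n)
returns = {
    "EURUSD": eur_factor + 0.5 * rng.normal(0, 1, n),
    "GBPUSD": eur_factor + 0.5 * rng.normal(0, 1, n),
    "AUDUSD": cmd_factor + 0.5 * rng.normal(0, 1, n),
    "NZDUSD": cmd_factor + 0.5 * rng.normal(0, 1, n),
    "USDJPY": rng.normal(0, 1, n),
}
names = list(returns)
R = np.corrcoef([returns[k] for k in names])

# Distance = 1 - correlation, condensed to the upper triangle, then
# average-linkage hierarchical clustering cut into three clusters.
dist = 1.0 - R[np.triu_indices(len(names), k=1)]
labels = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")

clusters = {}
for name, lab in zip(names, labels):
    clusters.setdefault(lab, []).append(name)
print(sorted(clusters.values(), key=len))
```

With this made-up factor structure, the correlated pairs group together and USDJPY lands in a cluster of its own, mirroring the European/commodity/yen split described above.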
Is there a real-world premise like that underlying all your alpha algorithms?
Yes. I’m open-minded about other approaches, but we generally want a real-world basis for why an inefficiency might exist and what causes it, such as large-scale institutional behaviour, or traders with different motivations to us. Some in the professional trading world take a similar view, but others are happy to data mine an inefficiency and trade it regardless. Both approaches can work, but if you have a real-world basis you have some idea of what’s driving your returns, so if that driver changes you have early warning that your alpha may be about to degrade. That sounds a lot more certain than it really is, of course. You never really know exactly what’s driving your returns, unless you’re doing something super obvious like hitting stale quotes, but having that frame of reference helps you make more informed decisions in real time than relying on recent trading returns alone.
What about FX execution algorithms?
We have a pretty diversified portfolio of alpha algorithms and markets, so as regards FX we simply minimise our execution risk by keeping our size per algo (we currently trade five FX alpha algos) pretty small. The bulk of our trading is alpha trading, so we’re happy to pay up to get the trade on relatively quickly: a good alpha signal often decays fastest, so we don’t want to be waiting around at the top of the book trying to finesse our execution. Since we’re small in any single trade, we tend not to push the market around. I noticed that this approach was pretty common among the PTGs I interacted with professionally, including the one I worked at in Sydney. People realised they could probably squeeze more alpha out of an idea if they spent time on enhancing execution, but generally they concluded that the work required didn’t justify the additional P&L opportunity. So they preferred to size trades to minimise execution risk and used the time saved to work on the next alpha idea. (There were exceptions to this, of course.)
By contrast, prior to the collapse of FTX, when we were doing larger volumes of cryptocurrency, we spent a lot of time thinking about execution risk and algorithms. Those markets were extremely illiquid and we often represented a significant proportion of the day’s volume. A lot of the click trading that we were doing was both driving P&L and acting as an exercise in information gathering for designing execution algos. We spent a lot of time building something low latency that we could run on FTX, which unfortunately then became irrelevant. I don’t see it as wasted time by any stretch – I learned an absolute ton through this process that I’ll certainly use again.
What’s your development process for new alpha algos?
Once we have the initial idea/inefficiency and its possible cause, we then start with some really basic data analysis in R. The objective is to simplify things as much as possible and isolate the edge we are looking for. We’re not expecting a cut and dried situation, but we’d like to see our hypothesis about the idea’s driver play out in the data, however noisy that edge may be. We’ll commonly start with some simple scatter and factor plots, plus data bucketing over time to explore whether the inefficiency has been persistent. In simple terms, we treat this analysis phase as if we were curious scientists seeking to find out as much as we can from the information at hand. It’s iterative, and the analysis process often reveals the next question.
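The bucketing-over-time check described above can be illustrated with a minimal Python sketch (the interview mentions R, but the idea carries over). Everything here is synthetic and exaggerated for clarity: the "edge" is an assumed linear relationship between a signal and next-day returns, and we simply measure it separately in consecutive chunks to see whether it persists.

```python
# Illustrative sketch (synthetic data): has the conditional edge been
# persistent across sub-periods of the sample?
import numpy as np

rng = np.random.default_rng(0)
n_days = 2000
signal = rng.normal(0, 1, n_days)
# Next-day return with a deliberately exaggerated linear link to the signal.
fwd_ret = 0.2 * signal + rng.normal(0, 1, n_days)

# Bucket the sample into consecutive ~1-year chunks and measure the edge
# in each: mean return when the signal is positive minus mean when negative.
bucket_size = 250
edges = []
for start in range(0, n_days, bucket_size):
    s = signal[start:start + bucket_size]
    r = fwd_ret[start:start + bucket_size]
    edges.append(round(r[s > 0].mean() - r[s < 0].mean(), 4))

# A noisy but consistently positive edge across buckets is the kind of
# persistence we'd want to see; sign flips everywhere would be a red flag.
print(edges)
```

In practice the same per-bucket statistics feed the scatter and factor plots mentioned above; the point of the sketch is only the shape of the check, not the numbers.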
A big part of this process is looking for tests that would disprove our hypothesis. For instance, with the commodity currency idea mentioned earlier, this might involve replacing some of the commodity currencies with two random ones. If the effect still exists, then our original premise is clearly flawed. If there’s a creative aspect to data analysis and experimentation it’s this – coming up with little hypotheses and mini experiments to guide our research.
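As a concrete illustration of that kind of placebo test, here is a hedged Python sketch on synthetic data (the pattern, not the authors' actual experiment). It measures a basket spread's mean reversion as its lag-1 autocorrelation, then recomputes the same statistic after replacing the "commodity" basket with unrelated random walks; if the placebos look just as good, the original premise is suspect.

```python
# Illustrative placebo test (synthetic data). A spread that mean-reverts
# has lag-1 autocorrelation well below 1; a random-walk spread sits near 1.
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Build two baskets whose spread mean-reverts by construction (AR(1)).
spread = np.zeros(n)
for t in range(1, n):
    spread[t] = 0.7 * spread[t - 1] + rng.normal()
eur_basket = np.cumsum(rng.normal(size=n))
cmd_basket = eur_basket + spread

def reversion_stat(a, b):
    """Lag-1 autocorrelation of the spread a - b; near 1 => random walk
    (no reversion), materially lower => mean reversion."""
    s = a - b
    return np.corrcoef(s[:-1], s[1:])[0, 1]

real = reversion_stat(cmd_basket, eur_basket)

# Placebo: swap the commodity basket for unrelated random walks and see
# how often the placebo looks at least as mean-reverting as the real pair.
placebos = [reversion_stat(np.cumsum(rng.normal(size=n)), eur_basket)
            for _ in range(200)]
pct = np.mean([p < real for p in placebos])
print(round(real, 3), round(pct, 3))
```

Here the real spread's statistic sits far below the placebo distribution; if random baskets had scored similarly, the "commodity vs European" framing would add nothing and the effect would be suspect.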
Once we’ve reached the stage where we feel reasonably confident, we may start click trading the idea in small size alongside further analysis. That gives immediate, clear feedback about both the idea and its practical trade execution. Trying to simulate trade execution as part of the initial data analysis would be a significant task in terms of the data involved, and the costs and time required would expand significantly. Click trading gives real-time information on how execution is likely to play out, plus potentially additional insights. If the data analysis and the click trading results work out and it’s something we want to automate, we’ll then write some execution code to handle it.
Finally, on that last point, what’s your take on the trading technology and brokerage available to small/mid-sized prop groups such as STCM?
To be frank, we think there’s a significant gap in the market here – our demographic is not well served. If you want a broad range of markets plus an API and reasonable commissions, there are very few choices, and often the associated technology is not robust and the support is worse, though there are some exceptions, particularly in the futures space. I think that’s why some PTGs I know that are slightly bigger than us prefer to build some or all of their infrastructure in-house, so they have more control. The off-the-shelf tools and brokerage APIs currently available to us are suboptimal, which is one of the reasons that, when we automate things, we prefer to stay relatively low frequency. Unravelling tech glitches when you’re dealing with hundreds or even thousands of orders is just not worth the grief. There’s definitely an opportunity for someone to step in and provide a combination of brokerage and tools focused somewhere between retail and large institutional traders.