Adaptability through the Pandemic: Poetry and Machine Learning


The material below was prepared by Millburn. Please see the important disclosures appearing here (https://www.millburn.com/disclosures) and at the bottom of this page. PAST PERFORMANCE IS NOT NECESSARILY INDICATIVE OF FUTURE RESULTS. THE POTENTIAL FOR PROFIT IS ACCOMPANIED BY THE RISK OF LOSS.

A Q&A with Barry Goodman and Grant Smith

Machine learning takes its lessons from data. And while history rarely repeats perfectly, in words attributed to Samuel Clemens (a.k.a. Mark Twain), “…it often rhymes.” The beauty of machine learning, in our view, lies precisely in its ability to find these rhymes—repeatable, tradable patterns in market behavior teased out, tested and confirmed, often using decades’ worth of data.

History may rhyme, but some rhymes are easier to spot than others.

To extend the metaphor, some rhymes are easier to spot than others. For example, it doesn’t take high-powered computers or a sophisticated statistical learning approach to find a basic relationship between, say, low global supply of coffee beans and rising prices. Think of this as a traditional rhyme. Simple, understandable, but perhaps a bit limiting.

But what about when you consider the effects of seasonality (winter and summer demand patterns)…is the relationship between inventory and price still valid, or is it conditional on the time of year? And if we also consider shifting market sentiment towards coffee consumption, does the relationship strengthen or weaken? Now we are venturing into the land of the Shakespearean sonnet. While more complex, and requiring perhaps more effort to construct, it is a different and potentially very powerful way of examining the world.

Advantages of machine learning approaches include this ability to uncover complex relationships between different data inputs (or “features” in machine learning parlance)—put another way, these approaches are well-suited to extracting signals from noisy data sets. And the data sets themselves can be vast, meaning the strategies can have long memories. Millburn’s approaches often utilize 10, 20, 30 or more years of data. Finally, the strategies can learn over time, adapting autonomously as market conditions change.

Of course, no strategy is perfect. While learning from history is, we believe, a very logical and powerful approach to predicting price movements in markets (and one that has proven profitable since we began using it in 2013), it is simultaneously a potential weakness that must be carefully considered. Specifically, in order to be most accurate in these price forecasts, we want: a) to include all potentially relevant features (i.e., drivers of return) in the models’ training sets; and b) environments today that are (somewhat) similar to the past…i.e., environments that “rhyme.”

March 2020 provides perhaps the perfect case study to demonstrate both strengths and weaknesses, and the opportunity to think about how to continue to improve. Below we present a Q&A with Grant Smith and Barry Goodman, Millburn’s co-CEOs and senior members of Millburn’s Investment Committee.

Q.

How did your machine learning strategies fare in March?

A.

[Barry Goodman] We run a number of programs at the firm, but the common thread is that the active risk-taking—the “signals” we generate that tell us whether to go long or short, or whether to take a more opportunistic or defensive stance in a market—is driven 100% by machine learning technology.

Performance in March differed by program, with some posting losses that were larger than typical. This unfortunately didn’t help those investors who were looking to us to potentially provide some relief from losses they were seeing elsewhere in their portfolios. We felt those losses too, as substantial investors in our programs ourselves.

While the losses in the underperforming portfolios were not pleasant, two of our portfolios actually posted their best ever and second-best ever monthly returns since inception (2016 and 2005, respectively). So for anyone who asks the question “is machine learning broken,” we think the answer is clearly no. But this doesn’t mean we aren’t working harder than ever to analyze, understand and make improvements.

For strategies that underperformed, what was the reason? What has your analysis found so far?

[Grant Smith] In terms of attribution, the source of the pain in the portfolio was really concentrated in one sector: equities. If you followed the equity markets closely, as the pandemic accelerated we started to see some very unusual behavior in many—actually almost all—of the equity markets that we trade. As an example, at one point in March we saw the S&P500 drop more than 28% over just 13 trading days. This was something truly without precedent. Using the S&P as an example again, we also saw material, sustained volatility, with daily price moves exceeding 10% both up and down. Again, this was behavior we had never seen before. These are just two examples in only one market, but we were seeing this type of activity practically everywhere in the equities sector, across all regions.

So was the strategy able to make good forecasts in these conditions?

[GS] The accuracy of the models’ forecasts in the equity sector was certainly worse than we would have hoped. The strategy learns from history. Models analyze decades of historical data and try to find tendencies, or patterns. The systems use these patterns to make return forecasts, which result in the generation of trades and positions. This can work well when something similar to the current environment also occurred in the past. But if we enter an environment that is completely new, or at least very different from the past, the models are going to have a harder time finding good matches with history. Their forecasts may turn out to be right or wrong, but they will most likely be less accurate.

The strategy learns from history. Models analyze decades of historical data and try to find tendencies, or patterns.
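To make the “matching with history” idea concrete, here is a deliberately simplified, hypothetical sketch. It is not a description of Millburn’s actual models: a nearest-neighbor style forecast that looks for the historical periods most similar to today and averages what followed. When the current environment is genuinely novel, even the closest matches are far away, which is one way such a system can flag that its forecast is less reliable.

```python
# Toy illustration only (not Millburn's models): forecast a market's next-period
# return by finding the historical days whose feature vectors most resemble today's
# and averaging what happened next. A novel environment shows up as distant neighbors.
import numpy as np

def knn_forecast(history_features, history_next_returns, today_features, k=25):
    """history_features: (n, d) past feature vectors (e.g., trend, volatility, carry).
    history_next_returns: (n,) returns realized after each vector.
    Returns (forecast, mean_distance); a large mean distance signals a poor match."""
    dists = np.linalg.norm(history_features - today_features, axis=1)
    nearest = np.argsort(dists)[:k]
    return history_next_returns[nearest].mean(), dists[nearest].mean()

# Placeholder data purely for demonstration
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))           # hypothetical daily feature history
y = rng.normal(scale=0.01, size=5000)    # next-day returns that followed
forecast, match_quality = knn_forecast(X, y, today_features=rng.normal(size=4))
print(f"forecast: {forecast:+.4%}, mean neighbor distance: {match_quality:.2f}")
```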

[BG] I think it is important to reiterate that it really was overwhelmingly the equities sector that caused problems, and this was where we saw unique market behavior that simply was not present in the training sets for these models. And it really was a coordinated move across practically all global equity markets, which meant that the diversification by geography that normally would help did not in this case. On average, the other sectors in our diversified long/short programs were able to make good forecasts and were positive. But certainly, the models had difficulty in equities.

In equities, then, what were the models seeing?

[GS] Our research has shown that, based on history, swift moves in equities, either to the upside or the downside, are typically followed by the market reverting to the mean shortly thereafter. This is based on decades of data and is a particularly strong effect in equities as opposed to other sectors. These mean-reverting tendencies can be reinforced by other factors that have historically resulted in rising equities, such as falling interest rates or cheap energy. Given the full line-up of data inputs the models considered, they determined that we should take long positions in most equity markets, and generally very high-conviction positions at that. There were some models that were forecasting prices to fall, but the strength of these high-conviction long signals really carried the day. What this meant is that our systems were expecting the markets to rebound. The rebound happened eventually, but in the meantime, the portfolio was quite stubborn and took some unusually large—though not unprecedented—losses before coming back a little at the end of March.
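For illustration only, a mean-reversion tendency of the kind described above could be expressed along the following hypothetical lines; the lookback, threshold and scaling below are assumptions, not Millburn’s parameters.

```python
# Illustrative sketch of a mean-reversion tendency of the kind described above;
# the lookback, threshold and scaling are assumptions, not Millburn's parameters.
import numpy as np

def mean_reversion_signal(prices, lookback=20, threshold=2.0):
    """Signal in [-1, 1]: long after unusually sharp falls, short after sharp rallies."""
    returns = np.diff(np.log(prices))
    recent = returns[-lookback:]
    # standardize the cumulative recent move against its typical volatility
    z = recent.sum() / (recent.std(ddof=1) * np.sqrt(lookback))
    if z <= -threshold:
        return min(1.0, -z / (2 * threshold))   # sharp fall: expect a rebound, lean long
    if z >= threshold:
        return max(-1.0, -z / (2 * threshold))  # sharp rally: expect a pullback, lean short
    return 0.0
```

A rule of this general shape keeps leaning long as prices keep falling, which is one way to picture the “stubbornness” described here.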

Were there automated risk controls that kicked in during this high-volatility period?

[BG] Absolutely. Our investment process takes a risk-based approach to portfolio allocation. This means we budget a certain number of contracts or securities that we can hold based on the perceived “riskiness” of the asset. Automatic adjustments occur to that budget when short-term riskiness spikes. As an example, let’s assume that under normal circumstances, our systems might be allowed to hold a maximum of 70 futures contracts on the S&P500 Index in a $100 million account. If volatility in the S&P500 were to rise substantially, the systems would automatically reduce that maximum of 70 to something much smaller in order to maintain what they see as “constant risk.” So those automated risk controls did indeed reduce exposures as volatility spiked. This happened market-by-market. However, in March, the portfolio as a whole experienced volatility that rose incredibly rapidly to extreme levels, and it became clear that even these automated cuts in individual markets weren’t going to be enough.
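As a rough worked sketch of that budget arithmetic: the $100 million account and the 70-contract maximum come from the text above, while the risk budget, contract notional and volatility figures below are illustrative assumptions.

```python
# Worked sketch of the "constant risk" contract budget described above. The
# $100 million account and the 70-contract maximum come from the text; the risk
# budget, contract notional and volatility figures are illustrative assumptions.
def max_contracts(account_equity, risk_budget_pct, contract_notional, annualized_vol):
    """Largest position such that estimated dollar risk stays near the budget."""
    dollar_risk_budget = account_equity * risk_budget_pct
    risk_per_contract = contract_notional * annualized_vol
    return int(dollar_risk_budget / risk_per_contract)

equity = 100_000_000                                   # $100 million account
notional = 150_000                                     # assumed S&P500 futures notional
print(max_contracts(equity, 0.021, notional, 0.20))    # calm markets: 70 contracts
print(max_contracts(equity, 0.021, notional, 0.80))    # volatility quadruples: 17 contracts
```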

Were other actions taken?

[GS] Oversight by our Investment Committee (the “IC”) is a constant part of our process. So even though we aim to be fully systematic, there are always experienced individuals—including the IC members but also other members of our risk, operational, and management teams—who are watching the portfolio. The IC meets regularly but much more frequently during unusual situations. As an example, we met prior to Brexit, and prior to the last presidential election here in the US. So, as you might expect, we were meeting very frequently before and after the declaration of the pandemic.

What did the IC look at in making the decision to step in and cut risk?

[GS] Key quantitative metrics the IC watches include the short-term volatility of the portfolio and exposure levels. In this case, portfolio volatility in general spiked to levels beyond anything we had seen in recent history, peaking at levels almost double those seen even during the depths of the global financial crisis, which was the last time the IC had manually intervened.

[BG] In this case the IC made the determination that, based on the data we were seeing, we should temporarily take some risk off the table until volatility fell back a little closer to the standard range and until our models became a little less stubbornly long equities. It wasn’t an easy decision to make, but the environment was clearly quite unique and, further, was still very uncertain.

By the way, we used some of the same metrics to determine when to re-risk the portfolio and get back to what we would call “standard” exposures, which we treat as an equally important decision. All in all, the de-risking and re-risking played out over the course of a little less than one month. During this entire manual intervention we never contravened the direction of our models’ signals—we only took steps aimed at controlling exposures.

[GS] That’s correct. Today, we are back to operating with standard risk as called for by the automated processes. The equity sector models are operating normally and, as Barry said, are now generating signals that are diversified as opposed to being highly correlated.

Have the models “learned” from these market events?

[BG] One of the powerful things about these models is their self-adapting properties. This is one reason we think investors should have learning strategies in their portfolio, especially in times of uncertainty. In normal market environments, our models are “re-fit” or “re-learned” periodically throughout the year. So maybe every few weeks a subset of the models in the portfolio is rebuilt, essentially folding the latest several months of data into the training set and asking the machine learning algorithms to determine whether the drivers of return are changing.

These models can self-adapt. This is one reason we think investors should have learning strategies in their portfolio, especially in times of change and uncertainty.

 

[GS] Yes, each time you fold that new data into the training set and apply the machine learning algorithms, it will probably mean the models will change a bit—the process will find new, or slightly different, patterns or will make slight adjustments to what it sees as the influencers of market movements. In a case like March, recent market behavior was so different from what the models understood the structure of the market to be that a refit would certainly impact the models’ structure. Changes will be slow, in general, but can be faster when you encounter unique and/or very extreme moves.

[BG] Yes, so to Grant’s point, when we see unique market behavior, one of the first things we can do is accelerate this “re-fit” schedule and refresh models to enable them to learn immediately. It is a way for the systems to keep up with a changing environment, and we believe it is potentially one of the key advantages of our approach.
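A minimal sketch of the walk-forward “re-fit” idea is shown below; the estimator, window length and refit schedule are illustrative assumptions rather than Millburn’s actual configuration.

```python
# Minimal sketch of a walk-forward "re-fit" schedule: every `refit_every` days,
# rebuild a model on (up to) the last `window_days` of data. The estimator,
# window and schedule are illustrative assumptions, not Millburn's.
from sklearn.linear_model import Ridge

def refit_on_schedule(features, targets, window_days=7500, refit_every=20):
    """Return a list of (as-of index, fitted model) pairs, one per refit date."""
    models = []
    for t in range(refit_every, len(features), refit_every):
        start = max(0, t - window_days)              # cap the training window length
        model = Ridge(alpha=1.0).fit(features[start:t], targets[start:t])
        models.append((t, model))
    return models
```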

You are re-learning (or “refitting”) the models using March data, but some people think that March was a true “one-off” situation that may not be repeated for another 100 years. Others disagree. How do you address this tension in your re-learning process? How do you make sure you don’t put too much weight on what may be a very unique period?

[GS] Time will tell whether this was a one-off or whether we will see variations of March 2020 going forward. Certainly, it was a unique moment in history relative to the data on which our models were trained. So when we re-fit, it will have an impact. That doesn’t mean all other data is forgotten or that the fitting process won’t recognize the uniqueness of this event, but at least the models will be aware that such an event is possible.

Having said that, there are many statistical techniques we use to avoid “overfitting” to any particular period in the data, which is a key consideration in any machine-learning approach. Even when we re-fit, March data will, by definition, be recognized as a rare set of observations. Some of our models will exponentially weight recent history, others will not. Some models use 5 years of history, so March will potentially be more important; other models use 30 or 40 years of history, so March will probably have less influence.
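To illustrate the weighting point: the share of total fitting weight carried by a single recent month depends on the window length and on whether history is exponentially weighted. The window lengths and half-life in the sketch below are illustrative assumptions.

```python
# Illustration of the weighting point: the share of total sample weight carried by
# the most recent month depends on the lookback and on whether history is
# exponentially weighted. Window lengths and half-life are illustrative assumptions.
import numpy as np

def weight_of_last_month(n_months, half_life_months=None):
    """Fraction of total fitting weight assigned to the most recent month."""
    if half_life_months is None:
        weights = np.ones(n_months)                # flat window: every month equal
    else:
        age = np.arange(n_months)[::-1]            # 0 = most recent month
        weights = 0.5 ** (age / half_life_months)  # exponential decay with age
    return weights[-1] / weights.sum()

print(f"5-year flat window:          {weight_of_last_month(60):.2%}")
print(f"30-year flat window:         {weight_of_last_month(360):.2%}")
print(f"30-year, 24-month half-life: {weight_of_last_month(360, 24):.2%}")
```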

Time will tell whether this was a one-off or whether we will see variations of March 2020 going forward.

[BG] But we want to include it, because that is the whole point of the machine learning technology—to learn from history and not form a biased or discretionary view on whether it was an outlier or something more meaningful or more likely to repeat. We let the data and statistics lead us.

So were something like March to happen again, would the models react differently?

[GS] Yes, they would, for the simple reason that the models now have March data included in their training sets, so they are considering this. They have indeed learned from March. That doesn't mean we wouldn't still be long equities in a similar situation, but the signals would almost certainly be less stubborn.

Each environment is unique, however, so we don’t focus on perfect repeatability, which really never happens, but rather on setting parameters so the techniques can look for “close approximations” to a match. So, as I said, while we wouldn’t necessarily expect to see massive signal changes in similar future situations, simply because the models won’t necessarily want to overweight March, the models should be better positioned given the additional data.

We don’t focus on perfect repeatability, which really never happens, but rather on looking for ‘close approximations’ to a match.

But there are other things that could also have an impact on how the systems react. A different volatility model might be faster to restrict maximum positions, for example. The research team is constantly looking at ways to generate more accurate forecasts, and ways to better manage risk.

What research innovations are in the works?

[BG] First, what we are not doing is making any wholesale changes to the model and research process. We continue to have confidence in these strategies, which have been extensively researched, and indeed we believe our core programs have delivered some very good results since the implementation of our machine learning framework more than seven years ago. So while March has energized us to move quickly on some interesting projects, we don’t want to overcorrect.

Of course, this shouldn’t be taken to mean we are suggesting that investors discount any losses or minimize the accomplishments of those managers who did well over this period. But simply put, we don’t believe forward-looking investors should choose a manager based purely on how well they did during March.

[GS] Yes that’s exactly right. But on the “interesting projects” front, we are close to implementing a proprietary model that adjusts exposures based on volatility estimators. This would essentially provide a systematic, automated mechanism for de-risking during periods of stress, and for reverting as the stress abates—in essence systematizing the more manual risk-reducing interventions of the IC. So while the IC oversight will of course remain, this should provide a faster, and potentially easier to quantify, process. We are also doing research that uses machine learning to forecast volatility, which may help us scale in and out of positions more effectively, among other things.
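As a hedged sketch of what a systematic de-risking mechanism of this general kind might look like: scale overall exposure down when estimated portfolio volatility exceeds a target, and let it drift back to standard exposure as the estimate falls. The volatility estimator, target and half-life below are illustrative assumptions, not the proprietary model referenced above.

```python
# Hedged sketch of a systematic de-risk / re-risk rule of this general kind: scale
# overall exposure by target versus estimated portfolio volatility, capped at standard
# exposure. The estimator, target and half-life are assumptions, not the model above.
import numpy as np

def exposure_multiplier(daily_portfolio_returns, target_annual_vol=0.10, half_life_days=10):
    """Return a scalar in (0, 1]: 1.0 = standard exposure, below 1.0 = de-risked."""
    r = np.asarray(daily_portfolio_returns, dtype=float)
    age = np.arange(len(r))[::-1]                # 0 = most recent day
    weights = 0.5 ** (age / half_life_days)      # emphasize recent observations
    ewm_var = np.average((r - r.mean()) ** 2, weights=weights)
    est_annual_vol = np.sqrt(ewm_var * 252)
    return min(1.0, target_annual_vol / est_annual_vol)
```

Calm markets give a multiplier near 1.0; a volatility spike pushes it well below 1.0, and it drifts back toward 1.0 as the estimate falls, mirroring the de-risk and re-risk sequence described earlier.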

And finally, there is always an ongoing search for new factors to better understand market drivers, new machine learning techniques, and new markets.

[BG] March was difficult, but it was not the first difficult period we have gone through in our 49-year history. The key is how you learn from these events and improve.

What do you see going forward? Will your strategies be well-positioned in what looks to be a period of uncertainty?

[BG] The world has changed, and at least for the near term, we are all making adjustments in our personal and professional lives. Nothing is certain, but right now, from a markets point of view, it looks like we will be entering an era of increasing frictions, including across geographies. Economies, currencies, and asset classes, in general, seem to be heading toward a decoupling. There is massive fiscal and monetary stimulus that has been put into place as governments and central banks seek to prop up the economy, while at the same time forces of stalling global growth and increasing unemployment act to suppress it. In my nearly 40-year career I have never seen such a strong confluence of market conditions.

We think this will demand an adaptive approach that can trade a range of instruments and find alpha wherever it occurs. A portfolio that has opportunities to trade a variety of asset classes—including equities but also sectors like currencies, commodities, and fixed income, for example—and a portfolio that trades cross-geography will be potentially well-positioned, in our view. We believe less flexible rules-based approaches or approaches that can’t make use of the growing amount of data available to us may be less effective.

[GS] Like our models, we are continually learning. ●

The future may demand an approach that is adaptive, that can trade a range of instruments and find alpha wherever it occurs.




***

IMPORTANT DISCLOSURES

The information provided is accurate as of the date indicated and may be superseded by subsequent market events or for other reasons. Charts and graphs provided herein are for illustrative purposes only. The information in this presentation has been developed internally and/or obtained from sources believed to be reliable; however, neither Millburn nor the author (if not Millburn) guarantees the accuracy, adequacy or completeness of such information. Nothing contained herein constitutes investment, legal, tax or other advice nor is it to be relied on in making an investment or other decision.

This presentation should not be viewed as a current or past recommendation or a solicitation of an offer to buy or sell any securities or to adopt any investment strategy.

The information in this presentation may contain projections or other forward-looking statements regarding future events, targets, forecasts or expectations regarding the strategies described herein, and is only current as of the date indicated. There is no assurance that such events or targets will be achieved; actual results may be significantly different from those shown here. The information in this presentation, including statements concerning financial market trends, is based on current market conditions, which will fluctuate and may be superseded by subsequent market events or for other reasons. Performance of all cited indices is calculated on a total return basis with dividends reinvested. Neither Millburn (nor any author other than Millburn) assumes any duty to, nor undertakes to, update forward-looking statements.

The performance and other information is based on that which is available as of the date of this report. Any markets, models, leverage, portfolio weights and other data or statistics described change over time, but are accurate as of the date indicated herein.

This information is not an offer to sell any product or a solicitation of an offer to invest or open an account (an “Account”) and must be supplemented by a disclosure document when considering an investment. An Account may be opened only after receipt and review of a disclosure document and execution of certain agreements. An Account disclosure document contains important information concerning risk factors, conflicts of interest and other material aspects of an investment; this must be read carefully before any decision whether to invest is made.

Commodity interest accounts are illiquid, speculative, employ significant leverage, and involve a high degree of risk. Commodity interest products involve high fees. Please see the disclosure document for a detailed description of these and other "Risk Factors" and "Conflicts of Interest." There can be no assurance that an Account will achieve its objectives.

RISKS OF AN INVESTMENT IN EACH MILLBURN PROGRAM include but are not limited to the following: (i) The Program is speculative. Investors may lose all or a substantial portion of their investments in the Program. An investor in an Account that is not structured as a limited liability vehicle may lose more than the amount of its investment; (ii) The Program is leveraged. The Program will acquire positions with a face amount of as much as eight to ten times or more of its total equity. Leverage magnifies the impact of both profit and loss; (iii) The performance of the Program is expected to be volatile; (iv) Investors will sustain losses if the Program is unable to generate sufficient trading profits and interest income to offset its fees and expenses; (v) A lack of liquidity in the markets in which the Program trades could make it impossible for the Program to realize profits or limit losses; (vi) A substantial portion of the trades executed for the Program take place on non-US exchanges. No US regulatory authority or exchange has the power to compel the enforcement of the rules of a foreign board of trade or any applicable foreign laws.