There are no silver bullets for profiling risk, but drawdown’s properties arguably give this metric a leg up over most of the competition. The combination of an intuitive framework, simplicity, and sharp focus on how markets actually behave is a tough act to beat.
Perhaps the strongest argument in favor of drawdown can be summed up by recognizing that peak-to-trough declines always resonate with investors. Sharpe ratio, Sortino ratio and the like are too abstract for most folks, but no one’s eyes will glaze over when you’re discussing losses relative to previous peaks.
Drawdown doesn’t replace other risk measures, but any risk analysis that excludes this metric may be overlooking crucial insight. With that in mind, the first order of business is developing robust estimates for assets and investment strategies to answer the question: What should you anticipate for maximum drawdown (MDD)?
The obvious way to begin is looking at the historical record. As an example, let’s review the history of the US stock market by way of the SPDR S&P 500 ETF (SPY), which tracks the S&P 500 Index. The ETF’s MDD since its 1993 launch is a hefty 55%. As such, it’s reasonable to assume that this fund is subject to getting cut in half in the depths of a bear market.
It’s tempting to leave it there, but SPY’s relatively short record inspires more stress testing to consider what might be lurking down the road. There are many possibilities on this front, but let’s start with a simple round of resampling the actual data for additional insight. The technique here is to create new sequences of daily returns from the actual track record to simulate alternative histories that might have occurred.
Let’s fire up R to generate 10,000 resampled variations, with the results shown in the chart below. As you can see, the range of MDDs varies widely, from a relatively light 31% loss up to a dramatic 97% crash. Most of the MDD estimates, however, fall between 62% and 77% (based on the interquartile range of the data). The median MDD for this simulation is 69%, or moderately deeper than the 55% decline posted in the actual record since 1993. In other words, there’s a case for thinking that SPY’s MDD could be worse in the future, perhaps much worse.
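The resampling procedure above can be sketched in a few lines of R. The returns below are a synthetic stand-in (the mean and volatility are illustrative assumptions, not estimates from SPY's actual record), and a self-contained base-R function replaces maxDrawdown() from the PerformanceAnalytics package:

```r
# Base-R stand-in for PerformanceAnalytics::maxDrawdown():
# deepest peak-to-trough decline of a daily-return series.
mdd <- function(ret) {
  wealth <- cumprod(1 + ret)            # growth of $1
  max(1 - wealth / cummax(wealth))      # largest fractional drop from a running peak
}

# Bootstrap one alternative history by resampling daily returns with replacement.
resample_mdd <- function(ret) {
  mdd(sample(ret, replace = TRUE))
}

set.seed(42)                                        # illustrative seed
ret  <- rnorm(252 * 24, mean = 0.0004, sd = 0.012)  # fake ~24-year daily record
mdds <- replicate(1000, resample_mdd(ret))          # article uses 10,000 runs

summary(mdds)                          # median and interquartile range of simulated MDDs
```

With the real SPY return series in place of the synthetic `ret`, the distribution of `mdds` is what the chart above summarizes.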
For another perspective, let’s assume a normal distribution and create 10,000 synthetic runs of 30-year returns. This result also suggests that SPY’s MDD could exceed the historical record. In this case, the median MDD is also 69%, although the worst losses go even higher. The clustering in the upper ranges implies that bigger MDDs than seen in the historical record are more likely than the resampled analysis suggests.
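A sketch of the normal-distribution variant: synthetic 30-year daily return paths drawn with rnorm(). The mean and volatility parameters are illustrative assumptions, not fitted values from the SPY record:

```r
# Deepest peak-to-trough decline of a daily-return series (base R).
mdd <- function(ret) {
  wealth <- cumprod(1 + ret)
  max(1 - wealth / cummax(wealth))
}

# One synthetic 30-year path under a normal distribution, reduced to its MDD.
sim_mdd_normal <- function(n_days, mu, sigma) {
  mdd(rnorm(n_days, mean = mu, sd = sigma))
}

set.seed(1)
mdds_norm <- replicate(1000,                         # article uses 10,000 runs
                       sim_mdd_normal(252 * 30, mu = 0.0004, sigma = 0.012))
median(mdds_norm)                                    # compare with the 55% historical MDD
```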
Finally, let’s kick the tires by simulating MDD on the assumption that SPY’s returns will exhibit fatter tails than a normal distribution allows. The workhorse for this test is the Student’s t distribution. As you can see below, the MDD simulations are skewed to the right, which is to say that the probability of losses exceeding the historical record is greater under this assumption than when modeling expectations with a normal distribution. Indeed, the median MDD for the Student’s t estimate is 79%, or 10 percentage points deeper than the estimated MDD via the normal distribution. Note too that the losses in the Student’s t modeling are conspicuously biased to the right, offering another hint that SPY’s future MDD could be surprisingly deep relative to what we’ve seen in the past.
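The fat-tailed variant swaps rnorm() for rt(). One wrinkle: raw rt() draws with df degrees of freedom have variance df/(df-2), so they should be rescaled to a target daily volatility. The sketch below uses 5 degrees of freedom (the setting the author mentions in the comments); the mean and volatility are illustrative assumptions:

```r
# Deepest peak-to-trough decline of a daily-return series (base R).
mdd <- function(ret) {
  wealth <- cumprod(1 + ret)
  max(1 - wealth / cummax(wealth))
}

# One synthetic 30-year path with Student's-t shocks, reduced to its MDD.
# rt() draws have variance df/(df-2), so rescale to the target daily sd.
sim_mdd_t <- function(n_days, mu, sigma, df) {
  shocks <- rt(n_days, df = df) * sigma / sqrt(df / (df - 2))
  mdd(mu + shocks)
}

set.seed(2)
mdds_t <- replicate(1000,                            # article uses 10,000 runs
                    sim_mdd_t(252 * 30, mu = 0.0004, sigma = 0.012, df = 5))
median(mdds_t)     # fat tails typically push this above the normal-model median
```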
The results above may not be terribly surprising if you’ve spent time studying the US stock market’s history. The real value of modeling expected MDD comes by focusing on customized portfolios that use a mix of asset classes, securities, and a specific risk-management process (such as tactical asset allocation or volatility management) to generate a targeted outcome through time. In such cases, it may not be obvious what to expect without stress testing various scenarios. This is where modeling efforts can provide genuine insight, perhaps by revealing hidden flaws or limitations in portfolio-management techniques that aren’t otherwise obvious.
Proprietary strategies, in short, harbor bigger mysteries than the widely analyzed history of, say, the S&P 500. The good news is that you can minimize some of the uncertainty around risk expectations in customized portfolios with some basic modeling techniques.
Crunching the numbers on MDD is a good first step, although it’s wise to look through the prism of other risk metrics for a deeper profile. In some cases you may decide that a given strategy requires a round of adjustment. Then again, maybe all’s well.
Deciding which conclusion applies starts by running the numbers through a quantitative lens. The analysis may not reveal anything dramatic, but you’ll never know if you don’t spend the time looking.
Could you share the R code (pseudo-code) please?
Thanks for the article. What kind of Degrees of Freedom did you use for the t-distribution?
Rezart,
DF=5. This is just a toy example so I used what some might label as the default setting.
–JP
Rama,
It’s all relatively straightforward. I’m using R and so an example of how I generated the resampled data would be as follows:
ret = daily percentage change in asset
ret.sample <- sample(ret, replace=TRUE)
max.dd <- maxDrawdown(ret.sample) # via the PerformanceAnalytics package
Put the two commands above into a function and generate x number of variations with replicate(). A similar procedure can be applied for generating random results for normal and Student's-t distributions via rnorm() and rt().
–JP