The Case For Using Random Benchmarks In Portfolio Analysis

Benchmarks are indispensable for investment analytics. The challenge is picking a relevant one. The stakes are high because the wrong benchmark can be worse than none at all. The good news is that the potential for error can be dramatically reduced by choosing a set of random benchmarks that are generated from a portfolio’s holdings.

As an example, consider a money manager with a mandate to beat the S&P 500 with an active strategy that targets stocks in the index. The standard approach is to compare results against the S&P 500. But there’s a stronger choice: generating thousands of random portfolios drawn from the S&P names.

Even when there’s a clear choice for benchmarking, comparing a strategy in the context of thousands of random but related portfolios offers a superior foundation for risk and return analytics. In the case of an active S&P strategy, the procedure is to create thousands of alternative portfolios by randomly varying the weights and rebalancing schedule.

Using randomly generated portfolios as a benchmark is an idea that’s been around for a while. Patrick Burns of Burns Statistics wrote a research primer on the topic in 2004 (“Performance Measurement via Random Portfolios”), noting that randomly generated benchmarks have advantages over the conventional approach. “The measurement of skill with random portfolios avoids some of the noise that is introduced when performance is measured relative to a benchmark,” he noted.

Another advocate of random portfolios is Ron Surz of PPCA Inc. Writing in the Spring 2007 issue of The Journal of Performance Measurement (“Accurate Benchmarking is Gone But Not Forgotten: The Imperative Need to Get Back to Basics”), he outlined the case for using Monte Carlo simulations to sidestep the limitations of traditional benchmarking and peer-group analysis.

As an example of how random portfolios can enhance the benchmarking process and provide a richer dataset for analyzing risk and return, let’s run the numbers in R on a simple 60%/40% US stock/bond portfolio. The assumption in this toy example is that rebalancing back to a 60/40 mix at the end of each calendar year will outperform a comparable 60/40 strategy that’s not rebalanced, which will serve as a conventional benchmark. (As proxies for the assets we’ll use two ETFs: SPDR S&P 500 (SPY) and iShares Core US Aggregate Bond (AGG)).
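
To make the example concrete, here's a minimal sketch of the setup in R, assuming the quantmod and PerformanceAnalytics packages and Yahoo Finance price data. It's not the exact code behind the numbers below, and details such as the data source and return calculations may differ.

```r
# A rough sketch of the 60/40 setup (not the article's exact code)
library(quantmod)             # getSymbols(), Ad()
library(PerformanceAnalytics) # Return.calculate(), Return.portfolio()

# Adjusted prices for the two ETF proxies, starting at the end of 2003
prices <- merge(Ad(getSymbols("SPY", from = "2003-12-31", auto.assign = FALSE)),
                Ad(getSymbols("AGG", from = "2003-12-31", auto.assign = FALSE)))
returns <- na.omit(Return.calculate(prices))
colnames(returns) <- c("SPY", "AGG")

w <- c(0.6, 0.4)  # target 60% stock / 40% bond mix

# Strategy: rebalance back to 60/40 at the end of each calendar year
rebal <- Return.portfolio(returns, weights = w, rebalance_on = "years")

# Benchmark: start at 60/40 and let the weights drift (no rebalancing)
bench <- Return.portfolio(returns, weights = w)
```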

History supports our hunch that rebalancing earns a premium, based on the sample period from Dec. 31, 2003, through yesterday (Mar. 28, 2017). The rebalanced portfolio earned a 6.8% annualized total return, moderately above the 6.2% for the unrebalanced benchmark.
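
Continuing the sketch above, the annualized returns can be pulled directly from the two portfolio return series (exact figures will vary with the data source and end date):

```r
# Annualized total returns for the two portfolios (sketch)
both <- merge(rebal, bench)
colnames(both) <- c("Rebalanced", "Unrebalanced")
Return.annualized(both)
```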

Adjusting for risk (volatility) shows that the benchmark has a slight edge via a higher Sharpe ratio (0.68 vs. 0.64), but the comparison is close enough to dismiss the idea that the rebalanced strategy’s higher return comes with substantially higher risk. In other words, it appears that the rebalancing strategy earned a premium with a comparable level of risk vs. the benchmark.
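
The risk-adjusted comparison follows the same pattern; the article doesn't specify a risk-free rate, so zero is assumed in this sketch:

```r
# Annualized Sharpe ratios (risk-free rate assumed to be zero here)
SharpeRatio.annualized(both, Rf = 0)

# Annualized volatility, for reference
StdDev.annualized(both)
```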

For a deeper dive into how the rebalanced strategy stacks up, let’s create 1,000 random portfolios using the historical data for the two ETFs, subject to two constraints. First, the stock/bond mix is allowed to fluctuate anywhere from 10%/90% to 90%/10%. Second, each portfolio is rebalanced 14 times over the sample period. Why 14? That’s the number of years in the sample period. Note, however, that the rebalancing dates will vary randomly.
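
Here's one way the random portfolios might be generated, continuing the earlier sketch. The seed, the weight draws, and the choice to redraw the stock/bond mix at each rebalancing date are illustrative assumptions rather than the article's exact settings:

```r
set.seed(1)                      # illustrative seed
n.port  <- 1000                  # number of random portfolios
n.rebal <- 14                    # rebalancing events over the sample period
trade.dates <- index(returns)

random.port <- lapply(seq_len(n.port), function(i) {
  # 14 randomly chosen rebalancing dates, plus the initial allocation date
  dates <- sort(c(trade.dates[1], sample(trade.dates[-1], n.rebal)))
  # Stock weight drawn between 10% and 90%; bond weight is the remainder
  w.stock <- runif(length(dates), min = 0.1, max = 0.9)
  w.xts <- xts(cbind(w.stock, 1 - w.stock), order.by = dates)
  colnames(w.xts) <- colnames(returns)
  # Rebalance to the drawn weights on the drawn dates
  Return.portfolio(returns, weights = w.xts)
})
```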

Let’s review the results in a density plot that shows the distribution of annualized returns for the random portfolios (black line in the chart below). The performances range from an annualized 3.6% to 9.4%, although most of the results are clustered between roughly 5.5% and 7.5%.
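
A chart along those lines can be sketched with base R graphics, assuming the random.port list from the previous block (the article's chart may have been drawn differently):

```r
# Annualized return for each random portfolio
ann.ret <- sapply(random.port, Return.annualized)

# Distribution of random-portfolio returns (the black line)
plot(density(ann.ret), col = "black", xlab = "Annualized return",
     main = "1,000 random SPY/AGG portfolios")
range(ann.ret)   # the low-to-high spread of annualized results
```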

The median return for the random portfolios is 6.3% (blue line), below the rebalanced 60/40 strategy’s 6.8% (red line), which ranks at the 72nd percentile of the random-portfolio returns. Note, too, that the unrebalanced portfolio’s performance (green line) is slightly below the median return of the random portfolios.
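
The median and percentile figures come straight from that vector of annualized returns; again a sketch, and the exact values will depend on the random draws:

```r
median(ann.ret)   # median random-portfolio return

# Percentile rank of the rebalanced 60/40 strategy within the random set
ecdf(ann.ret)(as.numeric(Return.annualized(rebal)))

# Reference lines for the chart: median (blue), rebalanced (red), unrebalanced (green)
abline(v = median(ann.ret), col = "blue")
abline(v = as.numeric(Return.annualized(rebal)), col = "red")
abline(v = as.numeric(Return.annualized(bench)), col = "green")
```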

Next, let’s review the Sharpe ratios. On this front, the rebalanced 60/40 strategy’s risk-adjusted performance (0.64) is below the median for the random portfolios (0.65) and below the unrebalanced strategy (0.68). But the Sharpe ratios are close enough to conclude that risk levels are roughly equivalent.
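
The same comparison for Sharpe ratios, continuing the sketch (risk-free rate again assumed to be zero):

```r
# Annualized Sharpe ratio for each random portfolio
sharpe.rand <- sapply(random.port, SharpeRatio.annualized)

median(sharpe.rand)              # median of the random portfolios
SharpeRatio.annualized(rebal)    # rebalanced 60/40 strategy
SharpeRatio.annualized(bench)    # unrebalanced benchmark
```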

It seems that the year-end rebalancing strategy is adding value. That may have been obvious from the start, but that’s not always the case. Real-world portfolios are more complicated and so there’s a stronger case for using random portfolios as benchmarks. By comparison, looking at a conventional benchmark, which is the equivalent of one historical sample, is far less robust and perhaps misleading. This may not be obvious in the 60/40 example above, but the advantages of random-portfolio benchmarking are clear for strategies targeting multiple asset classes and/or holding dozens or even hundreds of securities.

Using random portfolios as benchmarks is not only statistically superior to the standard approach, it’s also easier from an analytical perspective. Imagine that you’re running a portfolio with a dozen ETFs representing a spectrum of asset classes. What benchmark do you use? The answer isn’t obvious if you’re searching for one index. The good news is that generating thousands of random portfolios based on the actual holdings, with customized parameters that are relevant for the strategy, offers a compelling solution.

This is no trivial point. As Surz reminds us, “If the benchmark is wrong, all of the analytics are wrong.” Fortunately, there’s an easy fix: use the right benchmark, aka a carefully designed set of random portfolios.

James Picerno (post author), replying to a reader in the comments:

    ujwal,
    The code to produce the graphs is a bit messy and so I’m not prepared to post it at this time. I prefer to present “clean” code files when I go public, but this isn’t quite ready for prime time. However, you can find the basics for replicating the data in previously published R code I wrote here:

    https://gist.github.com/jpicerno1/fbc2e589023be56dde42
    https://gist.github.com/jpicerno1/af88861bcbbb80687cfb

    For perspective, here are the related articles on The Capital Spectator:

    http://www.capitalspectator.com/skewed-by-randomness-testing-arbitrary-rebalancing-dates/#more-6039
    http://www.capitalspectator.com/using-random-portfolios-to-test-asset-allocation-strategies/

    –JP
