Fooled By Randomness… One Indicator At A Time

The New York Times reminds us not to take today’s jobs report too seriously. Why? The standard glitch will likely infect the data: statistical noise. “Even when the economy is moving in a clear direction, the noise in month-to-month changes can be big enough to obscure any trend,” Neil Irwin and Kevin Quealy write on the paper’s Upshot blog. To drive home the point, the article includes a simulation of how short-term fluctuations could play havoc with our ability to interpret the data point du jour. What the article didn’t mention is that this caveat applies to every economic indicator.
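The Upshot's exercise is easy to replicate. Here's a minimal sketch of that kind of simulation, using illustrative numbers of my own choosing (a steady underlying gain of 200,000 jobs a month and sampling noise with a 75,000 standard deviation, neither of which comes from the article): even with an unambiguous trend, individual monthly prints can look like a boom or a stall purely by chance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (not from the article): the economy adds a steady
# 200k jobs per month, but each monthly estimate carries sampling noise with
# a standard deviation of 75k.
true_monthly_gain = 200_000
sampling_noise_sd = 75_000
months = 12

reported = true_monthly_gain + rng.normal(0, sampling_noise_sd, size=months)

for m, value in enumerate(reported, start=1):
    print(f"month {m:2d}: reported change = {value:>10,.0f}")

# The spread between the best and worst "reported" months dwarfs the true,
# constant trend, which is exactly how noise obscures the signal.
print(f"min/max reported: {reported.min():,.0f} / {reported.max():,.0f}")
```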

The degree of short-term noise varies quite a lot from one time series to another; there's far more of it in weekly jobless claims than in industrial production, for instance. But rest assured that looking at how the numbers stack up from month to month (or week to week, or quarter to quarter) is sure to disappoint us in our quest for clarity. There's the added complication of our innate biases. "Human beings, unfortunately, are bad at perceiving randomness," write Irwin and Quealy, a salient point that's documented across countless studies through the decades.
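One rough way to see the difference in noise levels is to compare the volatility of period-over-period changes for the two series. The sketch below assumes the pandas-datareader package and the FRED series IDs ICSA (weekly initial jobless claims) and INDPRO (industrial production); neither data source is specified in the post, so treat this as one illustrative approach rather than the measurement used here.

```python
import datetime as dt
from pandas_datareader import data as pdr

start, end = dt.datetime(2004, 1, 1), dt.datetime(2014, 4, 30)

# Weekly initial jobless claims and monthly industrial production from FRED.
claims = pdr.DataReader("ICSA", "fred", start, end)["ICSA"]
indpro = pdr.DataReader("INDPRO", "fred", start, end)["INDPRO"]

# The standard deviation of period-over-period percent changes is a crude
# gauge of how noisy each series looks at its native frequency.
print(f"weekly claims, std of w/w pct change: {claims.pct_change().std() * 100:.2f}%")
print(f"industrial production, std of m/m pct change: {indpro.pct_change().std() * 100:.2f}%")
```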

The solution, such as it is, begins with two basic adjustments in the art/science of analyzing the state of the business cycle. First, favor year-over-year comparisons. This isn't perfect either, but annual trends take out a lot of the noise that muddies, say, monthly changes. We can also minimize the noise by analyzing a carefully chosen set of economic and financial indicators and looking at the aggregated data. In fact, this is the idea behind the economic profiles that I publish each month, profiles that are based on a model that I discuss in some detail in Nowcasting The Business Cycle.
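To make both adjustments concrete, here's a minimal sketch in Python. It is not the model from Nowcasting The Business Cycle; it simply contrasts month-over-month with year-over-year percent changes and averages the standardized annual changes of several hypothetical indicators into a composite. The indicator names and toy data are invented for illustration.

```python
import numpy as np
import pandas as pd

def yoy_vs_mom(series: pd.Series) -> pd.DataFrame:
    """Return month-over-month and year-over-year percent changes side by side."""
    return pd.DataFrame({
        "mom": series.pct_change(1),    # noisy short-term comparison
        "yoy": series.pct_change(12),   # smoother annual comparison
    })

def composite_trend(indicators: pd.DataFrame) -> pd.Series:
    """Average the standardized year-over-year changes of several indicators.

    Z-scoring puts each series on a comparable scale before aggregating,
    so no single volatile series dominates the composite.
    """
    yoy = indicators.pct_change(12)
    z = (yoy - yoy.mean()) / yoy.std()
    return z.mean(axis=1)

# Toy monthly data for three hypothetical indicators.
idx = pd.date_range("2008-01-31", periods=72, freq="M")
rng = np.random.default_rng(0)
indicators = pd.DataFrame(
    {name: 100 * np.cumprod(1 + rng.normal(0.002, 0.01, len(idx)))
     for name in ["payrolls", "output", "income"]},
    index=idx,
)

print(yoy_vs_mom(indicators["payrolls"]).tail())
print(composite_trend(indicators).tail())
```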

The drawback is that relatively reliable warning signs of macro trouble will arrive with a lag. If we're looking for convincing evidence that there's a new recession on our doorstep, the danger isn't going to be obvious and compelling until after a slump has started. Indeed, analysis that delivers both attributes, reliability and timeliness, is devilishly difficult. It's relatively easy to obtain one or the other, but finding both at once, while there's still time to react in a defensive manner, is something else entirely.

It’s tempting to think that we can cut corners here and use the latest economic number to tell us all that we need to know. But history is quite harsh with such notions. High-quality analytics on the matter of recession risk can’t be rushed. Instead, we should focus on minimizing the lag between the publication of new data and the resulting signal while maximizing the signal-to-noise ratio. The foundation for this type of analysis is captured in the Economic Trend & Momentum indices that are updated on these pages (here’s last month’s edition). But these metrics are only the beginning of the mission to improve the analytics by seeing recession risk sooner without materially reducing the quality of the signal. It’s a major research project, but it’s critical for making progress in the all-important job of reducing the economy’s capacity for surprising us on the downside.
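The post doesn't spell out how the Economic Trend & Momentum indices are built, but the general aggregation idea can be illustrated with a simple diffusion-style index: the share of indicators whose trailing change is positive. The sketch below uses invented toy data and should be read as a generic example, not the indices themselves.

```python
import numpy as np
import pandas as pd

def diffusion_index(indicators: pd.DataFrame, lookback: int = 12) -> pd.Series:
    """Share of indicators whose change over the trailing `lookback` periods is positive."""
    rising = indicators.pct_change(lookback) > 0
    return rising.mean(axis=1)

# Toy monthly data for five hypothetical indicators.
idx = pd.date_range("2007-01-31", periods=96, freq="M")
rng = np.random.default_rng(1)
toy = pd.DataFrame(
    {f"indicator_{i}": 100 * np.cumprod(1 + rng.normal(0.001, 0.02, len(idx)))
     for i in range(5)},
    index=idx,
)

# A reading near 1.0 signals broad-based growth; a sustained slide toward zero
# is the kind of deterioration a recession-risk gauge is designed to flag.
print(diffusion_index(toy).tail())
```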

Based on my analysis, it’s reasonable to assume that we’ll see strong evidence of a new recession about two to three months after it’s started (see Chapter 12 in Nowcasting The Business Cycle for the analysis). That’s assuming that we’re looking at a well-designed yardstick.

Most of the lag is due to the fact that it takes time to calculate the numbers. Two to three months after the fact doesn’t sound all that valuable, but it passes for an early warning compared with when most folks recognize that the writing’s on the wall. There are a number of techniques that hold out the possibility of squeezing the lag a bit more, but it’s not easy. One idea, sketched below, is to generate near-term forecasts for all the indicators in the model and aggregate the predictions in a way that minimizes the error. The record with this approach is encouraging, as I discuss each month in the economic profile (and several times throughout each month on a consulting basis). But you have to be looking, frequently, and across a diversified set of indicators.
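Here's one simple version of that idea, assuming a naive exponential-smoothing rule for each indicator's next year-over-year change and a plain average across the forecasts; the actual forecasting procedure used in the monthly profiles isn't specified here, so this is only a sketch of the aggregation logic with invented toy data.

```python
import numpy as np
import pandas as pd

def one_step_forecast(series: pd.Series, alpha: float = 0.3) -> float:
    """Naive one-step-ahead forecast via simple exponential smoothing."""
    level = series.iloc[0]
    for value in series.iloc[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

def nowcast_composite(indicators: pd.DataFrame) -> float:
    """Forecast each indicator's next year-over-year change, then average.

    Averaging across a diversified set of forecasts tends to wash out the
    idiosyncratic error in any single prediction.
    """
    yoy = indicators.pct_change(12).dropna()
    forecasts = {col: one_step_forecast(yoy[col]) for col in yoy.columns}
    return float(np.mean(list(forecasts.values())))

# Toy monthly data for five hypothetical indicators.
idx = pd.date_range("2007-01-31", periods=96, freq="M")
rng = np.random.default_rng(2)
toy = pd.DataFrame(
    {f"indicator_{i}": 100 * np.cumprod(1 + rng.normal(0.002, 0.015, len(idx)))
     for i in range(5)},
    index=idx,
)

print(f"aggregated one-step-ahead yoy estimate: {nowcast_composite(toy):.3%}")
```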

The crowd, of course, loves a good story, and so the case for building a robust model for evaluating the macro trend will fall on deaf ears for the most part. But that doesn’t reduce the considerable risk that accompanies the habit of cherry-picking the numbers. “It’s worth remembering that no one report can neatly summarize the health of a $17 trillion economy of 300 million people, certainly not in something close to real time,” Irwin and Quealy conclude. They’re preaching to the choir as far as The Capital Spectator is concerned, but no one should assume that their advice is widely accepted in the wider world of economic analysis.
