Credit Spreads, Daily Business Cycle, and Corporate Bond Returns Predictability
Alexey Ivashchenko (University of Lausanne)
May 4, 2017
The part of the credit spread that is not explained by corporate credit risk forecasts future economic activity. I show that the link with aggregate business risk and bond liquidity risk explains this finding. Once I project spreads on these two risk factors, which are readily measurable at daily frequency, in addition to corporate credit risk, the forecasting power of the residual spread falls substantially for some macro variables and disappears entirely for others. The residual, however, turns out to forecast corporate bond market returns out of sample. An investment strategy based on these forecasts delivers risk-adjusted returns 50% higher than the corporate bond market.
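For readers who want the mechanics, a minimal sketch of the two-step idea (project spreads on the risk proxies, then use the lagged residual as a return predictor) might look like the following. The column names and the choice of proxies are placeholders, not the paper's actual data or specification.

```python
# Hypothetical sketch: extract the "residual" credit spread by projecting
# observed spreads on credit-, business-, and liquidity-risk proxies,
# then use the lagged residual to forecast next-period bond-market returns.
# All column names are placeholders, not the paper's actual variables.
import numpy as np
import pandas as pd

def residual_spread(df: pd.DataFrame) -> np.ndarray:
    """OLS residual of credit spreads on the three (daily) risk proxies."""
    X = np.column_stack([np.ones(len(df)),
                         df["credit_risk"],      # e.g. a default-risk proxy
                         df["business_risk"],    # aggregate business-risk proxy
                         df["liquidity_risk"]])  # bond-market liquidity proxy
    beta, *_ = np.linalg.lstsq(X, df["spread"].values, rcond=None)
    return df["spread"].values - X @ beta

def forecast_return(df: pd.DataFrame) -> float:
    """Predict the next-period bond return from the lagged residual spread."""
    resid = residual_spread(df)
    y = df["bond_return"].shift(-1).values[:-1]          # next-period return
    x = np.column_stack([np.ones(len(resid) - 1), resid[:-1]])
    gamma, *_ = np.linalg.lstsq(x, y, rcond=None)
    return gamma[0] + gamma[1] * resid[-1]               # out-of-sample forecast
```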
Understanding Survey Based Inflation Expectations
Travis J. Berge (Board of Governors of the Federal Reserve System)
April 2017
Survey-based measures of inflation expectations are not informationally efficient, yet they carry important information about future inflation. This paper explores the economic significance of informational inefficiencies of survey expectations. A model selection algorithm is applied to the inflation expectations of households and professionals using a large panel of macroeconomic data. The expectations of professionals are best described by different indicators than the expectations of households. A forecast experiment finds that it is difficult to exploit informational inefficiencies to improve inflation forecasts, suggesting that the economic cost of the surveys’ deviation from rationality is not large.
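As an illustration of what a model-selection exercise over a large macro panel can look like, here is a hedged sketch that uses a lasso as a stand-in; the paper's actual algorithm and data are not reproduced here, and the inputs are hypothetical.

```python
# Illustrative only: a lasso stands in for the paper's model selection
# algorithm, picking which macro indicators explain survey expectation errors.
# "macro_panel" (a DataFrame of indicators) and "expectation_error" are
# hypothetical inputs, not the paper's data.
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def select_indicators(macro_panel, expectation_error):
    """Return the macro series that receive nonzero lasso coefficients."""
    X = StandardScaler().fit_transform(macro_panel)
    model = LassoCV(cv=5).fit(X, expectation_error)
    return [name for name, coef in zip(macro_panel.columns, model.coef_)
            if abs(coef) > 1e-8]
```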
Rethinking Probabilistic Prediction in the Wake of the 2016 U.S. Presidential Election
Harry Crane (Rutgers) and Ryan Martin (N. Carolina State University)
April 15, 2017
To many statisticians and citizens, the outcome of the most recent U.S. presidential election represents a failure of data-driven methods on the grandest scale. This impression has led to much debate and discussion about how the election predictions went awry — Were the polls inaccurate? Were the models wrong? Did we misinterpret the probabilities? — and how they went right — Perhaps the analyses were correct even though the predictions were wrong, that’s just the nature of probabilistic forecasting. With this in mind, we analyze the election outcome with respect to a core set of effectiveness principles. Regardless of whether and how the election predictions were right or wrong, we argue that they were ineffective in conveying the extent to which the data was informative of the outcome and the level of uncertainty in making these assessments. Among other things, our analysis sheds light on the shortcomings of the classical interpretations of probability and its communication to consumers in the form of predictions. We present here an alternative approach, based on a notion of validity, which offers two immediate insights for predictive inference. First, the predictions are more conservative, arguably more realistic, and come with certain guarantees on the probability of an erroneous prediction. Second, our approach easily and naturally reflects the (possibly substantial) uncertainty about the model by outputting plausibilities instead of probabilities. Had these simple steps been taken by the popular prediction outlets, the election outcome may not have been so shocking.
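To make the distinction between probabilities and plausibilities concrete, a generic Dempster-Shafer-style relation is shown below; the paper's validity-based construction differs in its details, so this is only an illustration of how plausibilities relax additivity.

```latex
% Generic illustration (not the paper's exact construction): a plausibility
% function pl relaxes the additivity that a probability P must satisfy.
\[
  P(A) + P(A^{c}) = 1,
  \qquad\text{whereas}\qquad
  \mathrm{pl}(A) + \mathrm{pl}(A^{c}) \ge 1,
\]
\[
  \mathrm{bel}(A) \;=\; 1 - \mathrm{pl}(A^{c}) \;\le\; \mathrm{pl}(A),
\]
% so an outcome and its complement can both be reported as plausible, which is
% one way substantial model uncertainty can be conveyed without forcing a
% sharp probabilistic prediction.
```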
High Frequency vs. Daily Resolution: The Economic Value of Forecasting Volatility Models
Francesca Lilla (University of Bologna)
April 9, 2017
Volatility forecasting models typically rely on either daily or high-frequency (HF) data, and the choice between the two is not obvious. In particular, HF data allow volatility to be treated as observable, but they suffer from several limitations. HF data are subject to microstructure problems, such as the discreteness of prices, the properties of the trading mechanism, and the existence of the bid-ask spread. Moreover, these data are not always available and, even when they are, the asset’s liquidity may not be sufficient to allow for frequent transactions. This paper considers different variants of these two families of volatility forecasting models, comparing their performance in terms of Value at Risk (VaR) under the assumptions of jumps in prices and leverage effects in volatility. Findings suggest that daily-data models are preferred to HF-data models at the 5% and 1% VaR levels. Specifically, independently of the data frequency, allowing for jumps in prices (or for fat tails) and leverage effects translates into more accurate VaR measures.
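A rough sketch of the kind of VaR comparison described here, assuming a one-day-ahead volatility forecast is already in hand from either a daily model or an HF realized-volatility model; the forecast series and return data are hypothetical.

```python
# Hedged sketch: turn a one-day-ahead volatility forecast (from a daily
# GARCH-type model or an HF realized-volatility model) into a VaR and
# backtest it by its violation rate. Inputs are hypothetical arrays.
import numpy as np
from scipy.stats import norm

def value_at_risk(sigma, alpha=0.05, mu=0.0):
    """One-day-ahead VaR (reported as a positive loss) under normality."""
    return -(mu + sigma * norm.ppf(alpha))

def violation_rate(returns, var_series):
    """Share of days on which the realized loss exceeded the VaR forecast."""
    return np.mean(returns < -var_series)

# At the 5% level a well-calibrated model should be violated on roughly 5%
# of days, e.g.:
#   violation_rate(returns, value_at_risk(sigma_daily))   # daily-data model
#   violation_rate(returns, value_at_risk(sigma_hf))      # HF-data model
```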
Forecasting the Price of Crude Oil with Multiple Predictors
Hüseyin Kaya (İstanbul Medeniyet University)
June 2016
This paper investigates the predictive content of a variety of variables for the price of crude oil, including oil futures prices, exchange rates of particular countries, and stock-market indexes. Out-of-sample forecasting results suggest that oil futures prices have marginal predictive power for the price of oil at a 1-month forecast horizon. However, they generally lose their forecasting power at longer horizons. The results also suggest that exchange rates help predict oil prices at longer horizons. The paper also considers forecast averaging and variable selection methods, and finds that forecast averaging significantly improves forecasting performance.
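A minimal sketch of equal-weight forecast averaging in the spirit of the exercise, with one predictive regression per predictor; the inputs are hypothetical numpy arrays, not the paper's data.

```python
# Hedged sketch: fit one simple predictive regression per predictor, then
# average the resulting h-step forecasts with equal weights.
# y and each x are hypothetical numpy arrays of equal length.
import numpy as np

def single_predictor_forecast(y, x, h=1):
    """h-step forecast of y from one predictor via a predictive regression."""
    X = np.column_stack([np.ones(len(x) - h), x[:-h]])
    beta, *_ = np.linalg.lstsq(X, y[h:], rcond=None)
    return beta[0] + beta[1] * x[-1]

def averaged_forecast(y, predictors, h=1):
    """Equal-weight average of the individual single-predictor forecasts."""
    forecasts = [single_predictor_forecast(y, x, h) for x in predictors]
    return np.mean(forecasts)
```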
Conditionally Optimal Weights and Forward-Looking Approaches to Combining Forecasts
Christopher G. Gibbs (UNSW Business School) and A. Vasnev (U. of Sydney)
February 17, 2017
In applied forecasting, there is a trade-off between in-sample fit and out-of-sample forecast accuracy. Parsimonious model specifications typically outperform richer model specifications. Consequently, there is often predictable information in forecast errors that is difficult to exploit. However, we show how this predictable information can be exploited in forecast combinations. In this case, optimal combination weights should minimize conditional mean squared error, or a conditional loss function, rather than the unconditional variance as in the commonly used framework of Bates and Granger (1969). We prove that our conditionally optimal weights lead to better forecast performance. The conditionally optimal weights support other forward-looking approaches to combining forecasts, where the forecast weights depend on expected model performance. We show that forward-looking approaches can robustly outperform the random walk benchmark and many of the commonly used forecast combination strategies, including equal weights, in real-time out-of-sample inflation forecasting exercises.
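For two unbiased forecasts, the contrast with Bates and Granger (1969) can be written down directly; the conditional version below is schematic, simply replacing unconditional moments with moments conditional on the information available at the forecast date.

```latex
% Combine two unbiased forecasts f_{1,t} and f_{2,t} of y_t as
%   \hat{y}_t = w f_{1,t} + (1-w) f_{2,t},
% with forecast errors e_{i,t}. The Bates--Granger weight minimizes the
% unconditional error variance:
\[
  w^{*} \;=\; \frac{\sigma_{2}^{2} - \sigma_{12}}
                   {\sigma_{1}^{2} + \sigma_{2}^{2} - 2\sigma_{12}},
  \qquad
  \sigma_{i}^{2} = \operatorname{Var}(e_{i,t}),\;
  \sigma_{12} = \operatorname{Cov}(e_{1,t}, e_{2,t}).
\]
% Conditionally optimal weight (schematic): condition on the information
% set \mathcal{F}_t, so the weight moves with expected model performance:
\[
  w^{*}_{t} \;=\; \frac{\operatorname{Var}(e_{2,t}\mid\mathcal{F}_t)
                        - \operatorname{Cov}(e_{1,t},e_{2,t}\mid\mathcal{F}_t)}
                       {\operatorname{Var}(e_{1,t}\mid\mathcal{F}_t)
                        + \operatorname{Var}(e_{2,t}\mid\mathcal{F}_t)
                        - 2\operatorname{Cov}(e_{1,t},e_{2,t}\mid\mathcal{F}_t)}.
\]
```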