Somewhat to my surprise, last month I got a great response to my post, "My Must-Read List" (HT's to Mark Thoma & Tyler Cowen). This past week I learned a lot by reading some terrific new papers on a variety of econometrics topics. Here they are, with some commentary, and in no particular order:
Specification Tests with Weak and Invalid Instruments (Firmin Sabro Doko Tchatoka)
Abstract:
"We investigate the size of the Durbin-Wu-Hausman tests for exogeneity when instrumental variables violate the strict exogeneity assumption. We show that these tests are severely size distorted even for a small correlation between the structural error and instruments. We then propose a bootstrap procedure for correcting their size. The proposed bootstrap procedure does not require identification assumptions and is also valid even for moderate correlations between the structural error and instruments, so it can be described as robust to both weak and invalid instruments."Why I Liked it:
The D-W-H tests for exogeneity are standard fare when we want to know whether we should use OLS or I.V. estimation. They're asymptotic tests, so their performance in small samples is always an issue. Add to that the possibility that we have "weak" instruments, and there's a lot to worry about. This paper gives us some practical guidance on how to proceed.
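For anyone who wants to see the mechanics, here's a minimal sketch of the regression-based (control function) form of the Wu-Hausman test, using simulated data. This is the textbook version whose size the paper shows can be badly distorted when instruments are weak or invalid - it's not the authors' bootstrap correction - and all of the variable names are just illustrative:

```python
# A minimal sketch of the regression-based (control-function) version of the
# Wu-Hausman exogeneity test, on simulated data. Names are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200

z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor (corr. with u)
y = 1.0 + 2.0 * x + u                        # structural equation

# Stage 1: regress the suspect regressor on the instrument(s); keep residuals.
stage1 = sm.OLS(x, sm.add_constant(z)).fit()
v_hat = stage1.resid

# Stage 2: add the first-stage residuals to the structural equation.
# A significant coefficient on v_hat is evidence against exogeneity of x,
# i.e., OLS is inconsistent and I.V. estimation is called for.
X_aug = sm.add_constant(np.column_stack([x, v_hat]))
stage2 = sm.OLS(y, X_aug).fit()
print(f"t-stat on v_hat: {stage2.tvalues[2]:.2f}, p-value: {stage2.pvalues[2]:.4f}")
```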
Continuous-Time Linear Models (John H. Cochrane)
Abstract:
"I translate familiar concepts of discrete-time time-series to contnuous-time equivalent. I cover lag operators, ARMA models, the relation between levels and differences, integration and cointegration, and the Hansen-Sargent prediction formulas."Why I Liked it:
I was educated in New Zealand, and continuous-time econometrics was developed largely by New Zealanders - Bill Phillips (of Phillips curve fame), Rex Bergstrom, Peter Phillips, and Cliff Wymer. It's nice to see that this important, though technically difficult, research is still alive and well!
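The classic bridge between the two worlds is the Ornstein-Uhlenbeck process, the continuous-time counterpart of the AR(1) model. As an illustration of the sort of translation the paper deals with (this is a standard result, not taken from the paper), an OU process sampled at interval delta is exactly an AR(1) with coefficient exp(-kappa*delta):

```python
# The OU process dx = -kappa*x dt + sigma dW, sampled every delta time units,
# is exactly an AR(1) with coefficient exp(-kappa*delta) and innovation
# variance sigma^2 * (1 - exp(-2*kappa*delta)) / (2*kappa).
import numpy as np

rng = np.random.default_rng(0)
kappa, sigma, delta, n = 0.5, 1.0, 0.1, 50_000

phi = np.exp(-kappa * delta)                          # implied AR(1) coefficient
eps_sd = sigma * np.sqrt((1 - phi**2) / (2 * kappa))  # exact innovation s.d.

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps_sd * rng.normal()

# The OLS estimate of the AR(1) coefficient recovers exp(-kappa*delta).
phi_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
print(f"implied phi = {phi:.4f}, estimated phi = {phi_hat:.4f}")
```

In this simple case, estimating the continuous-time mean-reversion rate from discretely sampled data reduces to an AR(1) regression plus the transformation kappa = -log(phi_hat)/delta.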
Estimation Adjusted VaR (G. Gourieroux & J. M. Zakoian)
Abstract:
"Standard risk measures, such as the Value-at-Risk (VaR), or the Expected Shortfall, have to be estimated and their estimated counterparts are subject to estimation uncertainty. Replacing, in the theoretical formulas, the true parameter value by an estimator based on n observations of the Profit and Loss variable, induces an asymptotic bias of order 1/n in the coverage probabilities. This paper shows how to correct for this bias by introducing a new estimator of the VaR, called Estimation adjusted VaR (EVaR). This adjustment allows for a joint treatment of theoretical and estimation risks, taking into account for their possible dependence. The estimator is derived for a general parametric dynamic model and is particularized to stochastic drift and volatility models. The finite sample properties of the EVaR estimator are studied by simulation and an empirical study of the S&P Index is proposed."
Why I Liked it:
This work ties in with the work on bias-adjustment of the MLEs of the parameters in the Generalized Pareto distribution that I've been doing with Helen Feng and Ryan Godwin. Bias-correcting these parameter estimates is an indirect way of bias-adjusting estimates of VaR and ES, as we illustrate with the empirical examples in our paper.
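To make the issue concrete, here's a small sketch of the underlying point: a plug-in VaR estimate inherits estimation error, and its bias can be reduced. This uses a simple parametric-bootstrap bias correction under an assumed Gaussian P&L model - it is not the paper's EVaR estimator:

```python
# Not the paper's EVaR estimator -- just a sketch of the underlying idea:
# a plug-in VaR is subject to estimation bias, which a parametric bootstrap
# can reduce, here under an assumed Gaussian P&L model with simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, alpha = 250, 0.99
pnl = rng.normal(loc=0.0, scale=1.0, size=n)   # one year of daily P&L (simulated)

def plug_in_var(x, alpha):
    """Gaussian plug-in VaR: replace (mu, sigma) by their sample estimates."""
    return -(x.mean() + x.std(ddof=1) * stats.norm.ppf(1 - alpha))

var_hat = plug_in_var(pnl, alpha)

# Parametric bootstrap: resample from the fitted model, measure the average
# bias of the plug-in estimator, and subtract it off.
boot = np.array([
    plug_in_var(rng.normal(pnl.mean(), pnl.std(ddof=1), size=n), alpha)
    for _ in range(2000)
])
var_adj = 2 * var_hat - boot.mean()   # var_hat - estimated bias
print(f"plug-in VaR: {var_hat:.3f}, bias-adjusted VaR: {var_adj:.3f}")
```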
A Simple Method to Visualize Results in Nonlinear Regression Models (Daniel J. Henderson, Subal C. Kumbhakar, & Christopher S. Parmeter)
Abstract:
"A simple graphical approach to presenting results from nonlinear regression models is described. In the face of multiple covariates, 'partial mean' plots may be unattractive. The approach here is portable to a variety of settings and can be tailored to the specific application at hand. A simple four variable nonparametric regression example is provided to illustrate the technique."(This paper is "forthcoming" in Economics Letters.)
Why I Liked it:
Data visualization is an important topic, and this paper provides a neat new type of visualization that helps us to understand, and interpret, the results associated with nonlinear regression models.
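As a point of reference, here's a sketch of the familiar "partial mean" plot that the paper takes as its starting point (the authors propose an alternative for the cases where these become unattractive). The data, bandwidth, and kernel are all just illustrative:

```python
# A partial-mean plot for a four-covariate nonparametric regression: plot the
# fitted response over one covariate with the others held at their medians.
# Simulated data; the bandwidth and Gaussian product kernel are illustrative.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 500
X = rng.uniform(-2, 2, size=(n, 4))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

def kernel_fit(x0, X, y, h=0.5):
    """Nadaraya-Watson estimate of E[y | X = x0] with a Gaussian product kernel."""
    w = np.exp(-0.5 * np.sum(((X - x0) / h) ** 2, axis=1))
    return w @ y / w.sum()

# Partial-mean curve for the first covariate, others fixed at their medians.
grid = np.linspace(-2, 2, 50)
base = np.median(X, axis=0)
curve = [kernel_fit(np.r_[g, base[1:]], X, y) for g in grid]

plt.plot(grid, curve)
plt.xlabel("x1 (other covariates at their medians)")
plt.ylabel("estimated partial mean")
plt.show()
```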
Evaluating Macroeconomic Forecasts: A Concise Review of Some Recent Developments (Philip Hans Franses & Michael McAleer)
Abstract:
"Macroeconomic forecasts are frequently produced, widely published, intensively discussed and comprehensively used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are typically based on econometric model forecasts jointly with human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model and intuition; and (iii) the two forecasts are generated from two distinct (but unknown) combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the (econometric) Staff of the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth. It is shown that the FOMC does not forecast significantly better than the Staff, and that the intuition of the FOMC does not add significantly in forecasting the actual values of the economic fundamentals. This would seem to belie the purported expertise of the FOMC."Why I Liked it:
Notwithstanding the very long abstract (!) - this paper takes explicit account of the fact that none of the forecasts produced by the modellers in central banks, treasuries, and the like are really "pure forecasts" at all. The forecasts from the macroeconometric models that they use have always been "massaged" to some degree. A simple example is the "fine-tuning" that they undertake when they adjust the estimated intercepts in regression equations to bring the within-sample forecasts back to the actual data values in the last period before the true ex ante forecasting begins. I'm not saying this shouldn't be done, just that it has implications for forecast properties, and this is ignored all too often.
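For concreteness, here's a minimal sketch of that intercept adjustment, with simulated data and illustrative names: the last in-sample residual is added back to the model's forecast, so the forecast path "starts" from the latest observed value:

```python
# A minimal sketch of intercept "fine-tuning": shift an equation's forecast by
# the last in-sample residual, so the forecast path passes through the latest
# observed data point. Simulated data; names illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 80
x = rng.normal(size=n)
y = 1.0 + 0.6 * x + rng.normal(scale=0.5, size=n)

model = sm.OLS(y, sm.add_constant(x)).fit()

# Residual at the last in-sample observation:
last_resid = model.resid[-1]

# "Fine-tuned" forecast for a new x: the model forecast plus the last residual,
# which is equivalent to adjusting the estimated intercept.
x_new = 0.5
raw_forecast = model.params[0] + model.params[1] * x_new
tuned_forecast = raw_forecast + last_resid
print(f"model forecast: {raw_forecast:.3f}, intercept-adjusted: {tuned_forecast:.3f}")
```

The adjusted forecast is no longer the model's conditional mean, which is exactly why its properties (bias, in particular) need separate treatment when we evaluate it.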
On Confidence Intervals for Autoregressive Unit Roots and Predictive Regression (Peter C. B. Phillips)
Abstract:
"A prominent use of local to unity limit theory in applied work is the construction of confidence intervals for autogressive roots through inversion of the ADF t statistic associated with a unit root test, as suggested in Stock (1991). Such confidence intervals are valid when the true model has an autoregressive root that is local to unity (ρ = 1 + c/n) but are invalid at the limits of the domain of definition of the localizing coefficient c because of a failure in tightness and the escape of probability mass. Consideration of the boundary case shows that these confidence intervals are invalid for stationary autoregression where they manifest locational bias and width distortion. In particular, the coverage probability of these intervals tends to zero as c → -∞ and the width of the intervals exceeds the width of intervals constructed in the usual way under stationarity. Some implications of these results for predictive regression tests are explored. It is shown that when the regressor has autoregressive coefficient |ρ| < 1 and the sample size n → ∞, the Campbell and Yogo (2006) confidence intervals for the regression coefficient have zero coverage probability asymptotically and their predictive test statistic Q erroneously indicates predictability with probability approaching unity when the null of no predictability holds. These results have obvious implications for empirical practice."
Why I Liked it:
We all know that we have to be very careful when our time-series data are non-stationary. Here's another example of this. Moreover, we're shown that a recent suggestion by Campbell and Yogo has to be treated with caution. Time-series modellers need to keep on top of this.
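Here's a quick illustration (a Monte Carlo of my own devising, not Phillips's construction) of why standard intervals can't be trusted as the root approaches unity: the coverage of the usual OLS interval for the AR(1) coefficient deteriorates badly as ρ → 1:

```python
# Monte Carlo coverage of the standard OLS interval phi_hat +/- 1.96*se for
# the AR(1) coefficient. Coverage falls well below the nominal 95% as the
# root approaches unity, where the usual asymptotics break down.
import numpy as np

rng = np.random.default_rng(4)

def coverage(phi, n=100, reps=2000):
    hits = 0
    for _ in range(reps):
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()
        den = x[:-1] @ x[:-1]
        phi_hat = (x[:-1] @ x[1:]) / den
        resid = x[1:] - phi_hat * x[:-1]
        se = np.sqrt((resid @ resid) / (n - 2) / den)
        hits += abs(phi - phi_hat) <= 1.96 * se
    return hits / reps

for phi in (0.5, 0.9, 0.99):
    print(f"phi = {phi}: coverage of nominal 95% interval = {coverage(phi):.3f}")
```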
Stein-Rule Estimation and Generalized Shrinkage Methods for Forecasting Using Many Predictors (Eric Hillebrand & Tae-Hwy Lee)
Abstract:
"We examine the Stein-rule shrinkage estimator for possible improvements in estimation and forecasting when there are many predictors in a linear time series model. We consider the Stein-rule estimator of Hill and Judge (1987) that shrinks the unrestricted unbiased OLS estimator towards a restricted biased principal component (PC) estimator. Since the Stein-rule estimator combines the OLS and PC estimators, it is a model-averaging estimator and produces a combined forecast. The conditions under which the improvement can be achieved depend on several unknown parameters that determine the degree of the Stein-rule shrinkage. We conduct Monte Carlo simulations to examine these parameter regions. The overall picture that emerges is that the Stein-rule shrinkage estimator can dominate both OLS and principal components estimators within an intermediate range of the signal-to-noise ratio. If the signal-to-noise ratio is low, the PC estimator is superior. If the signal-to-noise ratio is high, the OLS estimator is superior. In out-of-sample forecasting with AR(1) predictors, the Stein-rule shrinkage estimator can dominate both OLS and PC estimators when the predictors exhibit low persistence."Why I Liked it:
I'm very much into "model averaging" papers at the moment. This one falls into that category, and also intersects to some degree with my earlier work on Bayesian econometrics, shrinkage estimators, and pre-testing. We're going to see a lot more written on model averaging.
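To fix ideas, here's a sketch of the general recipe (not Hill and Judge's exact estimator): shrink the OLS estimator towards a principal-components estimator, with a Stein-type weight based on how little the restriction inflates the residual sum of squares:

```python
# A sketch of a Stein-type combination of OLS and principal-components (PC)
# estimators -- a simplified stand-in for the Hill-Judge estimator, not a
# faithful implementation of it. Simulated data; names illustrative.
import numpy as np

rng = np.random.default_rng(5)
n, k, r = 200, 10, 2            # observations, predictors, PC components kept
X = rng.normal(size=(n, k))
beta = np.zeros(k); beta[:2] = 1.0
y = X @ beta + rng.normal(size=n)

# Unrestricted OLS estimator.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Restricted PC estimator: regress on the leading r principal components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
g, *_ = np.linalg.lstsq(U[:, :r] * s[:r], y, rcond=None)
b_pc = Vt[:r].T @ g

# Stein-type combination weight, driven by the cost of the PC restriction.
rss_ols = np.sum((y - X @ b_ols) ** 2)
rss_pc = np.sum((y - X @ b_pc) ** 2)
s2 = rss_ols / (n - k)
a = (k - r - 2) * s2 / max(rss_pc - rss_ols, 1e-12)   # shrinkage intensity
w = min(1.0, a)                                        # positive-part rule
b_stein = w * b_pc + (1 - w) * b_ols
print("combination weight on the PC estimator:", round(w, 3))
```

Viewed this way, the Stein-rule estimator really is a model-averaging device: the data decide how much weight the restricted (PC) model gets relative to the unrestricted one.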
© 2012, David E. Giles
Dave, thank you for taking the time to alert us to your valuable picks. Always very helpful.