
Monday, July 1, 2019

July Reading

This month my reading list is a bit different from the usual one. I've taken a look back at past issues of Econometrica and Journal of Econometrics, and selected some important and interesting papers that happened to be published in July issues of those journals.

Here's what I came up with for you:
  • Aigner, D., C. A. K. Lovell, & P. Schmidt, 1977. Formulation and estimation of stochastic frontier production function models. Journal of Econometrics, 6, 21-37.
  • Chow, G. C., 1960. Tests of equality between sets of coefficients in two linear regressions. Econometrica, 28, 591-605.
  • Davidson, R. & J. G. MacKinnon, 1984. Convenient specification tests for logit and probit models. Journal of Econometrics, 25, 241-262.
  • Dickey, D. A. & W. A. Fuller, 1981. Likelihood ratio statistics for autoregressive time series with a unit root. Econometrica, 49, 1057-1072.
  • Granger, C. W. J. & P. Newbold, 1974. Spurious regressions in econometrics. Journal of Econometrics, 2, 111-120.
  • Sargan, J. D., 1961. The maximum likelihood estimation of economic relationships with autoregressive residuals. Econometrica, 29, 414-426. 
© 2019, David E. Giles

Wednesday, May 1, 2019

May Reading List

Here's a selection of suggested reading for this month:
  • Athey, S. & G. W. Imbens, 2019. Machine learning methods economists should know about. Mimeo.
  • Bhagwat, P. & E. Marchand, 2019. On a proper Bayes but inadmissible estimator. American Statistician, online.
  • Canals, C. & A. Canals, 2019. When is n large enough? Looking for the right sample size to estimate proportions. Journal of Statistical Computation and Simulation, 89, 1887-1898.
  • Cavaliere, G. & A. Rahbek, 2019. A primer on bootstrap testing of hypotheses in time series models: With an application to double autoregressive models. Discussion Paper 19-03, Department of Economics, University of Copenhagen.
  • Chudik, A. & G. Georgiadis, 2019. Estimation of impulse response functions when shocks are observed at a higher frequency than outcome variables. Globalization Institute Working Paper 356, Federal Reserve Bank of Dallas.
  • Reschenhofer, E., 2019. Heteroscedasticity-robust estimation of autocorrelation. Communications in Statistics - Simulation and Computation, 48, 1251-1263.
© 2019, David E. Giles

Sunday, December 2, 2018

December Reading for Econometricians

My suggestions for papers to read during December:


© 2018, David E. Giles

Sunday, February 11, 2018

Recommended Reading for February

Here are some reading suggestions:
  • Bruns, S. B., Z. Csereklyei, & D. I. Stern, 2018. A multicointegration model of global climate change. Discussion Paper No. 336, Center for European, Governance and Economic Development Research, University of Goettingen.
  • Catania, L. & S. Grassi, 2017. Modelling crypto-currencies financial time-series. CEIS Tor Vergata, Research Paper Series, Vol. 15, Issue 8, No. 417.
  • Farbmacher, H., R. Guber, & J. Vikström, 2018. Increasing the credibility of the twin birth instrument. Journal of Applied Econometrics, online.
  • Liao, J. G. & A. Berg, 2018. Sharpening Jensen's inequality. American Statistician, online.
  • Reschenhofer, E., 2018. Heteroscedasticity-robust estimation of autocorrelation. Communications in Statistics - Simulation and Computation, online.

© 2018, David E. Giles

Friday, May 5, 2017

Here's What I've Been Reading

Here are some of the papers that I've been reading recently. Some of them may appeal to you, too:
© 2017, David E. Giles

Saturday, December 3, 2016

December Reading List

Goodness me! November went by really quickly!
 
© 2016, David E. Giles

Thursday, November 3, 2016

T. W. Anderson: 1918-2016

Unfortunately, this post deals with the recent loss of one of the great statisticians of our time - Theodore (Ted) W. Anderson.

Ted passed away on 17 September of this year, at the age of 98.

I'm hardly qualified to discuss the numerous path-breaking contributions that Ted made as a statistician. You can read about those in De Groot (1986), for example.

However, it would be remiss of me not to devote some space to reminding readers of this blog about the seminal contributions that Ted Anderson made to the development of econometrics as a discipline. In one of the "ET Interviews", Peter Phillips talks with Ted about his career, his research, and his role in the history of econometrics.  I commend that interview to you for a much more complete discussion than I can provide here.

(See this post for information about other ET Interviews).

Ted's path-breaking work on the estimation of simultaneous equations models, under the auspices of the Cowles Commission, was enough in itself to put him in the Econometrics Hall of Fame. He gave us the LIML estimator, and the Anderson and Rubin (1949, 1950) papers are classics of the highest order. It's been interesting to see those authors' test for over-identification being "resurrected" recently by a new generation of econometricians. 

There are all sorts of other "snippets" that one can point to as instances where Ted Anderson left his mark on the history and development of econometrics.

For instance, have you ever wondered why we have so many different tests for serial independence of regression errors? Why don't we just use the uniformly most powerful (UMP) test and be done with it? Well, the reason is that no such test (against the alternative of a first-order autoregressive process) exists.

That was established by Anderson (1948), and it led directly to the efforts of Durbin and Watson to develop an "approximately UMP test" for this problem.

As another example, consider the "General-to-Specific" testing methodology that we associate with David Hendry, Grayham Mizon, and other members of the (former?) LSE school of thought in econometrics. Why should we "test down", and not "test up" when developing our models? In other words, why should we start with the most general form of the model, and then successively test and impose restrictions on the model, rather than starting with a simple model and making it increasingly complex? The short answer is that if we take the former approach, and "nest" the successive null and alternative hypotheses in the appropriate manner, then we can appeal to a theorem of Basu to ensure that the successive test statistics are independent. In turn, this means that we can control the overall significance level for the set of tests to what we want it to be. In contrast, this isn't possible if we use a "Simple-to-General" testing strategy.
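To see the arithmetic behind that claim: if the successive test statistics are independent, and the i-th test is conducted at significance level αi, then the overall significance level for k such tests is 1 - (1 - α1)(1 - α2)...(1 - αk). For instance, three independent tests, each at the 5% level, give an overall size of 1 - (0.95)³ ≈ 0.143; conversely, to hold the overall size at 5% we could run each of the three tests at roughly the 1.7% level.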

All of this is spelled out in Anderson (1962) in the context of polynomial regression, and is discussed further in Ted's classic time-series book (Anderson, 1971). The LSE school referred to this in promoting the "General-to-Specific" methodology.

Ted Anderson published many path-breaking papers in statistics and econometrics and he wrote several books - arguably, the two most important are Anderson (1958, 1971). He was a towering figure in the history of econometrics, and with his passing we have lost one of our founding fathers.

References

Anderson, T.W., 1948. On the theory of testing serial correlation. Skandinavisk Aktuarietidskrift, 31, 88-116.

Anderson, T.W., 1958. An Introduction to Multivariate Statistical Analysis. Wiley, New York (2nd ed. 1984).

Anderson, T.W., 1962. The choice of the degree of a polynomial regression as a multiple decision problem. Annals of Mathematical Statistics, 33, 255-265.

Anderson, T.W., 1971. The Statistical Analysis of Time Series. Wiley, New York.

Anderson, T.W. & H. Rubin, 1949. Estimation of the parameters of a single equation in a complete system of stochastic equations. Annals of Mathematical Statistics, 20, 46-63.

Anderson, T.W. & H. Rubin, 1950. The asymptotic properties of estimates of the parameters of a single equation in a complete system of stochastic equations. Annals of Mathematical Statistics, 21, 570-582.

De Groot, M.H., 1986. A Conversation with T.W. Anderson: An interview with Morris De Groot. Statistical Science, 1, 97–105.

© 2016, David E. Giles

Tuesday, July 5, 2016

Recommended Reading for July

Now that the Canada Day and Independence Day celebrations are behind (some of) us, it's time for some serious reading at the cottage. Here are some suggestions for you:


© 2016, David E. Giles

Friday, May 6, 2016

May Reading List

Here's my reading list for May:
  • Hayakawa, K., 2016. Unit root tests for short panels with serially correlated errors. Communications in Statistics - Theory and Methods, in press.
  • Hendry, D. F. and G. E. Mizon, 2016. Improving the teaching of econometrics. Discussion Paper 785, Department of Economics, University of Oxford.
  • Hoeting, J. A., D. Madigan, A. E. Raftery, and C. T. Volinsky, 1999. Bayesian model averaging: A tutorial (with comments and rejoinder). Statistical Science, 14, 382-417. 
  • Liu, J., D. J. Nordman, and W. Q. Meeker, 2016. The number of MCMC draws needed to compute Bayesian credible bounds. American Statistician, in press.
  • Lu, X., L. Su, and H. White, 2016. Granger causality and structural causality in cross-section and panel data. Working Paper No. 04-2016, School of Economics, Singapore Management University.
  • Nguimkeu, P., 2016.  An improved selection test between autoregressive and moving average disturbances in regression models. Journal of Time Series Econometrics, 8, 41-54.

© 2016, David E. Giles

Friday, October 2, 2015

Illustrating Spurious Regressions

I've talked a bit about spurious regressions in some earlier posts (here and here). I was updating an example for my time-series course the other day, and I thought that some readers might find it useful.

Let's begin by reviewing what is usually meant when we talk about a "spurious regression".

In short, it arises when we have several non-stationary time-series variables, which are not cointegrated, and we regress one of these variables on the others.

In general, the results that we get are nonsensical, and the problem is only worsened if we increase the sample size. This phenomenon was observed by Granger and Newbold (1974), and others, and Phillips (1986) developed the asymptotic theory that he then used to prove that in a spurious regression the Durbin-Watson statistic converges in probability to zero; the OLS parameter estimators and R² converge to non-standard limiting distributions; and the t-ratios and F-statistic diverge in distribution, as T ↑ ∞.

Let's look at some of these results associated with spurious regressions. We'll do so by means of a simple simulation experiment.
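The experiment in the post uses EViews, but the basic point is easy to reproduce in any package. Here's a minimal single-replication sketch in Python (my own illustrative code, with an arbitrary seed - not the EViews code behind the post's results):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(42)   # seed chosen arbitrarily
T = 500

# Two independent pure random walks - non-stationary, and not cointegrated
y = np.cumsum(rng.standard_normal(T))
x = np.cumsum(rng.standard_normal(T))

res = sm.OLS(y, sm.add_constant(x)).fit()
print(f"t-ratio on x : {res.tvalues[1]:7.2f}")
print(f"R-squared    : {res.rsquared:7.3f}")
print(f"Durbin-Watson: {durbin_watson(res.resid):7.3f}")
```

More often than not, regressing one random walk on another, entirely independent, random walk yields an apparently "significant" t-ratio, a sizeable R², and a Durbin-Watson statistic close to zero - the classic symptoms.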

Monday, June 29, 2015

The Econometrics of Temporal Aggregation - VI - Tests of Linear Restrictions

This post is one of several related posts. The previous ones can be found here, here, here, here and here. These posts are based on Giles (2014).

Many of the statistical tests that we perform routinely in econometrics can be affected by the level of aggregation of the data. Here, let's focus on time-series data, and on temporal aggregation. I'm going to show you some preliminary results from work that I have in progress with Ryan Godwin. These results relate to one particular test, but the work covers a variety of testing problems.

I'm not supplying the EViews program code that was used to obtain the results below - at least, not for the moment. That's because the results that I'm reporting are based on work in progress. Sorry!

As in the earlier posts, let's suppose that the aggregation is over "m" high-frequency periods. A lower case symbol will represent a high-frequency observation on a variable of interest; and an upper-case symbol will denote the aggregated series.

So,
               Y_t = y_t + y_{t-1} + ... + y_{t-m+1} .

If we're aggregating monthly (flow) data to quarterly data, then m = 3. In the case of aggregation from quarterly to annual data, m = 4, etc.

Now, let's investigate how such aggregation affects the performance of standard tests of linear restrictions on the coefficients of an OLS regression model. The simplest example would be a t-test of the hypothesis that one of the coefficients is zero. Another example would be the F-test of the hypothesis that all of the "slope" coefficients in such a regression model are zero.

Consider the following simple Monte Carlo experiment, based on 20,000 replications.
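(Since I'm not sharing the EViews code, here's a purely hypothetical Python skeleton of an experiment of this general type. The DGP and settings are mine, for illustration only: with i.i.d. errors both rejection rates should sit near the nominal 5%, and the interesting distortions appear only once serial correlation or dynamics are built into the DGP.)

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(123)
m, n_hi, n_reps = 3, 300, 20000    # monthly -> quarterly; 20,000 replications

def aggregate(z, m):
    # Temporal aggregation of a flow variable: sum over m consecutive periods
    return z.reshape(-1, m).sum(axis=1)

def t_test_rejects(y, x, alpha=0.05):
    res = sm.OLS(y, sm.add_constant(x)).fit()
    return res.pvalues[1] < alpha          # t-test of H0: slope = 0

hi = lo = 0
for _ in range(n_reps):
    x = rng.standard_normal(n_hi)
    y = 1.0 + rng.standard_normal(n_hi)    # true slope is zero, so H0 is true
    hi += t_test_rejects(y, x)
    lo += t_test_rejects(aggregate(y, m), aggregate(x, m))

print(f"Empirical size, high-frequency data: {hi / n_reps:.3f}")
print(f"Empirical size, aggregated data    : {lo / n_reps:.3f}")
```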

Tuesday, May 12, 2015

Alternative Tests for Serial Independence

The following question arose in a (fairly) recent email from Daumantas:
"I wonder if you could give any references -- or perhaps make a new blog post -- about testing for serial correlation: Breusch-Godfrey versus Ljung-Box test. I have no problem finding material on the two tests (separately), but I am interested in a comparison of the two. Under what conditions should one test be favoured over the other? What pitfalls should one be aware of before choosing one or the other test? Or perhaps both of them should be put to rest in favour of some new, more general, more robust or more powerful test?"
Daumantas apparently raised the same question on stackexchange, and got some sensible responses.

If this interests you, the response there that refers to Chapter 2 of Fumio Hayashi's, Econometrics, is right on target. There's no point in me repeating it here.

Rob Hyndman also had an interesting and useful post about the L-B test.

My recommendation - stick with the Breusch-Godfrey test if you're testing regression residuals.
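For what it's worth, both tests are one-liners in modern software. Here's a quick sketch using Python's statsmodels, with simulated data purely for illustration:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, acorr_ljungbox

rng = np.random.default_rng(0)
T = 200
x = rng.standard_normal(T)
y = 1.0 + 0.5 * x + rng.standard_normal(T)   # no serial correlation here

res = sm.OLS(y, sm.add_constant(x)).fit()

# Breusch-Godfrey: an LM test based on an auxiliary regression of the
# residuals on the regressors and lagged residuals; valid even when the
# model contains lagged dependent variables
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=4)
print(f"Breusch-Godfrey LM p-value: {lm_pval:.3f}")

# Ljung-Box: a portmanteau test on the residual autocorrelations; strictly
# designed for an observed series, not for estimated regression residuals
print(acorr_ljungbox(res.resid, lags=[4]))
```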


© 2015, David E. Giles

Friday, October 31, 2014

Recent Reading

From my "Recently Read" list:
  • Born, B. and J. Breitung, 2014. Testing for serial correlation in fixed-effects panel data models. Econometric Reviews, in press.
  • Enders, W. and J. Lee, 2011. A unit root test using a Fourier series to approximate smooth breaks. Oxford Bulletin of Economics and Statistics, 74, 574-599.
  • Götz, T. B. and A. W. Hecq, 2014. Testing for Granger causality in large mixed-frequency VARs. RM/14/028, Maastricht University, SBE, Department of Quantitative Economics.
  • Kass, R. E., 2011. Statistical inference: The big picture. Statistical Science, 26, 1-9.
  • Qian, J. and L. Su, 2014. Structural change estimation in time series regressions with endogenous variables. Economics Letters, in press.
  • Wickens, M., 2014. How did we get to where we are now? Reflections on 50 years of macroeconomic and financial econometrics. Discussion Paper No. 14/17, Department of Economics and Related Studies, University of York.
© 2014, David E. Giles

Friday, August 29, 2014

September Reading List

In North America, Labo(u)r Day weekend is upon us. The end of summer. Back to school. Last chance to get some pre-class reading done!

  • Blackburn, M. L., 2014. The relative performance of Poisson and negative binomial regression estimators. Oxford Bulletin of Economics and Statistics, in press.
  • Giannone, D., M. Lenza, and G. E. Primiceri, 2014. Prior selection for vector autoregressions. Review of Economics and Statistics, in press.
  • Gulesserian, S. G. and M. Kejriwal, 2014. On the power of bootstrap tests for stationarity: A Monte Carlo comparison. Empirical Economics, 46, 973-998.
  • Elliott, G. and A. Timmermann, 2008. Economic forecasting. Journal of Economic Literature, 46, 3-56.
  • Kiviet, J. F., 1986. On the rigour of some misspecification tests for modelling dynamic relationships. Review of Economic Studies, 53, 241-261.
  • Otto, G. D. and G. M. Voss, 2014. Flexible inflation forecast targeting: Evidence from Canada. Canadian Journal of Economics, 47, 398-421. 

© 2014, David E. Giles

Monday, April 21, 2014

More On the Limitations of the Jarque-Bera Test

Testing the validity of the assumption that the errors in a regression model are normally distributed is a standard pastime in econometrics. We use this assumption when we construct standard confidence intervals for, or test hypotheses about, the parameters of our models. In a post some time ago I pointed out that this assumption is actually sufficient, but not necessary, for the validity of these inferences.

More recently, here and here, I discussed some aspects of the normality test that most econometricians use - the asymptotically valid test of Jarque and Bera (1987). Let's refer to this as the JB test. In the first of those posts I made brief mention of the finite-sample properties of the JB test, and I concluded:
"However, more recent evidence suggests that the power of the J-B test can be quite low in small samples, for a number of important alternative hypotheses - e.g., see Thadewald and Buning (2004). I'll address this aspect of the J-B test more fully in a later post."
The main results obtained by Thadewald and Buning are summed up in the abstract to their paper...
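If you'd like to explore the finite-sample behaviour yourself, here's a small Python sketch (my own illustrative design, not Thadewald and Buning's) that estimates the power of the JB test against a skewed alternative at n = 30:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_reps = 30, 10000
rejections = 0
for _ in range(n_reps):
    # A skewed alternative: centred chi-squared(4) errors
    e = rng.chisquare(df=4, size=n) - 4.0
    jb_stat, jb_pval = stats.jarque_bera(e)
    rejections += (jb_pval < 0.05)   # scipy uses the asymptotic chi-squared(2) p-value

print(f"Estimated power of the JB test at n = {n}: {rejections / n_reps:.3f}")
```

Note that the p-value here comes from the asymptotic chi-squared distribution, and relying on that approximation in small samples is itself part of the problem being discussed.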

Friday, September 6, 2013

Some More Papers for Your "To Read" List


For better, or worse, here are some of the papers I've been reading lately:
  • Chambers, M. J., J. S. Ercolani, and A. M. R. Taylor, 2013. Testing for seasonal unit roots by frequency domain regression. Journal of Econometrics, in press. 
  • Chicu, M. and M. A. Masten, 2013. A specification test for discrete choice models. Economics Letters, in press. 
  • Hansen, P. R. and A. Lunde, 2013. Estimating the persistence and the autocorrelation function of a time series that is measured with error. Econometric Theory, in press.
  • Liu, Y., J. Liu, and F. Zhang, 2013. Bias analysis for misclassification in a multicategorical exposure in a logistic regression model. Statistics and Probability Letters, in press.
  • Thornton, M., 2013. The aggregation of dynamic relationships caused by incomplete information. Journal of Econometrics, in press.
  • Wang, H. and S. Z. F. Zhou, 2013. Interval estimation by frequentist model averaging. Communications in Statistics - Theory and Methods, in press.   

© 2013, David E. Giles

Friday, August 2, 2013

Allocation Models With Autocorrelated Errors

Not too long ago, I had a couple of posts about "allocation models" (here and here). These models are systems of regression equations in which there is a constraint on the data for the dependent variables for the equations. Specifically, at every point in the sample, these variables sum exactly to the value of a linear combination of the regressors. In practice, this linear combination usually is very simple - it's just one of the regressors.

So, for example, suppose that the dependent variables measure the shares of Canada's exports that go to different countries. These shares must add up to one at each observation. If we have an intercept (a series of "ones") in each equation, then we have an allocation model.

In one of the comments on the earlier posts, I was asked about the possibility of autocorrelated errors in the empirical example that I provided. In my response, I noted that if autocorrelation is present, and is allowed for in the estimation of the model, then special care is needed. In particular, any modification to the model, to allow for a specific form of autocorrelation, must satisfy the "adding up" constraints that are fundamental to the allocation model.

Let's see what this involves, in practice.
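Before getting to that, here's a small numerical check of the adding-up property itself - a Python sketch with made-up data, not the export-shares example from the earlier posts:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T = 100
X = sm.add_constant(rng.standard_normal(T))   # the same regressors in every equation

# Three artificial "share" series that sum to one at every observation
raw = np.exp(rng.standard_normal((T, 3)))
shares = raw / raw.sum(axis=1, keepdims=True)

# Equation-by-equation OLS; with an intercept and common regressors, the
# residuals are forced to sum to zero across equations at each t
resids = np.column_stack([sm.OLS(shares[:, j], X).fit().resid for j in range(3)])
print(np.abs(resids.sum(axis=1)).max())   # ~ 1e-15, i.e. zero to machine precision
```

It's exactly this constraint that any autocorrelation "correction" has to respect - and for AR(1) errors it essentially forces a common autocorrelation coefficient across all of the equations in the system (the classic reference here is Berndt and Savin, 1975, Econometrica).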

Sunday, July 14, 2013

Vintage Years in Econometrics - The 1950's

Following on from my earlier posts about vintage years for econometrics in the 1930's and 1940's, here's my run-down on the 1950's.

As before, let me note that "in econometrics, what constitutes quality and importance is partly a matter of taste - just like wine! So, not all of you will agree with the choices I've made in the following compilation."

Wednesday, June 19, 2013

ARDL Models - Part II - Bounds Tests

[Note: For an important update of this post, relating to EViews 9, see my 2015 post, here.]

Well, I finally got it done! Some of these posts take more time to prepare than you might think.

The first part of this discussion was covered in a (sort of!) recent post, in which I gave a brief description of Autoregressive Distributed Lag (ARDL) models, together with some historical perspective. Now it's time for us to get down to business and see how these models have come to play a very important role recently in the modelling of non-stationary time-series data.

In particular, we'll see how they're used to implement the so-called "Bounds Tests", to see if long-run relationships are present when we have a group of time-series, some of which may be stationary, while others are not. A detailed worked example, using EViews, is included.
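(An aside for non-EViews users: if I'm not mistaken, recent versions of Python's statsmodels - 0.13 and later - also implement the ARDL/UECM framework, including the Pesaran-Shin-Smith bounds test. A minimal sketch with simulated data, not the worked example from the post:)

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import UECM

rng = np.random.default_rng(2)
T = 200
x = np.cumsum(rng.standard_normal(T))        # an I(1) regressor
y = 0.5 * x + rng.standard_normal(T)         # cointegrated with x by construction
data = pd.DataFrame({"y": y, "x": x})

# Unrestricted error-correction form of an ARDL(2,2) model;
# case 3 = unrestricted intercept, no trend (Pesaran, Shin & Smith, 2001)
uecm = UECM(data["y"], lags=2, exog=data[["x"]], order=2)
res = uecm.fit()
print(res.bounds_test(case=3))
```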

Sunday, June 16, 2013

Vintage Years in Econometrics: The 1940's

A Fathers' Day "thank you" to our founding fathers..........

Following on from my earlier post about vintage years for econometrics in the 1930's, here's my take on the 1940's. This is a more challenging decade to assess, given the explosion of major contributions in the second decade of life for the new discipline.

As before, let me note that "in econometrics, what constitutes quality and importance is partly a matter of taste - just like wine! So, not all of you will agree with the choices I've made in the following compilation."

I've added a few "tasting notes" here and there, if I thought they were warranted.