This blog has moved servers. Aside from aesthetic and minor functionality differences, everything should be as it was before. The transition caused some issues with the mathematical typesetting, so a few bugs may remain.

Economics, econometrics, etc.

Bryant Chen and Judea Pearl have published an interesting piece in which they critically examine the discussions (or lack thereof) of causal interpretations of regression models in six econometrics textbooks. In this post, I provide brief assessments of the discussion of causality in nine additional econometrics texts of various levels and vintages, and close with a few remarks about causality in textbooks from the perspective of someone who does, and teaches, applied econometrics. Like Chen and Pearl, I find that some of these textbooks provide weak or misleading discussions of causality, but I also find one very good and one excellent discussion in relatively recent texts. I argue that the discussion of causality in econometrics textbooks appears to be improving over time, and that the oral tradition in economics is not well-reflected in econometrics textbooks.

The Chen and Pearl paper has been around for a while in working paper form and recently came out in the *Real World Economics Review*; it is also available here from the authors with much clearer typesetting.

The additional textbooks I discuss below are: Amemiya (1985), Kmenta (1986), Davidson and MacKinnon (1993), Gujarati (1999), Hayashi (2000), Wooldridge (2002), Davidson and MacKinnon (2004), Dielman (2005), and Cameron and Trivedi (2005).

**Tags:** causality, econometrics, Judea Pearl, textbooks

In 2002, I wrote a small piece noting that Steve Keen’s novel criticism of economics in his book *Debunking Economics* is simply wrong (Debunking Debunking Economics). Part of that novel criticism is Keen’s claim that the standard analysis of the competitive model is *mathematically* wrong, and if one does the math correctly, one finds that the competitive equilibrium and the collusive outcome are the same. Which is an extraordinary claim! Everyone has been just doing the math wrong for well over a century, and if we were to do the math correctly we’d find that all industry structures actually behave as if the industry were monopolized, under *textbook* assumptions. Again, it’s important to emphasize this isn’t an appeal to some more complex model, or to empirical evidence, or criticism of some unrealistic assumption in the standard model: Keen’s claim is that this theoretical result follows from textbook assumptions if one merely does the math correctly.
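To fix ideas, the textbook contrast at stake can be sketched as follows (standard notation, mine rather than Keen's): with inverse demand $P(Q)$, total output $Q = \sum_j q_j$, and cost $C(\cdot)$, the competitive and collusive first-order conditions differ precisely in how the price effect of firm $i$'s output is treated.

```latex
\pi_i = P(Q)\,q_i - C(q_i), \qquad Q = \textstyle\sum_j q_j .
% Price-taking (competitive) firm: treats P as given,
\frac{\partial \pi_i}{\partial q_i} = P - C'(q_i) = 0
  \;\Rightarrow\; P = C'(q_i).
% Joint monopoly (collusion): internalizes the price effect on total output,
\frac{d}{dQ}\bigl[P(Q)\,Q - C(Q)\bigr] = P + P'(Q)\,Q - C'(Q) = 0.
```

These two conditions coincide only if the term involving $P'(Q)$ is forced into the competitive firm's first-order condition, which is, roughly, what Keen's recalculation does; the standard view is that price-taking is a behavioral assumption, not a step of calculus that can be "done wrong."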

Brian Milner is a "senior economics writer and global markets columnist" at Canada's largest and arguably most highly respected newspaper, the Globe and Mail. Milner doesn't understand what economists mean by the word "efficient," doesn't understand the elements of the efficient markets hypothesis (EMH), and, worst, uncritically repeats nonsense from David Orrell, whose awful anti-scientific screed I reviewed here. Why oh why, as Brad DeLong likes to say, can't we have a better press corps?

Read the rest of this entry »

Econometricians commonly conduct inference based on covariance matrix estimates that are consistent in the presence of arbitrary forms of heteroskedasticity; the associated standard errors are referred to as "robust" (also, confusingly, White, or Huber-White, or Eicker-Huber-White) standard errors. These are easily requested in Stata with the "robust" option, as in the ubiquitous

`reg y x, robust`

Everyone knows that the usual OLS standard errors are generally "wrong," that robust standard errors are "usually" bigger than OLS standard errors, and that it often "doesn't matter much" whether one uses robust standard errors. It is whispered that there may be mysterious circumstances in which robust standard errors are smaller than OLS standard errors. Textbook discussions typically present the nasty matrix expressions for the robust covariance matrix estimate, but do not discuss in detail when robust standard errors matter or in what circumstances robust standard errors will be smaller than OLS standard errors. This post attempts a simple explanation of robust standard errors and circumstances in which they will tend to be much bigger or smaller than OLS standard errors.
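As a minimal illustration (a numpy sketch of my own, not code from the post; the simulated data-generating process is an assumption chosen so that error variance rises with the regressor), the usual OLS covariance estimate $s^2 (X'X)^{-1}$ and the White (HC0) sandwich estimate $(X'X)^{-1} X' \mathrm{diag}(\hat{e}_i^2) X (X'X)^{-1}$ can be computed directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# Heteroskedastic errors: variance increases with |x|,
# so extreme x values carry high-variance disturbances.
e = rng.normal(size=n) * (0.5 + np.abs(x))
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
k = X.shape[1]

# Usual OLS covariance: s^2 (X'X)^{-1}
s2 = resid @ resid / (n - k)
V_ols = s2 * XtX_inv

# White (HC0) sandwich: (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)
V_hc0 = XtX_inv @ meat @ XtX_inv

se_ols = np.sqrt(np.diag(V_ols))
se_hc0 = np.sqrt(np.diag(V_hc0))
print("OLS SEs:   ", se_ols)
print("Robust SEs:", se_hc0)
```

Because high-variance observations here occur at extreme values of the regressor, the robust standard error on the slope comes out larger than the OLS one; reversing that pattern (low variance at extreme $x$) would tend to make the robust standard error smaller.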

A short article I wrote in 2002 regarding the novel arguments in Steve Keen’s *Debunking Economics* has been hard to track down for a while, so I’m making it available here.

Click here to download a copy (debunk.pdf).

Unfortunately, the link to Keen’s paper on the first page is broken. I attempted to get the paper from Keen’s site, but it’s now behind a paywall! I think the paper was called “A 75th Anniversary Gift for Sraffa,” but I failed to locate a copy.

**Tags:** economics

Copyright © 2016 M. Christopher Auld