Proposed for discussion: We should all stop reporting regression results with one or more asterisks for significance levels, and just give standard errors instead.
Why?
First, because use of the stars confounds statistical and economic significance, as then-Donald McCloskey so nicely put it in that classic article. An estimate may be more than two standard errors from zero, but still too small to be economically important. Conversely, it may be less than two standard errors from zero, but still convey useful information, since zero is not necessarily the relevant null. (And this is leaving aside the problem that significance levels become increasingly hard to interpret the more regressions you run, and these days people run a lot of regressions.)
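To make that concrete, here's a toy sketch in Python. The numbers, and the 0.5 cutoff for "economically important," are entirely made up for illustration; nothing here comes from any real regression.

```python
# Toy illustration (made-up numbers): statistical vs. economic significance.
# Suppose, hypothetically, that an effect smaller than 0.5 is too small to matter.

ECONOMIC_THRESHOLD = 0.5  # hypothetical cutoff for "big enough to matter"

estimates = {
    # label: (point estimate, standard error)
    "A (starred but tiny)":         (0.04, 0.01),  # t = 4: lots of stars
    "B (no stars but informative)": (0.45, 0.30),  # t = 1.5: none
}

for label, (beta, se) in estimates.items():
    t = beta / se
    ci_low, ci_high = beta - 1.96 * se, beta + 1.96 * se
    stars = "***" if abs(t) > 2.58 else "**" if abs(t) > 1.96 else ""
    print(f"{label}: beta = {beta:.2f}, se = {se:.2f}, t = {t:.1f} {stars}")
    print(f"   95% CI: [{ci_low:.2f}, {ci_high:.2f}]"
          f" vs. economic threshold of {ECONOMIC_THRESHOLD}")
```

Estimate A earns its stars, but its whole confidence interval sits far below anything that matters; estimate B earns none, but its interval includes effects that would matter a great deal. The standard errors tell you that; the stars actively hide it.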
Second, and even more seriously, because it leads to a focus on qualitative rather than quantitative results, as Deirdre McCloskey so damningly laid out in this recent pamphlet. I reckon there are far more interesting economic questions that take the form of how much rather than whether, but the habit of reporting significance levels rather than standard errors implicitly assumes that you are only interested in whether questions, specifically whether or not the effect predicted by theory exists. Significance levels don't give you any help in determining whether two estimates are consistent with each other. They're suited to qualitative, abstract-formal work, not to concrete, historical, or policy-oriented work.
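On the consistency point, another made-up sketch (hypothetical numbers, independent samples assumed): one estimate gets stars, the other doesn't, and yet the two are statistically indistinguishable from each other.

```python
# Toy illustration (made-up numbers): stars can't tell you whether two
# estimates agree with each other, but their standard errors can.
import math

beta1, se1 = 0.80, 0.30   # t ~ 2.7: gets stars under the usual convention
beta2, se2 = 0.30, 0.35   # t ~ 0.9: gets none

# Test whether the two estimates differ (independent samples assumed).
diff = beta1 - beta2
se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
z = diff / se_diff

print(f"estimate 1: {beta1:.2f} (se {se1:.2f}), t = {beta1 / se1:.1f}")
print(f"estimate 2: {beta2:.2f} (se {se2:.2f}), t = {beta2 / se2:.1f}")
print(f"difference: {diff:.2f} (se {se_diff:.2f}), z = {z:.1f}")
# z ~ 1.1: a 'starred' estimate and an 'unstarred' one,
# statistically indistinguishable from each other.
```

Reporting only the stars invites the conclusion that the effect "exists" in one sample and not the other; reporting the standard errors shows the two estimates are perfectly consistent.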
I don’t claim any of these observations are original. I’d even say they were commonplace — except why, then, do people insist on scattering those stupid little stars all over their tables, instead of just reporting the (much more informative) standard errors?