Most of Dale and Krueger’s comments relate to the stability of estimates suggesting that women earn less after attending high-SAT colleges. I don’t see particularly compelling evidence here either way, though Hanson is right to note that many of the estimates point in the same direction. I was surprised by their comment, “The paper is not about gender differences from college selectivity, and we have little reason to suspect that there are such differences.” Well, all three drafts of this paper that are online emphasize the results of college attendance for various subgroups — for instance, by race, parental education, and parental income. Surely gender is an equally interesting subgroup.
They do also address the selectivity question — that is, why the Barron’s selectivity measure was large and statistically significant in the working paper but not used in the published paper. They argue that the precise manner in which the Barron’s selectivity measures were coded made a huge difference, and that the result held only for one specification. I’m happy to accept this answer. But as far as the “grand conspiracy” is concerned, I’ll note that even the published paper made a compelling case that both the identity of the school and the tuition paid were hugely important in determining future income. This result, for various reasons, may still have been incomplete. Yet it was the basic message of the published paper, and it’s simply the case that the popular press did not emphasize that result. For the record, I don’t think there was any conspiracy here. But it is awfully easy to trumpet the counter-intuitive but pleasing result — the college you went to doesn’t matter!
Also on the Barron’s measure, Dale and Krueger argue:
“While we did report a 23% return associated with attending the most selective colleges (according to the 1982 Barron’s ranking) in our earliest working paper, these results were from our basic model–which does NOT adjust for student unobserved characteristics.”
Here is the relevant section from Table 7 of their working paper:
If you haven’t seen a regression table before, this will be confusing. The dependent variable — the outcome whose determinants they are testing — is the logarithm of the wage. They’re testing which of the variables listed on the left matter for that outcome, and each column represents a different specification.
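The mechanics behind such a table can be sketched in a few lines: each column corresponds to one OLS regression of log wages on a chosen set of right-hand-side variables. Everything below (the variable names, the sample size, the built-in 0.2 premium) is invented for illustration; this is not the authors’ data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical regressors: a dummy for attending a "most competitive"
# school, plus one control variable.
most_competitive = rng.integers(0, 2, n)
sat_score = rng.normal(1100.0, 150.0, n)

# Synthetic log wages, generated with a built-in 0.2 log-point premium
# on the "most competitive" dummy.
log_wage = (10.0 + 0.2 * most_competitive + 0.001 * sat_score
            + rng.normal(0.0, 0.5, n))

# One "specification" is one choice of right-hand-side variables.
X = np.column_stack([np.ones(n), most_competitive, sat_score])
coef, *_ = np.linalg.lstsq(X, log_wage, rcond=None)

# coef[1] is the estimated selectivity premium, in log points; with this
# synthetic data it should land near the built-in 0.2.
print(coef[1])
```

Adding or dropping columns of `X` and re-running is exactly what moving across the columns of the table amounts to.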
The first three columns restrict the sample to men. The first column tests how these variables affect future wages, without taking into consideration the other colleges you applied to or where you got in. This is the “basic model,” and the .234 here next to “Most Competitive” corresponds to the 23% return they mention above (relative to the lowest category of selectivity). But skip over to column 3. This “self-revelation” model is designed to get at student unobserved characteristics. As the authors write:
“The effect of the Barron’s rating is more robust to our attempts to adjust for unobserved school selectivity than the average-school SAT score. Based on the straightforward regression results in column 1, men who attend the most competitive schools earn 23% more than men who attend very competitive colleges, other variables in the equation being equal. In the self-revelation model, the gap is 13 percent… [An] F-test of the null hypothesis that the Barron’s ratings jointly have no effect on earnings is rejected at the .05 level in the matched applicant model for men.”
Now, this was in response to Hanson’s point. Hanson picked up on the 23% number, and Dale and Krueger are right to note that’s a little high (and Hanson is right to concede). But note that the very next sentence reports results from a specification which does adjust for student unobserved characteristics; and it is also quite high.
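One small caveat on reading these numbers: coefficients in a log-wage regression are in log points, and treating a log-point coefficient directly as a percentage return is only an approximation, one that degrades as the coefficient grows. Treating the quoted 23% and 13% figures as log-point coefficients (my reading, not something the authors state), the exact implied returns are a bit higher:

```python
import math

# Treat the quoted figures as log-point coefficients (an assumption on
# my part) and convert to exact percent changes via exp(b) - 1.
for b in (0.23, 0.13):
    exact = math.exp(b) - 1
    print(f"{b:.2f} log points -> {exact:.1%}")

# prints:
# 0.23 log points -> 25.9%
# 0.13 log points -> 13.9%
```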
Finally, I’ll note that while the authors emphasize the significance (or lack thereof) of individual estimates in individual years, my simple calculations suggest that the aggregate, pooled effect of their variables might be economically quite large.
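The pooling exercise I have in mind can be sketched with a standard inverse-variance (precision-weighted) average. All numbers below are hypothetical, not taken from the paper, and the calculation assumes the yearly estimates are independent, which panel data on the same individuals may violate; the point is only that several individually insignificant estimates can pool into a clearly significant one.

```python
import math

# Hypothetical year-by-year estimates (log points) and standard errors;
# each one alone is insignificant at the 5% level (|b / se| < 1.96).
estimates = [0.06, 0.09, 0.05, 0.08]
std_errors = [0.05, 0.06, 0.05, 0.05]

# Inverse-variance weights: more precise estimates count for more.
weights = [1 / se**2 for se in std_errors]
pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# The pooled z-statistic clears 1.96 even though no single year does.
print(pooled, pooled / pooled_se)
```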