
Writing to Science : Commenting on a blog :: Apple : X?

In February, a little “Education Forum” article in Science analyzed quite a bit of data and came to the conclusion that standardized test scores are a good predictor of a number of measures of success in graduate school. Check the article for the actual numbers; there are quite high (~0.4) correlations between various test scores and outcomes like faculty reviews, publication record, GPA, and graduation. This is wholly unsurprising.

This week’s issue carries a number of responses. One is seriously on par with comments we get on posts about standardized tests on this blog (I don’t mean to offend regular commenters, but that’s not a compliment). And it’s from a professor at MIT:

The Education Forum by N. R. Kuncel and S. A. Hezlett is scientifically unsound and socially reprehensible. The authors, editors, and approving reviewers must all bear responsibility for publication of a report that is fundamentally flawed, but that, if unchallenged, could set back hard-won progress toward reducing unfair discrimination in graduate school admissions by decades. Members of admissions committees who are prone to unfairly discriminate against underrepresented graduate school applicants may use Kuncel and Hezlett’s work to justify excluding students solely on the basis of the authors’ erroneous assertion that test scores commonly evaluated for graduate school admission predict future graduate and postgraduate performance. Kuncel and Hezlett write, “Accurately predicting which students are best suited for postbaccalaureate graduate school programs benefits the programs, the students, and society at large, because it allows education to be concentrated on those most likely to profit.” Even if standardized testing could identify students who were “most likely to profit” from a graduate education, only a crude, backward society would actively seek to limit opportunity in this manner. However, standardized testing cannot do this. Kuncel and Hezlett make the elementary error of equating aggregate correlations with predictive power. Nothing in their analysis permits an admissions committee to look at an applicant’s test scores and validly predict what that student would accomplish in the graduate program or their career thereafter. In fact, even their misrepresented correlation analysis has obvious flaws. No attention is given to the likelihood that the specific test score distributions of unfairly discriminated groups will differ substantially from that of the larger majority group whose distributions predominate in the correlation data [note that this point was explicitly considered in the original paper, and the correlations hold *within* groups].
Finally, the misrepresented correlations are themselves over-stated by the authors. More than half of the variance in “later performances” cannot be attributed to the observed variance in standardized test scores.

You can walk through all the internal inconsistencies and irrationalities yourself (or read the response), but it’s a little frightening to think that someone wanted to sign their name to this and submit it for publication.
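One of those inconsistencies is worth spelling out: a correlation of ~0.4 does mean that “more than half of the variance” in later performance is unexplained (r² = 0.16), but that is entirely compatible with the scores being useful for admissions in aggregate. A minimal simulation sketch (the correlation value comes from the post; the sample size and the top-20% selection cutoff are my own illustrative assumptions, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
r = 0.4  # correlation in the ballpark the Education Forum article reports

# Generate standardized (test score, later performance) pairs with correlation r.
score = rng.standard_normal(n)
perf = r * score + np.sqrt(1 - r**2) * rng.standard_normal(n)

# r^2 ~ 0.16: the scores "explain" well under half the variance...
print(np.corrcoef(score, perf)[0, 1] ** 2)

# ...yet selecting on scores still shifts the aggregate outcome noticeably.
# Hypothetical cutoff: compare the top 20% of scorers against the rest.
cutoff = np.quantile(score, 0.8)
gap = perf[score > cutoff].mean() - perf[score <= cutoff].mean()
print(gap)  # gap in performance, in standard-deviation units
```

The point is that a low r² describes uncertainty about any single student, while an admissions committee acts on expected differences between groups of applicants, and those can be substantial even at r = 0.4.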
