Tuesday, February 24, 2009

Male superiority at chess and science cannot be explained by statistical sampling arguments   posted by agnostic @ 2/24/2009 05:37:00 PM

A new paper by Bilalic et al. (2009) (read the PDF here) tries to account for male superiority in chess with a statistical sampling argument: men make up a much larger fraction of chess players, and the n highest extreme values -- say, the top 100 ranked players -- are expected to be greater in a large sample than in a small one. In fact, this explanation is only a rephrasing of the question: why are men so much more likely to dedicate themselves to chess in the first place?

Moreover, data from domains where men and women are equally represented in the sample, or where women are overrepresented, do not support the hypothesis -- men continue to dominate, even when vastly underrepresented, in domains that rely on skills at which males excel compared to females. I show this with the example of fashion designers, where males are hardly present in the sample overall but thrive at the elite level.

First, the authors review the data showing that male chess players really are better than female ones (p. 2):

For example: not a single woman has been world champion; only 1 per cent of Grandmasters, the best players in the world, are female; and there is only one woman among the best 100 players in the world.


The authors then estimate the male superiority at rank n, from 1 to 100, using the entire sample's mean and s.d., and the fraction of the sample that is male and female. Here is how the real data compare to this expectation (p.2):

Averaged over the 100 top players, the expected male superiority is 341 Elo points and the real one is 353 points. Therefore 96 per cent of the observed difference between male and female players can be attributed to a simple statistical fact -- the extreme values from a large sample are likely to be bigger than those from a small one.


Therefore (p. 3):

Once participation rates of men and women are controlled for, there is little left for biological, environmental, cultural or other factors to explain. This simple statistical fact is often overlooked by both laypeople and experts.


Of course, this sampling argument doesn't explain anything -- it merely pushes the question back a level. Why are men 16 times more likely than women to compete in chess leagues? We are back to square one: maybe men are better at whatever skills chess tests, maybe men are more ambitious and competitive even when they're equally skilled as women, maybe men are pressured by society to go into chess and women away from it. Thus, the question staring us in the face has not been resolved at all, but merely written in a different color ink.
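To be clear, the sampling effect itself is real and easy to reproduce. Here is a minimal Monte Carlo sketch with illustrative made-up numbers (a 16:1 pool ratio, roughly as in the German rating list, but an arbitrary Elo mean and SD rather than the paper's actual parameters): when both pools are drawn from the very same distribution, the larger pool's top 100 still sits well above the smaller pool's.

```python
import random
import statistics

random.seed(0)

# Illustrative parameters only: same normal distribution for both sexes,
# but a pool 16 times larger for males.
n_male, n_female = 16000, 1000
male = sorted((random.gauss(2000, 200) for _ in range(n_male)), reverse=True)
female = sorted((random.gauss(2000, 200) for _ in range(n_female)), reverse=True)

# Average Elo gap between the male and female player at each rank 1..100.
gap = statistics.mean(male[k] - female[k] for k in range(100))
print(round(gap))  # a sizable gap, produced by sample size alone
```

So the effect is genuine; the dispute is over what it explains, not whether it exists.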

The authors are no fools and go on to mention what I just said. They then review some of the arguments for and against the various explanations. But this means that their study does not test any of the hypotheses at all -- aside from rephrasing the problem, the only portion of their article that speaks to which answer may be correct is a two-paragraph literature review. For example, maybe females on average perform worse at chess-related skills, and so weed themselves out earlier, in the same way that males under 6'3 would be more likely than males over 6'3 to move on from basketball and find a more suitable hobby. Here is the authors' response to this hypothesis (p. 3, my emphasis):

Whatever the final resolution of these debates [on "gender differences in cognitive abilities"], there is little empirical evidence to support the hypothesis of differential drop-out rates between male and females. A recent study of 647 young chess players, matched for initial skill, age and initial activity found that drop-out rates for boys and girls were similar (Chabris & Glickman 2006).


Well no shit -- they removed the effect of initial skill, and thus how well suited you are to the hobby with no preparation, and so presumably due to genetic or other biological factors. And they also removed the effect of initial activity, and thus how enthusiastic you are about the hobby. And when you control for initial height, muscle mass, and desire to compete, men under 6'3 are no more or less likely to drop out of basketball hobbies than men over 6'3. How stupid do these researchers think we are?

So, this article really has little to say about the question of why men excel in chess or science, and it's baffling that it got published in the Proceedings of the Royal Society. The natural inference is that it was not chosen based on how well it could test various hypotheses -- whether pro or contra the Larry Summers ideas -- but in the hope that it would convince academics that there is really nothing to see here, so just move along and get home because your parents are probably worried sick about you.

Now, let's pretend to do some real science here. The authors' hypothesis is that the pattern in chess or science can be accounted for by their statistical sampling argument -- but of course, men dominate all sorts of fields, including those where they're roughly equally represented in the pool of competitors, and even those where they're outnumbered in that pool. Occam's Razor requires us to find a simple account of all these patterns, rather than postulating a separate one for each case. The simple explanation is that men excel in these fields due to underlying differences in genes, hormones, social pressures, or whatever.

The statistical sampling argument can only capture one piece of the pattern -- male superiority where males make up more of the sample. Any of the non-sampling hypotheses, including the silly socio-cultural ones, is at least in the running for accounting for the big picture of male dominance regardless of men's fraction of the sample.

To provide some data, I direct you to an analysis I did three years ago of male vs. female fashion designers. Here, I'll consider "the sample of fashion designers" to be students at fashion schools, since that's what the data were: fashion students are the ones who will make up the pool of fashion designers upon graduating. I used four measures of eminence: 1) being chosen to enter the Council of Fashion Designers of America; 2) having an entry in either of two major fashion encyclopedias, both edited by women (Who's Who in Fashion and The Encyclopedia of Clothing and Fashion); 3) having their collections listed on Vogue's website; and 4) winning the CFDA's highest honor for emerging talent, the Perry Ellis Award.

The male : female ratio in the pool of fashion students is 1 : 13 at Parsons and 1 : 5.7 at FIT. So the female majority in the sample of fashion designers is not quite as extreme as the male majority in chess leagues, but pretty close. The statistical sampling argument predicts that females should outnumber males at the top. But they don't -- the M : F ratios for the four measures above are, respectively, 1.29 : 1; 1.5 : 1 and 1.9 : 1 (one for each encyclopedia); 1.8 : 1; and 3.6 : 1. Again, this isn't as extreme as male superiority in chess, but recall how underrepresented males are in the sample to begin with!
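For contrast, here is what the sampling argument itself would predict for fashion. The equal-talent assumption is the argument's own, and the 1 : 13 pool ratio is the Parsons figure above; everything else is an illustrative sketch, not real data.

```python
import random

random.seed(0)

# Sampling-argument prediction: both sexes drawn from the SAME talent
# distribution, with the ~1:13 male:female enrollment ratio at Parsons.
n_male, n_female = 1000, 13000
talent = [(random.gauss(0, 1), 'M') for _ in range(n_male)] + \
         [(random.gauss(0, 1), 'F') for _ in range(n_female)]

top100 = sorted(talent, reverse=True)[:100]
males_at_top = sum(1 for _, sex in top100 if sex == 'M')
print(males_at_top)  # far fewer than 50 -- close to the 1:13 pool ratio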

(For other design fields that males tend to have greater interest in, such as architecture, the M : F ratios among the winners of the Pritzker Prize and the AIA Gold Medal are, respectively, 27 : 1 and 61 : 0.)

The authors' statistical sampling argument is not a null hypothesis that we reject or fail to reject in particular cases -- rejecting it in fashion design, failing to reject it in chess. It is not a hypothesis at all, but simply a rephrasing of the observation that men dominate certain fields, measured by their greater participation rates. Again, it does not address why males are so much more likely to participate in chess leagues to begin with, which could be due to any of the existing hypotheses about male superiority. The point is that this is a widespread phenomenon that requires a single explanation applying across domains.

I find the genetic and hormonal influences on the mean and variance of cognitive ability and personality traits to be the most promising (just search our archives for relevant keywords to find the discussions). But this study of chess players offers nothing new to the debate, and could not do so even in principle, as it doesn't make a novel hypothesis, apply a novel test to existing data, or apply existing tests on novel data. You can reformulate the observation or problem however you please, but that doesn't make the testing of hypotheses go away.

Reference:

Bilalic, Smallbone, McLeod, and Gobet (2009). Why are (the best) Women so Good at Chess? Participation Rates and Gender Differences in Intellectual Domains. Proc. R. Soc. B, 276, 1161–1165.





Tuesday, July 08, 2008

Summers part 29,476   posted by Herrick @ 7/08/2008 09:43:00 PM

Slate has been having a debate on sex differences. Along the way, they hit on a key Summers issue: The apparent higher male variability of math scores. Shaffer, the author, refers to the classic Feingold piece, a cross-cultural meta-study of the variability of mental abilities across genders. Shaffer makes the common claim that there are data on both sides--sometimes women have a higher variance, and sometimes the men do. But is the difference statistically significant?

I did a simple analysis of Feingold's data from 54 math tests from 20 countries, and 19 tests of spatial ability from 9 countries. I ran least squares and least absolute deviation tests.

Here are the p-values for the restriction that men and women have equal variability:

Math, least squares: p<0.1%
Math, least absolute deviation: p<5%
Spatial, least squares: p<10%
Spatial, least absolute deviation: p=11% (but only 19 observations!)

OK, so it's reasonable to conclude that men have higher variability in this cross country sample, and that the cases of greater female variability are just flukes. But are the differences quantitatively significant, not just statistically significant?

Feingold, the author of the study, says no. He notes: "The median V[ariance]R[atio] of 1.09 indicated greater male variability [on math tests]," then claims that "the magnitude of the gender difference was trivial." Not so. Excel will show you that at three or four standard deviations above the mean -- Larry Summers territory, to be sure -- a variance ratio of 1.09 is enough to get you a 2-to-1 ratio. And that's with no difference in means whatsoever.
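You don't even need Excel: the arithmetic falls out of the normal tail function. A sketch using only the quoted variance ratio of 1.09 and assuming equal means:

```python
from math import erfc, sqrt

def upper_tail(z, sd=1.0):
    """P(X > z) for a normal with mean 0 and the given SD."""
    return 0.5 * erfc(z / (sd * sqrt(2)))

# Equal means; a 1.09 variance ratio means male SD = sqrt(1.09).
male_sd = sqrt(1.09)

# Male : female ratio of tail probabilities at 3 and 4 SDs above the mean.
ratios = {z: upper_tail(z, male_sd) / upper_tail(z) for z in (3, 4)}
print(ratios)  # roughly 1.5:1 at three SDs, 2:1 at four
```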

This paper (Table 1, page 10) works out the rough gender ratios you'd expect to see under various assumptions for means and variances. The bottom line is no surprise: with small differences in means plus small differences in variance, you can get big results: 4 to 1 ratios are easy to come by, and 10 to 1 are plausible. Yes, yes, further research is needed, but most of the research is pointing in the same direction. And we Bayesians know what to do when research mostly points in one direction....
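Adding even a small mean gap compounds quickly. A sketch with assumed numbers of my own -- a 0.2 SD male advantage in means and a 1.15 variance ratio, illustrative values rather than entries from that paper's Table 1:

```python
from math import erfc, sqrt

def upper_tail(cutoff, mean=0.0, sd=1.0):
    """P(X > cutoff) for a normal with the given mean and SD."""
    return 0.5 * erfc((cutoff - mean) / (sd * sqrt(2)))

# Assumed (illustrative) parameters: male mean 0.2 SD above female mean,
# variance ratio 1.15; cutoff 4 SDs above the female mean.
ratio = upper_tail(4, mean=0.2, sd=sqrt(1.15)) / upper_tail(4)
print(round(ratio, 1))  # well above 4:1 from modest assumptions
```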

(Oh, and the median male/female variance ratio for spatial ability in Feingold's data is 1.14. And yes, none of this gets at genes v. culture. But let's start with the journalism before we head to causation.)





Friday, January 25, 2008

What needs to change in academia?   posted by the @ 1/25/2008 06:08:00 PM