Wednesday, July 29, 2009

Bad reason vs. bad facts   posted by Razib @ 7/29/2009 01:28:00 PM

One of the major issues when you discuss topics with people with whom you disagree is conflict over the acceptability of a particular chain of reasoning or line of analysis. There are usually implicit assumptions within any given analysis which need to be fleshed out, and doing so is usually time consuming. To give an example, I do not agree with the assertion that "IQ has nothing to do with intelligence." This is a very common background assumption for many people, so many analyses simply make no sense depending on whether or not you accept the viability of a concept like IQ. Talking about the issues at hand is a waste of time when there are such differences in the axioms and background structure of the models one holds, and I can understand why the temptation of extreme subjectivism emerges so often. Looking through the glass darkly can obscure the reality that beyond the glass there is a clear and distinct world.

That is why I think it is important to expose and avoid falsity of fact, however trivial. It is often much easier to agree on basic facts, especially quantitative ones. I do not say that it is always easy, but it is certainly much easier. This is why weblogs such as The Audacious Epigone are so useful: their bread & butter is fact-checking. When blogs first began to make a splash in 2002 the whole idea of "fact checking your ass" was in vogue, but it doesn't seem like it's really worked out. What's really happened is a proliferation of Google Pundits, who know the answers they want, and know how to get those answers out of the slush pile of answers via an appropriate query. Google Punditry is not exploratory data analysis; it's fishing around for data to match your preconceptions.

Many GNXP readers may not agree with the conservative politics of The Inductivist or The Audacious Epigone, but their data-driven blog posts are often formatted such that you don't even need to read the commentary after their tables. Eight months ago Kevin Drum of Mother Jones promised to do more digging through the GSS after I'd pointed him to the resources, but it doesn't seem to have happened. My GSS and WVS related posts at Secular Right often get picked up by mainstream pundits like Andrew Sullivan, but use of the GSS or WVS interface hasn't spread. Why? One friend suggested that perhaps people fear what they might find out.

I do agree that the GSS (or WVS) is not an infallible oracle. There are obvious issues with representativeness in the WVS, and the small Ns for some categories in the GSS mean there's a lot of noise. But with that caution aside, these objections become clear and distinct once one begins working with these tools and data sets. In fact, with something like the GSS or WVS you can check your intuitions about representativeness by digging a little deeper.
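The small-N caveat is easy to put numbers on. A minimal sketch, using the standard normal-approximation margin of error for a sample proportion and hypothetical subgroup sizes (the 40% figure and the Ns are made up for illustration, not taken from the GSS):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Same observed proportion, very different subgroup sizes:
# a full-sample category vs. a small GSS-style slice.
for n in (2000, 200, 30):
    moe = margin_of_error(0.40, n)
    print(f"N = {n:4d}: 40% +/- {moe:.1%}")
```

With the full sample the estimate is pinned down to a couple of points, but once you slice down to a few dozen respondents the interval is wide enough that a lot of apparent differences between categories are just noise.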

Addendum: When I do GSS posts people often object in the form of "your data doesn't prove that!" Interestingly, this objection comes up even when there's a minimum of commentary. Of course the sort of surface scratches that I do don't definitively prove or disprove much, at least in general. Rather, they should be starting points for further digging.