Bad reason vs. bad facts


One of the major issues when you discuss topics with people with whom you disagree is conflict over the acceptability of a particular chain of reasoning or line of analysis. There are usually implicit assumptions within any given analysis which need to be fleshed out, and doing so is usually time consuming. To give an example, I do not agree with the assertion that “IQ has nothing to do with intelligence.” This is a very common background assumption for many people, and many analyses simply make no sense depending on whether or not you accept the viability of a concept like IQ. Talking about the issues at hand is a waste of time when there are such differences in the axioms and background structure of the models one holds, and I can understand why the temptation of extreme subjectivism emerges so often. Looking through the glass darkly can obscure the reality that beyond the glass there is a clear and distinct world.

That is why I think it is important to expose and avoid falsity of fact, however trivial. It is often much easier to agree on basic facts, especially quantitative ones. I do not say that it is always easy, but it is certainly much easier. This is why weblogs such as The Audacious Epigone are so useful; their bread & butter is fact-checking. When blogs first began to make a splash in 2002 the whole idea of “fact checking your ass” was in vogue, but it doesn’t seem like it’s really worked out. What’s really happened is a proliferation of Google Pundits, who know the answers they want, and know how to get those answers out of the slush pile of answers via an appropriate query. Google Punditry is not exploratory data analysis; it’s fishing around for data to match your preconceptions.

Many GNXP readers may not agree with the conservative politics of The Inductivist or The Audacious Epigone, but their data-driven blog posts are often formatted such that you don’t even need to read the commentary after their tables. Eight months ago Kevin Drum of Mother Jones promised to do more digging through the GSS after I’d pointed him to the resources, but it doesn’t seem like it has happened. My GSS and WVS related posts at Secular Right often get picked up by mainstream pundits like Andrew Sullivan, but the utilization of the GSS or WVS interface hasn’t spread. Why? One friend suggested that perhaps people fear what they might find out.
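The bread & butter of such posts is the simple crosstab, a table that speaks for itself. As a toy illustration (the respondent records and field names below are made up, not actual GSS variables), here is the sort of table-first format in question:

```python
# Minimal sketch of a crosstab built from hypothetical survey records.
# The "party" and "belief" fields are invented for illustration; a real
# post would pull an extract from the GSS and apply its survey weights.
from collections import Counter

respondents = [
    {"party": "Dem", "belief": "yes"}, {"party": "Dem", "belief": "no"},
    {"party": "Rep", "belief": "yes"}, {"party": "Rep", "belief": "yes"},
    {"party": "Dem", "belief": "yes"}, {"party": "Rep", "belief": "no"},
]

# Count each (row, column) cell of the table
cells = Counter((r["party"], r["belief"]) for r in respondents)
parties = sorted({r["party"] for r in respondents})
beliefs = sorted({r["belief"] for r in respondents})

# Print the crosstab: rows are parties, columns are beliefs
print("party   " + "  ".join(beliefs))
for p in parties:
    row = "  ".join(str(cells[(p, b)]) for b in beliefs)
    print(f"{p:8}{row}")
```

The format does the talking: once the cell counts are laid out, commentary becomes optional.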

I do agree that the GSS (or WVS) is not an infallible oracle. There are obvious issues with representativeness in the WVS, and the small Ns for some categories in the GSS mean there’s a lot of noise. But with that caution aside, these objections become clear and distinct once one begins working with these tools and data sets. In fact, with something like the GSS or WVS you can check your intuitions about representativeness by digging a little deeper.
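To make that concrete, here is a minimal sketch of one way to check a sample against known population shares: a chi-square goodness-of-fit test. All the counts and “census” shares below are hypothetical, purely for illustration:

```python
# Sketch: does the composition of a survey sample deviate from assumed
# population shares more than chance would allow? All numbers are
# hypothetical; swap in real group counts and census shares.

observed = [520, 310, 120, 50]                # sample counts by group
population_shares = [0.50, 0.30, 0.13, 0.07]  # assumed census shares

n = sum(observed)
expected = [p * n for p in population_shares]

# Pearson chi-square statistic: sum over groups of (O - E)^2 / E
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Critical value of the chi-square distribution, df = 3, 5% level
CRITICAL_5PCT_DF3 = 7.815
print(f"chi2 = {chi2:.2f}; "
      f"significant deviation: {chi2 > CRITICAL_5PCT_DF3}")
```

With these made-up numbers the statistic comes out just under the 5% critical value, i.e., the sample is plausibly representative of the assumed shares; the point is that the check takes a few lines, not an act of faith.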

Addendum: When I do GSS posts people often object in the form of “your data doesn’t prove that!” Interestingly, this objection comes up even when there’s a minimum of commentary. Of course the sort of surface scratches that I do don’t definitively prove or disprove much, at least in general. Rather, they should be starting-off points for further digging.


13 Comments

  1. but the utilization of the GSS or WVS interface hasn’t spread. Why? One friend suggested that perhaps people fear what they might find out! 
     
    Maybe, but if everyone is clueless, then partisans of any stripe will wind up both disappointed and excited, depending on which particular topic they look into. 
     
    The blanket dismissal of using the GSS, the Statistical Abstract, or anything else, must therefore be a refusal to check facts against preconceptions at all — that very idea subverts all punditry. They just know.

  2. Epigone and I explained to Whiskey why his objections to the GSS were wrong-headed here. He at least tried to make up an objection about the GSS being unrepresentative. Lawrence Auster didn’t even accept the possibility of good enough statistics.

  3. to be fair, five thirty eight and the monkey cage fill some of the need in political questions. but in terms of general social science there seems to be an aversion to doing primary data analysis.

  4. What you should promote about the GSS is the fact that one can get all epistemological with it once and for all (sampling methods, satisfactory or unsatisfactory response rate, etc) – then use it permanently with minimal pain. 
     
    I’m not that into sociology proper, but the “you can ‘prove’ anything” problem is the same everywhere. Think of all the papers with leaky methods/syllogisms that you would toss out (or annotate skeptically) in examining whether IQ is real. Talk about labor. Imagine reading Nisbett or Judith Rich Harris and doing a full validation on the methods of, say, their 40 most relied-on references. If you scrutinize and accept the GSS methods you have a friend for life.

  5. re: statistics. as others have noted in this space, the problem isn’t statistics, it’s dishonest/ignorant use of statistics. statistics is to a great extent just a formalization of normal cognitive processes (e.g., looking at trends), and also an attempt to filter out the biases of those processes. pure reliance on verbal arguments is conditional upon trust in your source. sometimes this is necessary, as for many historical facts and structures it’s onerous to dig up cliometric data to illuminate issues for those who aren’t familiar with that area of history. but stuff in the GSS isn’t nearly so complex or difficult to untangle.

  6. It’s not so much the chain of reasoning but the premises, assumptions in other words. Most people won’t argue, for example, that a hypothetical syllogism is an invalid form of argument, but rather that one of the premises is false and accordingly it will bring down the whole argument. 
     
    It brings up another point that the role of logic is the structure of arguments. It’s not logic’s job to evaluate the claim, “On July 12, 2007 an asteroid hit the white house.”

  7. rameau, but we’re not dealing with formal logic in notation. so there’s a lot of, “X entails Y,” answered by, “no, X does not entail Y.” of course, a lot of these disagreements go back to hidden assumptions within the structure of the argument….

  8. but the utilization of the GSS or WVS interface hasn’t spread. Why? One friend suggested that perhaps people fear what they might find out!
     
     My experience when I first got into the GSS was that digging through the GSS variables to find an interesting one, or to find a variable you can use to answer your question, can be a big chore.
     
    Not to mention that sometimes there just isn’t a GSS variable that will let you come up with an answer to your question.

  9. charles, fair enough, but for *political* questions the variables and sample sizes are pretty numerous.

  10. It’s not logic’s job to evaluate the claim, “On July 12, 2007 an asteroid hit the white house.” But that claim can only be validly made if a certain method is followed, and logic is absolutely necessary for the verification of that procedure. 
     
    Logic can’t evaluate basic premises, but it is used to evaluate complex ones.

  11. The GSS is probably the first thing to consult on a given issue re: demographics. 
    However, if you want to survey a parameter across a population with substructure the GSS is suboptimal. E.g., if you wanted to know the percentage of Asian or Jewish people who believe X with the same confidence intervals as the percentage of whites who believe X, you would need to oversample Asians/Jews and undersample whites. This is because confidence intervals are proportional to 1/sqrt(N), while the ratio of whites to Asians in a representative sample is, say, 60:4.
     
    Now you can’t blame the GSS for not doing stratified sampling as it is a general survey. Stratifying by ethnicity means not stratifying by class or education. There’s a tradeoff in the parameter estimation.  
     
     Still, when GSS figures are quoted it would be good to put N and the confidence intervals by the side, especially for conclusions about rare minorities.
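     The 1/sqrt(N) point is easy to make concrete. Below is a small sketch using the normal approximation for a proportion’s confidence interval; the total sample size, the 60%/4% group shares, and the assumed 50% belief rate are all hypothetical:

     ```python
     # Sketch: how the 95% CI for a proportion widens as subgroup N shrinks.
     # The normal-approximation half-width is z * sqrt(p * (1 - p) / N),
     # so it scales like 1/sqrt(N). All sample figures are hypothetical.
     import math

     def ci_halfwidth(p: float, n: int, z: float = 1.96) -> float:
         """Half-width of an approximate 95% CI for a proportion."""
         return z * math.sqrt(p * (1 - p) / n)

     total = 2000                      # hypothetical overall survey size
     for group, share in [("whites", 0.60), ("Asians", 0.04)]:
         n = int(total * share)
         hw = ci_halfwidth(0.5, n)     # assume 50% of the group believes X
         print(f"{group}: N = {n}, 95% CI = 50% +/- {hw * 100:.1f} points")
     ```

     With these numbers the Asian subgroup (N = 80) gets an interval roughly four times wider than the white subgroup (N = 1200), since sqrt(1200/80) ≈ 3.9, which is exactly why the commenter’s oversampling point matters.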

  12. At Ghent University, third year students in Moral Philosophy have to write a paper based on an analysis (in SPSS) of WVS data. It is often very revealing to them as “reality” sometimes turns out to be different from “theory”.

  13. The GSS is probably the first thing to consult on a given issue re: demographics. 
    However, if you want to survey a parameter across a population with substructure the GSS is suboptimal. E.g., if you wanted to know the percentage of Asian or Jewish people who believe X with the same confidence intervals as the percentage of whites who believe X, you would need to oversample Asians/Jews and undersample whites. This is because confidence intervals are proportional to 1/sqrt(N), while the ratio of whites to Asians in a representative sample is, say, 60:4.
     
     
    I agree with gc, I mean, asdf on this.
