

Genes, Brains and the Perils of Publication

I have no wish to criticize these findings as such. But the way in which this paper is written is striking. The negative results are passed over as quickly as possible. This despite the fact that they are very clear and easy to interpret – the rs1344706 variant has no effect on cognitive task performance or neural activation. It is not a cognition gene, at least not in healthy volunteers.

By contrast, the genetic association with connectivity is modest (see the graphs above – there is a lot of overlap), and very difficult to interpret, since it is clearly not associated with any kind of actual differences in behaviour.

And yet this positive result got the experiment published in no less a journal than Science! The negative results alone would have struggled to get accepted anywhere, and would probably have ended up either unpublished, or published in some rubbish minor journal and never read. It’s no wonder the authors decided to write their paper in the way they did. They were just doing the smart thing. And they are perfectly respectable scientists – Andreas Meyer-Lindenberg, the senior author, has done some excellent work in this and other fields.


10 Comments

  1. Negative results should be published somewhere, however briefly. Otherwise, some idiot will do a ‘meta-analysis’ of published results, and find – lo! – there is a positive effect. Maybe there should be a special ‘Journal of Negative Results’?

  2. There is a short word for this: dishonesty.  
     
    Since I first wrote about the progressive, long term decline of truthfulness in science –  
     
    http://medicalhypotheses.blogspot.com/2009/02/transcendental-truth-in-science.html –  
     
    and the related phenomenon of Zombie science –  
     
    http://medicalhypotheses.blogspot.com/2008/07/zombie-science-dead-but-wont-lie-down.html - 
     
    I have been in contact with several scientists, some extremely eminent, who confirm that dishonesty, hype, spin, and selective reporting – in a word, *lying* – are endemic in science, including and especially at high levels, and that people are finding it increasingly hard to sift the wheat from the chaff.  
     
    Mortgages are not the only inflationary bubble – science is another. There is a lot *less* science going on than appears under that designation in the journals.

  3. Depends what you mean by “perfectly respectable” I suppose. A stranger to propriety may be a competent scientist.

  4. In the popular mind and the mind of many scientists, science seems tied up with ideas of success and increased possibility. This is presumably because of the many powerful tech and medical applications. But a negative scientific result in that respect — either saying that we don’t know something, or that we can’t do something — is science too. The lesson of science isn’t just possibility — it’s constraints and limitations too.

  5. David B : There is. In fact there are several, I think. However they rely on scientists a) caring enough to submit stuff to them and b) being honest enough to submit a negative result even if it doesn’t fit with their pet theory. 
     
    bgc: Quite so. What’s fascinating about the phenomenon of selective reporting is that it is widely accepted as “part of scientific life”, yet simply fabricating results is entirely taboo. If someone were found to have fabricated even one data point, their career would be over. Whereas everyone knows that selective reporting happens, and the overall effect is just the same as if people were fabricating data. (What really is the difference between making up a result at p=0.05, and doing 20 studies, finding one result at p=0.05, and only publishing that one? Well – making up the result is a lot cheaper! From a strictly economic perspective it should be preferred…)
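The arithmetic behind that 20-studies point is easy to check. Here is a minimal Python sketch (the 20-study count and the 0.05 threshold come from the comment; the number of simulated "labs" is an arbitrary choice), relying on the fact that under the null hypothesis p-values are uniformly distributed on [0, 1]:

```python
import random

random.seed(42)

ALPHA = 0.05
N_STUDIES = 20   # studies run per "lab", as in the comment
N_LABS = 10_000  # simulated labs; arbitrary, just for a stable estimate

# Under the null hypothesis, each study's p-value is Uniform(0, 1).
# Count how many labs get at least one "significant" study by chance.
hits = 0
for _ in range(N_LABS):
    pvals = [random.random() for _ in range(N_STUDIES)]
    if min(pvals) < ALPHA:
        hits += 1

simulated = hits / N_LABS
analytic = 1 - (1 - ALPHA) ** N_STUDIES  # ~0.64
print(f"analytic: {analytic:.3f}, simulated: {simulated:.3f}")
```

So a lab that runs 20 null studies and publishes only the significant one succeeds about 64% of the time, which is why selective reporting and outright fabrication end up with the same effect on the literature.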

  6. The best solution would seem to be to take science out of academia. 
     
    Politics is, as usual, the mind-killer.

  7. David B., Of course there is a Journal of Negative Results: http://www.jnr-eeb.org/index.php/jnr 
     
    I guess you knew that but maybe not all your readers.

  8. I agree with the spirit of this blog, but I disagree that the sample size was “large.” One problem with studies like these (and I won’t focus on this particular study) is that lots and lots of loci are examined but not reported. Type 1 error rate is inflated without proper adjustment. The big teams doing GWAS are finding tiny, if any, replicable effects, and the largest attempt to replicate candidate genes for schizophrenia (based upon past positive findings and theoretical relevance) failed to replicate any. Not a single one.
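The Type 1 inflation the commenter describes compounds very quickly with the number of loci tested. A hedged illustration (the locus count here is hypothetical, not a figure from any particular study), showing the uncorrected familywise error rate next to a standard Bonferroni adjustment:

```python
ALPHA = 0.05
N_LOCI = 1000  # hypothetical number of loci examined; illustrative only

# If every locus is tested at the nominal 0.05 level and the tests are
# independent, a false positive somewhere is all but guaranteed.
fwer_uncorrected = 1 - (1 - ALPHA) ** N_LOCI
print(f"uncorrected FWER: {fwer_uncorrected:.4f}")  # ~1.0000

# Bonferroni adjustment: test each locus at ALPHA / N_LOCI, which caps
# the familywise error rate at roughly the nominal level.
fwer_bonferroni = 1 - (1 - ALPHA / N_LOCI) ** N_LOCI
print(f"Bonferroni FWER:  {fwer_bonferroni:.4f}")  # ~0.0488
```

This is why unreported multiple testing matters: examining a thousand loci and reporting only the "significant" ones at an uncorrected threshold is statistically indistinguishable from noise-mining.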

  9. Those are good points MB, but I think 115 people is quite a large sample for the specific purposes of this experiment. 
     
    If there had been an effect of the polymorphism on cognition, they would probably have found it. Whereas if they’d had n=20, their results would have been basically useless.
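A back-of-the-envelope power calculation supports that intuition. This sketch assumes a roughly even genotype split of the 115 subjects and a "medium" standardized effect size of d = 0.5 (both illustrative assumptions, not figures from the study), using a normal approximation to the two-sample test:

```python
from statistics import NormalDist

norm = NormalDist()

def two_sample_power(d, n1, n2, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a
    standardized effect size d, via the normal approximation."""
    z_effect = d * (n1 * n2 / (n1 + n2)) ** 0.5
    z_crit = norm.inv_cdf(1 - alpha / 2)
    # Chance of crossing the significance threshold in either direction.
    return norm.cdf(z_effect - z_crit) + norm.cdf(-z_effect - z_crit)

# Hypothetical medium effect, d = 0.5, with an assumed even split:
print(f"n = 115 (split 58/57): power ~ {two_sample_power(0.5, 58, 57):.2f}")
print(f"n = 20  (split 10/10): power ~ {two_sample_power(0.5, 10, 10):.2f}")
```

Under these assumptions the larger sample has roughly 76% power against about 20% for n = 20, which is the commenter's point: a null result at n = 115 is informative, while one at n = 20 would be nearly useless.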
