Basa beats catfish

In False Economy: A Surprising Economic History of the World* there’s a chapter covering “The Catfish War” between Vietnam and the United States in the early 2000s. Basically, Vietnamese catfish were cheaper than American catfish, so American farmers got the government to forbid the Vietnamese from labeling their fish as catfish (it’s a different species from the American variant). Vietnamese catfish are now termed “basa” in the United States. Interestingly, this may have backfired: the author of False Economy claims that many American consumers ended up thinking basa were an exotic premium import. And here’s another inconvenient fact: in blind taste tests people prefer Vietnamese catfish to American catfish.

I only mention this because I’ve been buying basa for a few weeks now. Today the supermarket was out of basa but did have American catfish (in the spot where the basa used to be), so I bought it, figuring catfish is catfish. Well…American catfish kind of sucks compared to basa. I don’t find catfish meat repellent or anything, but basa has a much nicer flavor and smell than American catfish, and it’s easier to cook. And I don’t have a subtle palate; I use a lot of hot sauce, so I can tolerate a wide range of flavors. There just isn’t any comparison. Perhaps it was a bad batch of catfish, but thinking back to the catfish sandwiches and the like I’ve had in New Orleans and Houston, I believe this was typical American catfish. Wikipedia says that people prefer basa to American catfish 3:1, but I would have expected 10:1.

* It’s a well-written work which illustrates general economic principles with concrete contemporary examples, but it is far inferior to Rondo Cameron’s A Concise Economic History of the World in terms of factual density.

Michael Jackson: the king is dead

I was out and about doing errands when a friend called to tell me that Michael Jackson had died. My first reaction was to utter an expletive. I wasn’t sad, and I didn’t think it was a false report; I simply didn’t know how to react. It’s as if a friend called to tell you that the Rocky Mountains had disappeared. The very configuration of the pop-culture firmament has shifted before our eyes. Jackson’s music career had long since waned in the United States; for most of my lifetime he’d been more of a cultural than a musical phenomenon. I didn’t think of Michael Jackson very often, but I always assumed he’d be around as a background condition. I noticed that even the professional sidewalk signature gatherers were departing from their scripts, chatting up strangers about Jackson’s death instead of the environment or whatever they normally talk up.

Lamarckism: Lessons from History

In my recent post on Darwin’s mechanisms of evolution I was rather dismissive about the Inheritance of Acquired Characteristics (IAC), commonly known as ‘Lamarckism’. Darwin himself believed in the existence of IAC but gave it a relatively minor role in evolution. In comments on my post it was pointed out that there has recently been some revival of interest in IAC in the form of ‘transgenerational epigenetics’. For a recent review see here. [Note added: as originally posted I somehow inserted the wrong link. Hope this one is now correct.]

Even if all of these reports are true, they don’t (yet) amount to more than a small tweaking of evolutionary theory. The main examples seem more like congenital syphilis than ‘Lamarckism’ in the traditional sense: an animal is exposed to a substance that happens to affect the germ cells as well as the rest of the body. No big deal. But I think biologists should be cautious about accepting such reports without clear independent replication, for two reasons. First, because ‘extraordinary claims need extraordinary evidence’. Second, because there is a long and dreary history of unsubstantiated, unreliable, and downright fraudulent claims about IAC. As much of this history is now generally forgotten, it may be useful to recall some of the ‘highlights’.

CHARLES-EDOUARD BROWN-SEQUARD

Brown-Sequard was a distinguished if eccentric French physiologist. He achieved scientific fame for the discovery of what are now known as hormones, and notoriety when he claimed that life could be rejuvenated by the injection of crushed animals’ testicles. At least he was willing to try it on himself. But the present point of interest is in his neurological experiments. For many years he experimented on thousands of guinea pigs, mainly by severing various nerves. He claimed that the untreated offspring of the experimental subjects showed certain symptoms, such as a liability to epileptic fits, which resembled those of the parents. Charles Darwin accepted the evidence as proving that IAC was at least possible. Darwin’s younger friend George Romanes, a supporter of IAC, spent years trying to replicate the experiments, and claimed some slight success, but admitted that on the whole the results were negative. Brown-Sequard’s results have never been conclusively explained, but unexplained results are not unusual in science. The Germans have the useful term ‘Dreckeffekt’ for this kind of thing.

W. L. TOWER

[Apology: I first gave the name as ‘William Edward Tower’, but on checking my source again I find the initials are ‘W. L.’. Apologies if anyone has wasted time following up the incorrect name.]

Tower was an American entomologist who claimed in the early 1900s to have produced inheritable mutations in beetles by changes in temperature and humidity. The pioneer geneticist William Bateson questioned Tower’s results and became increasingly critical, hinting at fraud. Tower admitted that there were errors in his reports, and claimed that his original records had been destroyed by a fire in his greenhouse. Hmm.

PAUL KAMMERER

Kammerer’s experiments on the Midwife Toad, made famous in a sympathetic book by Arthur Koestler, led to tragedy when it was discovered that the specimens had been artificially tampered with, and Kammerer committed suicide. There was certainly skullduggery by someone, though whether by Kammerer himself remains controversial.

JOHN HESLOP-HARRISON

Heslop-Harrison was an English botanist and entomologist who claimed to have induced heritable melanism in insects by chemical treatments. His claims were questioned by Haldane, Fisher and others. There must be suspicion of fraud, as Heslop-Harrison later became notorious for the unconnected allegation that he (literally) planted evidence on the Scottish island of Rhum to support his botanical theories. [NB: Heslop-Harrison must not be confused with his still-living son of the same name, also a botanist.]

WILLIAM McDOUGALL

McDougall was a leading psychologist in the first half of the 20th century. He carried out a long series of experiments on rats which seemed to show that successive generations became better and better at learning mazes. I don’t think anyone has suggested fraud, but subsequent attempts at replication pointed out a major defect in his methodology: the absence of a control group. When a control group whose ancestors had not been trained was included, it showed much the same pattern of improvement – or non-improvement – as the experimental subjects themselves (see here). The improvement therefore seems to have been due to some other factor or factors, such as better laboratory or cage conditions, and not to IAC.

TROFIM LYSENKO

No need to comment.

EDWARD STEELE

Edward Steele is an Australian immunologist who claimed in the 1970s to have produced inheritable immunological responses in mice. This led to a predictable spate of ‘Darwin was wrong’ and ‘Back to Lamarck’ news reports. Less publicity was given to at least three independent replication attempts with negative results.

The moral is – oh, draw your own.

Google books is great!

Mark Gimein defends Google Books over at The Big Money. New technology can be misused, but in general I tend to agree with Gimein. Along with Amazon’s Search Inside feature, Google Books is an excellent resource for looking up and cross-referencing obscure facts and data. With Google Translate you can even get a good sense of some books in languages you don’t know (I generally use this to make sure I understand the legend of a table or figure).

Monopoly allows innovation to flourish

Updated

This may be old hat for some readers, but it’s worth reviewing, and I have some good new data to add. The motivation is the idea, common among monopoly-haters, that once a company comes to dominate its market it will have no incentive to change anything — after all, it has already captured most of the audience. The response is that industries where invention is part of a company’s raison d’être attract dynamic people, including the executives.

And such people do not rest on their laurels once they’re free from competition — on the contrary, they exclaim, “FINALLY, we can breathe free and get around to all those weird projects we’d thought of, and not have to pander to the lowest common denominator just to stay afloat!” Of course, only some of those high-risk projects will become the next big thing, but a large number of trials is required to find highly improbable things. When companies are fighting each other tooth-and-nail, a single bad decision could sink them for good, which makes companies in highly competitive situations much more risk-averse. Conversely, when you control the market, you can make all sorts of investments that go nowhere and still survive — and it is this large number of attempts that boosts the expected number of successes.
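To put a toy number on that last point (these figures are purely illustrative, not data): if each high-risk project independently pans out with probability $p$, then the chance that at least one of $n$ attempts succeeds is

$$1 - (1 - p)^n.$$

With $p = 0.02$, a monopolist that can afford $n = 50$ long shots has about a 64% chance of at least one hit, while a firm scraping by on $n = 5$ has under 10%. The per-project odds are identical; only the number of trials differs.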

With that said, let’s review just a little bit of history impressionistically, and then turn to a new dataset that confirms the qualitative picture.

Taking only a whirlwind tour through the pre-Information Age period, we’ll just note that most major inventions could not have been born if the inventor had not been shielded from competitive market forces — usually by the protection of a monopolistic and rich political entity. Royal patronage is one example. And before the education bubble, there weren’t very many large research universities in a given country where you could carry out research — for example, Oxford, Cambridge, and… well, that’s about it, stretching back 900 years. They don’t call it “the Ivory Tower” for nothing.

Looking a bit more at recent history, which is most relevant to any present debate we may have about the pros and cons of monopolies, just check out the Wikipedia article on Bell Labs, the research giant of AT&T that many considered the true Ivory Tower during its heyday, from roughly the 1940s through the early 1980s. From theoretical milestones such as information theory and the mathematical theory of cryptography, to concrete things like transistors, lasers, and cell phones, they invented the bulk of all the really cool shit since WWII. They were sued for antitrust violations in 1974, lost in 1982, and were broken up by 1984 or ’85. Notice that since then, not much has come out — not just from Bell Labs, but at all.

The same holds true for the Department of Defense, which invented the modern airliner and the internet, though it made large theoretical contributions too. For instance, the groundwork for information criteria — one of the biggest ideas to arise in modern statistics, which try to measure the discrepancy between our scientific models and reality — was laid by two mathematicians working for the National Security Agency (Kullback and Leibler). And despite all the crowing you hear about the Military-Industrial Complex, only a pathetic share of federal outlays actually goes to defense (which includes R&D) — most goes to human resources, AKA bureaucracy. Moreover, this trend goes back at least to the late 1960s. Here is a graph of the share of federal outlays going to defense vs. human resources (from here, Table 3.1; 2008 and beyond are estimates):


There are artificial peaks during WWII and the Korean War, though the defense share doesn’t decay very much during the 1950s and ’60s, the height of the Cold War and the Vietnam War. Since roughly 1968, though, the chunk going to actual defense has plummeted pretty steadily. This downsizing of the state began long before Thatcher and Reagan were elected — apparently, they were jumping on a bandwagon that had already gained plenty of momentum. The key point is that the state began to give up its quasi-monopolistic role in doling out R&D dollars.
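Incidentally, since Kullback and Leibler came up a couple of paragraphs back, here is the textbook definition of their divergence, just to make “discrepancy between our scientific models and reality” concrete (this is standard material, not anything specific to their NSA work):

$$ D_{\mathrm{KL}}(p \,\|\, q) = \sum_x p(x) \log \frac{p(x)}{q(x)}, $$

where $p$ is the true distribution and $q$ is the model; it is zero exactly when the model matches reality. Akaike’s information criterion, AIC $= 2k - 2\ln\hat{L}$ for a model with $k$ parameters and maximized likelihood $\hat{L}$, is derived (up to constants) as an estimate of this expected discrepancy, which is why minimizing AIC is a principled way to pick among models.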

Update: I forgot! There is a finer-grained category called “General science, space, and technology,” which is probably the R&D that we care most about for the present purposes. Here is a graph of the percent of all Defense outlays that went to this category:


This picture is even clearer than that of overall defense spending. There’s a surge from the late 1950s up to 1966, a sharp drop until 1975, and a fairly steady level from then until now. This doesn’t alter the picture much, but removes some of the non-science-related noise from the signal. [End of update]

Putting together these two major sources of innovation — Bell Labs and the U.S. Defense Department — if our hypothesis is right, we should expect lots of major inventions during the 1950s and ’60s, even a decent amount during the 1940s and the 1970s, but virtually squat from the mid-1980s to the present. This reflects the time periods when they were more monopolistic vs. heavily downsized. What data can we use to test this?

Popular Mechanics just released a neat little book called Big Ideas: 100 Modern Inventions That Have Changed Our World. They include roughly 10 items in each of 10 categories: computers, leisure, communication, biology, convenience, medicine, transportation, building / manufacturing, household, and scientific research. They were arrived at by a group of around 20 people working at museums and universities. You can always quibble with these lists, but the really obvious entries are unlikely to get left out. There is no larger commentary in the book — just a narrow description of how each invention came to be — so it was not conceived with any particular hypothesis about invention in mind. They begin with the transistor in 1947 and go up to the present.

Pooling inventions across all categories, here is a graph of when these 100 big ideas were invented (using 5-year intervals):


What do you know? It’s exactly what we’d expected. The only outliers are the late-1990s data points, but most of those seem to reflect the authors’ grasping at straws to find anything in the past quarter-century worth mentioning. For example, they already included Sony’s Walkman (1979), but they also included the MP3 player (late 1990s) — while leaving out Sony’s Discman (1984), an earlier portable player of digitally stored music. And remember, each category only gets about 10 entries to cover 60 years. Also, portable e-mail gets an entry even though “regular” e-mail is already included. And I don’t know what Prozac (1995) is doing in the list of breakthroughs in medicine. Plus they included the hybrid electric car (1997) — it’s not even fully electric!

Still, some of the recent ones are deserved, such as cloning a sheep and sequencing the human genome. Overall, though, the pattern is pretty clear — we haven’t invented jackshit for the past 30 years. With the two main monopolistic Ivory Towers torn down — one private and one public — it’s no surprise to see innovation at a historic low. Indeed, the last entries in the building / manufacturing and household categories date back to 1969 and 1974, respectively.
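By the way, if anyone wants to reproduce this kind of 5-year tally, here is a minimal sketch; the years in the list are placeholders rather than the book’s actual dates, so you would need to type in the 100 entries from Big Ideas yourself.

```python
from collections import Counter

# Placeholder invention years -- NOT the book's actual list. Replace with
# the 100 dates from Big Ideas to reproduce the tally above.
invention_years = [1947, 1958, 1962, 1969, 1973, 1979, 1997, 2001]

def five_year_bin(year):
    """Map a year to the start of its 5-year interval, e.g. 1947 -> 1945."""
    return year - (year % 5)

counts = Counter(five_year_bin(y) for y in invention_years)

# Crude text histogram: one '#' per invention in each interval.
for start in sorted(counts):
    print(f"{start}-{start + 4}: {'#' * counts[start]}")
```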

On the plus side, Microsoft and Google are pretty monopolistic, and they’ve been delivering cool new stuff at low cost (often for free — and good free, not “home brew” free). But they’re nowhere near as large as Bell Labs or the DoD was back in the good ol’ days. I’m sure that once our elected leaders reflect on the reality of invention, they’ll do the right thing and pump more funds into ballooning the state, as well as encouraging Microsoft, Google, and Verizon to merge into the next incarnation of monopoly-era AT&T.

Maybe then we’ll get those fly-to-the-moon cars that we’ve been expecting for so long. I mean goddamn, it’s almost 2015 and we still don’t have a hoverboard.

Duffy and malaria in baboons?

So after my whingeing about the quality of genetic associations found through candidate gene studies, it’s only appropriate that I point to a fun candidate gene association study published this week in Nature.

The interesting point here is that the organism isn’t humans but rather baboons, and the phenotype is susceptibility to malaria. Briefly, the authors find that a SNP in the promoter of the Duffy locus (recall that a mutation abolishing the expression of Duffy in humans leads to protection from Plasmodium vivax and is one of the best-characterized instances of recent positive selection in our species) appears to confer protection from a malaria-like disease in baboons. The authors seem to really, really want this polymorphism to also be under selection in baboons (to complete the parallel with humans), but they can’t bring themselves to say the evidence is anything more than “suggestive” (and to be honest, even that may be wishful thinking).

So is the association true? The study suffers from the same problem of candidate gene studies mentioned before, in that it’s small and the evidence for an association is fairly weak. If I had to bet, I’d guess no, the association isn’t real. But collecting and genotyping a large sample of baboons is simply not feasible at this point (if it ever will be), so this is what’s possible, and it’s a kind of fun, suggestive study that would be really cool if it ends up being true.
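To put rough numbers behind “small and weak,” here is a back-of-the-envelope power calculation for comparing infection rates between two genotype classes. Everything below, including the sample sizes and the effect size, is made up for illustration and is not taken from the baboon paper.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_power(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test."""
    p_bar = (p1 + p2) / 2
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)
    se_alt = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    z_crit = norm.ppf(1 - alpha / 2)
    diff = abs(p1 - p2)
    return norm.cdf((diff - z_crit * se_null) / se_alt)

# Hypothetical numbers: the protective genotype cuts infection from 40% to 25%.
print(two_proportion_power(0.40, 0.25, n_per_group=30))   # low power, around 0.2
print(two_proportion_power(0.40, 0.25, n_per_group=300))  # high power, above 0.9
```

With only a few dozen animals per genotype class, even a sizeable protective effect is detected only a minority of the time, which is part of why weakly supported candidate gene hits so often fail to replicate.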