
Genomic ancestry tests are not cons, part 1

As someone who is part of the personal genomics sector, I keep track of media representations of the industry very closely. There is the good and the bad, some of it justified and some not.

But there is one aspect which I need to weigh in on because it is close to my interests and professional focus, and it is one where I have a lot of experience: ancestry inference on human data.

Periodically I see in my Twitter timeline an article shared by a biologist which is filled with misrepresentations, confusions, and even falsehoods. Of course, some of the criticisms are correct. The problem is that when you mix truth and falsehood, or sober analysis and critique with sensationalism, the whole product is debased.

I’m going to address some of the most basic errors and misimpressions. This post is “part 1” because I might have follow-ups; I feel like this is a situation where I have to put out fires periodically, as people write about things they don’t know about, and those articles then get widely shared to a credulous public.

First, if an article mentions STRs or microsatellites or a test with fewer than 1,000 markers in a direct to consumer genomic context, ignore the article. This is like a piece where the author dismisses air travel because it’s noisy due to propeller-driven planes. Propeller-driven planes are a very small niche. Similarly, the major direct to consumer firms, which have sold close to ~10 million kits, do not use STRs or microsatellites, very much a technology of the 1990s and 2000s. Any mention of STRs or microsatellites or low-density analyses indicates that the journalist didn’t do their homework, or simply doesn’t care to be accurate.

Second, there is constant harping on the fact that different companies give different results. This is because tests don’t really give results so much as interpretations. The raw results consist of your genotype. On the major SNP-chip platforms this will be a file on the order of 20 MB. The companies could provide this as the product, but most humans have difficulty grokking over 100,000 variables.

So what’s the solution? The same that scientists have been using for decades: reduce the variation into a much smaller set of elements which are human digestible, often through tables or visualization.
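To make that concrete, here is a minimal sketch of the kind of dimensionality reduction involved, assuming a genotype matrix already coded as 0/1/2 counts of the alternate allele. The file names are hypothetical, and this is an illustration of the general technique, not any company's actual pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical inputs: rows = individuals, columns = SNPs,
# entries = 0/1/2 copies of the alternate allele, missing values already imputed.
genotypes = np.load("merged_genotypes.npy")                # shape (n_individuals, n_snps)
labels = np.load("sample_labels.npy", allow_pickle=True)   # one label per individual

# Standardize each SNP by its allele frequency so common and rare
# variants contribute on a comparable scale.
freqs = genotypes.mean(axis=0) / 2.0
standardized = (genotypes - 2.0 * freqs) / np.sqrt(2.0 * freqs * (1.0 - freqs) + 1e-9)

# Collapse hundreds of thousands of markers into a handful of axes.
pcs = PCA(n_components=4).fit_transform(standardized)

# Three genotype files from the same person should land at nearly the same point.
for label, (pc1, pc2) in zip(labels, pcs[:, :2]):
    print(f"{label}\tPC1={pc1:.3f}\tPC2={pc2:.3f}")
```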

For example, consider a raw data set consisting of my three genotypes from 23andMe, Ancestry, and Family Tree DNA. Merged with public data, these cover ~201,000 single nucleotide markers. You can download the plink-formatted data yourself and look at it. The PCA below shows where my three genotypes are positioned, next to the Tamil South Asians. Observe that my genotypes are basically at the same point:

The differences between the different companies have nothing to do with the raw data, because with hundreds of thousands of markers they capture enough of the relevant between-population differences in my genome (do you need to flip a coin 1 million times after you’ve flipped it 100,000 times to get a sense of whether it is fair?). The law of large numbers is kicking in at this point, with genotyping errors on the order of 0.5% not being sufficient to differentiate the files.

Sure enough, the raw genotype files from the three services match pretty closely: 99.99% for Family Tree DNA and 23andMe, 99.7% for Family Tree DNA and Ancestry, and 99.6% for Ancestry and 23andMe. For whatever reason Ancestry is the outlier here. My personal experience looking at genotype data from Illumina chips is that most of it is pretty high quality, but it’s not shocking to see instances with 0.5% no-call or bad-call rates. For phylogenetic purposes, if the errors are not systematic it’s not a big deal.
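For readers curious what that concordance check looks like in practice, here is a rough sketch that compares two raw-data exports at their shared markers. The file names and the exact column layout are assumptions for illustration, not the vendors' documented formats:

```python
import pandas as pd

# Hypothetical raw-data exports: tab-separated text with columns
# rsid, chromosome, position, genotype (e.g. "AG"), no-calls written as "--".
a = pd.read_csv("razib_23andme.txt", sep="\t", comment="#",
                names=["rsid", "chrom", "pos", "genotype"])
b = pd.read_csv("razib_ftdna.txt", sep="\t", comment="#",
                names=["rsid", "chrom", "pos", "genotype"])

# Restrict to markers present in both files.
merged = a.merge(b, on="rsid", suffixes=("_a", "_b"))

# Drop sites that either file failed to call.
called = merged[(merged["genotype_a"] != "--") & (merged["genotype_b"] != "--")]

# Treat "AG" and "GA" as the same unordered genotype before comparing.
same = called["genotype_a"].apply(sorted) == called["genotype_b"].apply(sorted)
print(f"Concordance at {len(called)} shared, called SNPs: {same.mean():.4%}")
```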

The identity to other populations is also consistent. About 74% to Tamils. 72-73% to other Eurasians. 71% to the Surui, an isolated Amazonian group. And 69% to Yoruba. Observe that this recapitulates what we know of the phylogenetic history of the population I am from, Bengalis. The more distinct two populations’ evolutionary histories, the greater the genetic divergence between them. This is not rocket science. This gets to the point that the raw results make a lot more sense when you integrate and synthesize them with other information you have. Most customers are not going into the process of getting a personal genomic ancestry test blind…but that presents pitfalls as well as opportunities.

But most people do not receive statistics of the form:

                  SNP Identity
You vs. Yoruba    0.69
You vs. German    0.72
You vs. Japanese  0.73
You vs. Tamil     0.74

Mind you, this is informative. It’s basically saying I am most genetically distant from Yoruba and closest in sequence to Tamils. But this is somewhat thin gruel for most people. Consider the plot below, which is a zoom-in of PC 2 vs. PC 4. I am blue, the purple/pink are Tamils, and the population at the bottom left is East Asian.

If you look at enough PCA plots, it becomes rather clear that I am shifted toward East Asians in comparison to most other South Asians. The high identity that I have with the Japanese and Dai is due in part to the fact that I have relatively recent admixture from an East Asian population, above and beyond what is typical in South Asians. Remember, all three of my genotypes sit at basically the same spot on PCA plots. That’s because they’re basically the same. Genotyping error is rather low.

How do we summarize this sort of information for a regular person? The standard method today is giving people a set of proportions with specific population labels. Why? People seem to understand population labels and proportions, but can be confused by PCA plots. Additionally, the methods that give out populations and proportions are often better than PCA at capturing pulse admixture events relatively recent in time, and for most consumers of ancestry services (i.e., Americans) this is an area they are particularly focused on.

An easy way to make a person’s genetic variation comprehensible to the general public is to model it as a mixture of populations that they already know of. So consider the populations above in the plink file. I ran ADMIXTURE in supervised mode on my three genotypes, progressively removing populations. The results are below.

                 Dai  Druze  German  Japanese  Papuan  Sardinian  Surui  Tamil  Yoruba
Razib 23andMe    11%  3%     8%      4%        1%      0%         1%     73%    1%
Razib Ancestry   10%  2%     8%      4%        1%      0%         1%     73%    1%
Razib FTDNA      11%  2%     8%      3%        1%      0%         1%     72%    1%

                 Dai  Druze  German  Japanese  Papuan  Sardinian  Surui  Tamil
Razib 23andMe    11%  3%     8%      4%        1%      0%         1%     73%
Razib Ancestry   10%  3%     8%      4%        1%      0%         1%     74%
Razib FTDNA      11%  3%     8%      3%        1%      0%         1%     73%

                 Dai  Druze  Japanese  Papuan  Surui  Tamil
Razib 23andMe    10%  9%     4%        1%      1%     74%
Razib Ancestry   10%  9%     4%        1%      1%     75%
Razib FTDNA      11%  9%     4%        1%      1%     74%

                 Dai  Japanese  Surui  Tamil
Razib 23andMe    11%  4%        1%     84%
Razib Ancestry   10%  4%        1%     85%
Razib FTDNA      11%  3%        1%     84%

Please observe again that they are broadly congruent. These methods have a stochastic element, so there is some noise baked into the cake, but with 200,000+ markers and a robust number of reference populations the results come out essentially the same for all three genotypes (also, 23andMe and Family Tree DNA seem to correlate a bit more, which makes sense since those two genotypes are more similar to each other than they are to Ancestry).

Observe that until I remove all other West Eurasian populations, the Tamil fraction in my putative ancestry is rather consistent. Why? Because my ancestry is mostly Tamil-like, but social and historical evidence would point to the likelihood of some exogenous Indo-Aryan component. Additionally, since very little of my ancestry could be modeled as West African, removing that population had almost no impact.

When there were three West Eurasian populations, Germans, Druze, and Sardinians, the rank order was in that sequence. When Germans and Sardinians were removed, the Druze picked up most of that ancestral component. This is a supervised method, so I’m assigning the empirical populations as reified clusters which can be used to reconstitute the variation you see in my own genotype. No matter what I put into the reference data, the method tries its best to assign proportions to populations.
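ADMIXTURE itself fits a binomial likelihood, but the "assign proportions no matter what" behavior can be conveyed with a much simpler, hedged sketch: treat the target's allele dosages as a non-negative mixture of reference-population allele frequencies. The population labels and frequencies below are made up purely for illustration:

```python
import numpy as np
from scipy.optimize import nnls

# Made-up alternate-allele frequencies at a few SNPs for three reference
# populations; a real analysis would use 100,000+ markers.
ref_freqs = np.array([
    [0.10, 0.80, 0.30, 0.55, 0.20],   # "Tamil"
    [0.15, 0.70, 0.60, 0.40, 0.35],   # "Dai"
    [0.50, 0.20, 0.10, 0.90, 0.05],   # "Yoruba"
])
pops = ["Tamil", "Dai", "Yoruba"]

# The target individual's genotype expressed as allele dosage / 2 at the same SNPs.
target = np.array([0.12, 0.78, 0.36, 0.52, 0.23])

# Non-negative least squares: how much of each reference best reconstructs the target?
weights, _ = nnls(ref_freqs.T, target)
proportions = weights / weights.sum()
for pop, p in zip(pops, proportions):
    print(f"{pop}: {p:.1%}")

# Refit after dropping a reference population and the remaining ones absorb its
# share, which is why the labels shift even though the genotype never changes.
```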

The question then becomes one of the subtle choices one makes to obtain the most informative inferences for the customer. These are not always matters of different results in terms of accuracy or precision, but often of presentation. If West Eurasian populations are removed entirely, my Tamil fraction inflates, because Tamils are then the closest to the West Eurasian populations among those left in the data. In contrast, the East Asian fraction remains the same because I’ve left the two proxy populations in the data (I rigged the die here because I know I have Tibeto-Burman admixture, which is a combination of Northeast and Southeast Asian).

Let’s do something different. I’m going to swap out the West Eurasian populations with equivalents.

                 Armenians  Dai  French_Basque  Japanese  Mandenka  Surui  Sweden  Tamil
Razib 23andMe    6%         11%  0%             4%        1%        1%     5%      72%
Razib Ancestry   5%         11%  0%             4%        1%        1%     5%      73%
Razib FTDNA      6%         11%  0%             4%        1%        1%     5%      72%

                 German  Papuan  Yoruba
Razib 23andMe    68%     20%     13%
Razib Ancestry   68%     20%     13%
Razib FTDNA      68%     20%     13%

                 French_Basque  Tamil
Razib 23andMe    8%             92%
Razib Ancestry   7%             93%
Razib FTDNA      8%             92%

                 Tamil  Yoruba
Razib 23andMe    97%    3%
Razib Ancestry   97%    3%
Razib FTDNA      97%    3%

I have no ancestry from the French Basque, but I do have ancestry from Armenians and Swedes in this model. Why? If you keep track of the most recent population genomics literature, this all makes sense. But if you don’t, well, it’s harder to unpack. This is part of the problem with these sorts of tests: how to make the results comprehensible to the public while maintaining fidelity to the latest research.

This is not always easy, and differences between companies in terms of interpretation are not invidious, as some of the press reports would have you think, but a matter of difficult choices and trade-offs one needs to make to give value to customers. True, this could all be ironed out if there were a ministry of genetic interpretation and a rectification of names in relation to population clusters, but right now there isn’t. This allows for brand differentiation, but it also engenders confusion.

In most of the models with a good number of populations, my Tamil ancestry is in the low 70s. Notice then that some of these results are relatively robust to the populations one specifies. Some of the patterns are so striking and clear that one would have to work really hard to iron them out and mask them in interpretation. But what happens when I remove Tamils and include populations I’m only distantly related to? This is a ridiculous model, but the algorithm tries its best. My affinity is greatest to Germans, both because of genuinely shared ancestry, and because Papuans are a poor proxy for the rest of my ancestry given their relatively high drift from other East Eurasians and their Denisovan admixture. But both Papuan and Yoruba ancestry are assigned because I’m clearly not 100% German, and I share alleles with both of these populations. In models where there are not enough populations to “soak up” an individual’s variation, but you include Africans, it is not uncommon for African ancestry to show up at low fractions. If you take Europeans, Africans, and East Asians, and force just two clusters out of this mix, then Europeans are invariably modeled as a mix of Africans and East Asians, with greater affinity to the latter.

Even when you model my ancestry as only Tamil and Yoruba, you see that there is a Yoruba residual. I have too much genetic variation from groups not closely related to Tamils for that residual to be eliminated.

Just adding a few populations fixes this problem:

                 Dai  Tamil  Yoruba
Razib 23andMe    14%  83%    2%
Razib Ancestry   14%  84%    2%
Razib FTDNA      14%  83%    2%

                 Dai  German  Tamil  Yoruba
Razib 23andMe    15%  10%     74%    1%
Razib Ancestry   14%  9%      75%    1%
Razib FTDNA      15%  10%     74%    1%

Notice how my Tamil fraction is almost the same as when I had included many more reference populations. Why? My ancestral history is complex, like most humans’, but it’s not that complex. The goal for public comprehensibility is to reduce the complexity into digestible units which give insight.

Of course, I could just say: read Inference of Population Structure Using Multilocus Genotype Data. The basic framework for model-based clustering of the sort that is very common in direct to consumer services was laid out in that paper 17 years ago (some companies use machine learning and do local ancestry decomposition across the chromosome, but those frameworks are really an extension of the original logic). But that’s not feasible for most people, including journalists.

Consider this piece at Gizmodo, Why a DNA Test Is Actually a Really Bad Gift. I pretty much disagree with a lot of the privacy concerns, seeing as how I’ve had my public genotype downloadable for seven years. But this portion jumped out at me: “Ancestry tests are based on sound science, but variables in data sets and algorithms mean results are probabilities, not facts, as many people expect.”

Yes, there are probabilities involved. But if a DNA test using the number of markers above tells you that you are 20% Sub-Saharan African and 80% European in ancestry, that estimate comes with the same sort of confidence you have in declaring a coin fair after 100,000 flips. True, you can’t be totally sure after 100,000 flips that you have a fair coin, but you can be pretty confident. With hundreds of thousands of markers, a quantum of 20% Sub-Saharan African ancestry in a person of predominantly European heritage is an inference made with a degree of confidence that verges upon certitude, give or take a percentage point or so.
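A back-of-the-envelope sketch of why: if an ancestry proportion were estimated from n effectively independent markers, the binomial standard error shrinks with the square root of n. The marker count below is illustrative; real markers are correlated, so the effective n is smaller:

```python
import math

p = 0.20       # hypothetical Sub-Saharan African fraction
n = 100_000    # illustrative number of effectively independent markers

se = math.sqrt(p * (1 - p) / n)
print(f"standard error ~ {se:.4f}")                                   # ~0.0013
print(f"95% interval ~ {p - 1.96 * se:.3f} to {p + 1.96 * se:.3f}")   # ~0.197 to 0.202
```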

As for the idea that they are not “facts”: I don’t even know what that means in this context, and I doubt the journalist does either. Which is one of my main gripes with these sorts of stories: unless they talk to a small subset of scientists, the journalists just don’t know what they are talking about when it comes to the statistical genetics.

Finally, there is the issue of what it even means to be some percentage of population X, Y, or Z. Even many biologists routinely reify population clusters, confusing them with something real and concrete in a Platonic sense. But deep down, when we think about it, we all need to recall that we’re collapsing genealogies of many different segments of DNA into broad, coarse summaries when we say “population.” And populations themselves are by their nature often somewhat open and subject to blending and flow with others. A population genomic understanding of structure does not bring Platonic facts into clarity, but it gives one instruments and tools to smoke out historical insight.

The truth, in this case, is not a thing in and of itself, but a dynamic which refines our intuitions of a fundamentally alien process of Mendelian assortment and segregation.

8 thoughts on “Genomic ancestry tests are not cons, part 1”

  1. STRs are still used for genetic genealogy for Y-DNA testing. The Y-DNA database at Family Tree DNA is based on Y-STR matching. The standard entry-level Y-DNA test is a 37-marker Y-STR test. Next generation Y-chromosome sequencing tests report both SNPs and STRs but are too expensive for the average consumer. Some of the microarrays include Y-SNPs but there are not enough to be genealogically informative. Unfortunately the recent article you’re referring to conflated Y-STRs with autosomal SNPs.

    23andMe used to provide customers with PCA plots. The now defunct BritainsDNA also used to provide PCA plots along with Admixture plots. I’m surprised more companies don’t offer these alternative visualisations rather than just giving the bald percentages.

  2. good point re: STRs, though that subtlety often gets lost. re: PCA, lay people get surprisingly confused by them. i’m 99% sure 23andMe got rid of them due to A/B testing. they’re computationally less intensive than model-based clustering….

  3. I think people have unrealistic expectations of what the results will provide. Americans can be seeking definitive answers on the countries of origin of all their ancestors, and these tests obviously can’t achieve that. Because of the ambiguity of the results, the consumer has to become an interpreter of the interpretations. You look at your first admixture table, and you don’t conclude that you had a 5th great-grandparent who was Yoruba due to that 1%…any more than I conclude that my wife’s European grandmother has recent Middle Eastern ancestors despite the 2% of her genome Family Tree DNA labeled as such.

    But there are other cases where even a somewhat informed person is left with more questions than answers. 23andme assigns me 0.3% Sub-Saharan African ancestry. Is this a statistical artifact or something meaningful? At first I dismissed it due to the tiny value and the fact that it only got assigned in 23andme’s speculative mode. But it now gets labeled as such in their conservative mode (presumably due to tweaks in their algorithms). Combined with your September blog post on the subject, now I’m not so sure what to make of it. I’m similarly perplexed by the site’s claim that I have ancestors from Iberia and Scandinavia born in the 1700s when all my genealogical research suggests I’m a WASP through-and-through. If true, it’s tantalizing to ponder the story behind these ancestors. But is it real?

    I lack the skills to further evaluate these interpretations on my own besides running analyses on gedmatch. Though I tried once, I was unable to get ADMIXTURE to run. But at least I have a vague sense of how these results are calculated and what might be done to gain further insight. For most consumers and journalists, I imagine that their lack of understanding can make all of this feel like astrology.

  4. 23andme assigns me 0.3% Sub-Saharan African ancestry. Is this a statistical artifact or something meaningful?

    23andMe has huge data sets. they know their internal patterns about how consistently “0.3%” shows up in people of known pure british provenance.

    Though I tried once, I was unable to get ADMIXTURE to run.

    might do a tutorial again.

  5. I used to do molecular phylogenetics. I want to know how to get the PCA plots … and what sorts of other fun games I can play with my DNA.

    Mind you, I have no experience with Whole Genome Analysis (yet), but I think aligning sequence data is fun!

  6. The only two firms still basing their Y testing around STRs were both founded in 2000, so I think Razib got it right. STRs simply are not accurate compared with SNPs, and can have unacceptably high false positive and indeed false negative rates. The industry has abandoned them.

  7. I spent some time this week talking with a relative about what made her reluctant to get DNA testing and made another family (where the previous generation had an apparently hereditary medical condition) reluctant to get DNA testing.

    The dominant concern was that the prohibition on pre-existing condition discrimination in insurance rates under the ACA would be repealed, and that even if their results were “private,” an insurance company might someday either require you to state whether any DNA test you have taken indicates disease risk (and if so, what), or even require you to submit the results file, if you have one, for them to analyze. Thus, willful blindness was the preferred option, and the knowledge of ancestry, while entertaining, was something they all had well established by other means.

    The outlook might have been different if DNA testing could have enabled a pro-active preventative response to the conditions that they feared they might have based upon family histories of medical conditions, but most of the concerning conditions were ones with no cure and no real prevention options (e.g. early onset Alzheimers and a rare genetic metabolic condition with onset typically in middle age).

  8. Razib,
    You are so fun and accessible to read. Just whiled away 2 hours on your old Discovery pieces and now this article.

    The big 3 commercial DNA testing companies are a jumping off point to real research.
    I understand why they feel the need to dumb things down so much.
    I ordered My Heritage because they were the cheapest means to get my raw data to download.
    Their ancestry results made me snicker since I have been following my family’s genealogy since I was 11. Nearly 2 decades.
    If I was only privy to the last 200 years of my lineage, I would only think I was Welsh, Scottish, and a smattering of Irish.
    But since the research at my disposal goes back 300 years on, I know I also have Dutch (Abrams who converted), Native American, Iberian, etc.
    So 31% of my information was missing from the My Heritage results.

    I’ve gone through all the calculators at Gedmatch, taken a whirl with Stanford’s Genotation site and got an analysis from Dr. Doug Mcdonald.

    They all perfectly coalesced with my paper research. If I had just taken My Heritage as gospel, I would have been left confused and disappointed.
