I don’t talk too much about genomic technology because it changes so fast. Being up-to-date on the latest machines and tools often requires regular deep-dives right now, though I believe at some point technological improvements will plateau as the data returned will be cheap and high quality enough that there won’t be much to gain on the margin.
Of course we’ve already come a long way. Fifteen years ago a “whole human genome” cost on the order of billions of dollars. Today a high quality whole human genome will run you on the order of $1,000. This is fundamentally a technology-driven change, with big metal machines automatically generating reads and powerful computers processing them. One couldn’t have imagined such a scenario 30 years ago because the technology wasn’t there.
I’ve stated before that I don’t think genomics fundamentally alters what we know and understand about evolution. At least so far. But it is a huge change in the domain of medicine. Clearly the human genomicists, especially Francis Collins, overhyped the yield of the technology in relation to healthcare in the 2000s. But with cheap and ubiquitous sequencing we may see the end of Mendelian diseases in our lifetime (through screening, and possibly at some point CRISPR therapy).
This has been driven by technological innovation in the private sector around a few firms. The famous chart showing the massive decline in the cost of genomic sequencing over the past 15 years is due in large part to the successes of Illumina. But Illumina has also had a quasi-monopoly on the field over the past five years (or more), and that shows in the leveling off of the decline in cost. Until the past year….
What gives? Many people believe that Illumina is moving again in part because a genuine challenger is emerging, or at least the flicker of a challenge, in the form of Oxford Nanopore. Oxford Nanopore has been around since 2005, but it really came into the public eye around 2010 or so. Like many tech companies, though, it overpromised in the early years. I remember skeptically listening to a friend in the fall of 2011 talk about how quickly Nanopore was going to change the game…. I didn’t put too much stock in these sorts of presentations to hopeful researchers because I remember Pacific Biosciences making the same sort of pitch to amazed biologists in 2008. Pac Bio is still around, but it has turned out to be a bit player rather than a challenger to Illumina.
But I have to admit that Nanopore has really started to step up its game of late. Probably one of the major things it has accomplished is making us reimagine what sequencing technology should look like. Rather than refrigerators of various sizes, Oxford Nanopore allows us to imagine sequencing technology with a form factor more analogous to a USB thumb drive. The first time I saw a Nanopore machine in the flesh I knew intellectually what I was going to see…but because of my deep intuitions I still overlooked the two Nanopore machines lying on the workbench in front of me.
Despite their amazing form factor, these early Nanopore machines had limited application. They didn’t generate much data, and so were utilized by researchers who worked with smaller genomes. Scientists who worked with bacteria seemed to be using them a lot, for example. Additionally, the machines were error prone, and people were working out their kinks in real time in laboratories (one tech told me early on that they were so small he swore they were affected by ambient vibrations, so he found ways to dampen that source of error).
A new preprint suggests we may be turning the corner though, Nanopore sequencing and assembly of a human genome with ultra-long reads:
Nanopore sequencing is a promising technique for genome sequencing due to its portability, ability to sequence long reads from single molecules, and to simultaneously assay DNA methylation. However until recently nanopore sequencing has been mainly applied to small genomes, due to the limited output attainable. We present nanopore sequencing and assembly of the GM12878 Utah/Ceph human reference genome generated using the Oxford Nanopore MinION and R9.4 version chemistry. We generated 91.2 Gb of sequence data (~30x theoretical coverage) from 39 flowcells. De novo assembly yielded a highly complete and contiguous assembly (NG50 ~3Mb). We observed considerable variability in homopolymeric tract resolution between different basecallers. The data permitted sensitive detection of both large structural variants and epigenetic modifications. Further we developed a new approach exploiting the long-read capability of this system and found that adding an additional 5x-coverage of “ultra-long” reads (read N50 of 99.7kb) more than doubled the assembly contiguity. Modelling the repeat structure of the human genome predicts extraordinarily contiguous assemblies may be possible using nanopore reads alone. Portable de novo sequencing of human genomes may be important for rapid point-of-care diagnosis of rare genetic diseases and cancer, and monitoring of cancer progression. The complete dataset including raw signal is available as an Amazon Web Services Open Dataset at: https://github.com/nanopore-wgs-consortium/NA12878.
30x coverage just means that each base is sampled, on average, 30 times, so that you have a very accurate and precise read on its state. 30x has become the default standard in medical genomics. If Nanopore can do 30x on human genomes at a reasonable cost it won’t be a niche player much longer.
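The arithmetic behind that coverage figure is simple. Here is a quick sketch using the yield reported in the abstract; the ~3.1 Gb haploid genome size is the usual approximation, not a number from the preprint:

```python
# Rough coverage arithmetic for the preprint's reported yield.
GENOME_SIZE = 3.1e9    # approximate haploid human genome size, in bases (assumed)
total_bases = 91.2e9   # sequence yield reported in the abstract, in bases

coverage = total_bases / GENOME_SIZE
print(f"~{coverage:.0f}x coverage")  # on the order of 30x
```

The abstract’s “~30x theoretical coverage” is exactly this ratio: total bases sequenced divided by genome size.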
The read length is important because, last I checked, the human genome still had large holes in it. The typical Illumina machine produces average read lengths in the low hundreds of base pairs. If the genome has large repetitive regions (and the human genome does), you’re never going to span them with such short yardsticks. Additionally, these short reads have to be tiled together when you assemble a genome from raw results, and this is a computationally intensive task. It helps to have a reference genome you can align to as a scaffold. But researchers who don’t work on humans or model organisms may not have a good reference genome, or in many cases any reference genome at all.
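To see why read length matters for repeats, here is a toy sketch with made-up sequences (the sequences, read lengths, and tiling step are all illustrative, not real data): reads shorter than a repetitive tract are indistinguishable from one another, while a single read longer than the tract anchors it to the unique sequence on both sides.

```python
# Toy illustration: a repetitive tract longer than the read length yields
# many identical reads, which an assembler cannot place uniquely.
repeat = "ACGT" * 50                    # a 200 bp repetitive tract (made up)
genome = "TTTTTT" + repeat + "GGGGGG"   # unique flanks around the repeat

def tile_reads(seq, read_len, step):
    """Simulate overlapping fixed-length reads tiled across a sequence."""
    return [seq[i:i + read_len] for i in range(0, len(seq) - read_len + 1, step)]

# Short reads: every 100 bp read drawn from inside the repeat looks the same.
short_reads = tile_reads(genome, read_len=100, step=4)
duplicates = len(short_reads) - len(set(short_reads))
print(duplicates)   # > 0: many interior reads are identical, hence ambiguous

# Long reads: a read longer than the repeat spans it and touches both flanks,
# pinning the repeat's length and position in the assembly.
long_reads = tile_reads(genome, read_len=210, step=1)
print(all("TTT" in r and "GGG" in r for r in long_reads))  # True
```

This is the intuition behind the preprint’s result that adding just 5x of “ultra-long” reads more than doubled assembly contiguity: a few reads that span the repeats resolve ambiguities that no amount of short-read depth can.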
Pac Bio occupies a space where it provides really long reads at a high price point. Most of the time this isn’t necessary, but imagine you work on a disease caused by large repetitive regions: you are likely willing to pay the price that is asked. And because Pac Bio generates very long reads it makes de novo assembly much easier, as your algorithm has to tile together far fewer contiguous sequences, and long sequences are less likely to have many repetitive matches elsewhere in the genome.
But Pac Bio machines are expensive and huge. The abstract above alludes to “Portable de novo sequencing of human genomes.” This is a huge deal. The dream, as whispered by some genomicists I have known, is that at some point in the future biologists would carry portable sequencers producing reads long enough that they could de novo assemble sequences on the spot. A concrete example might be a health inspector checking on the sorts of microbes found on the counter of a restaurant, or a field ecologist sampling various fungi to discover cryptic species.
Obviously this is still a dream. The preprint above makes it clear that to do what they did required a lot of novel techniques and development of new tools. This isn’t beta technology, it’s early alpha. But because it’s 2017 the outlines of the dream are coming into public view.
Citation: Nanopore sequencing and assembly of a human genome with ultra-long reads
Miten Jain, Sergey Koren, Josh Quick, Arthur C Rand, Thomas A Sasani, John R Tyson, Andrew D Beggs, Alexander T Dilthey, Ian T Fiddes, Sunir Malla, Hannah Marriott, Karen H Miga, Tom Nieto, Justin O’Grady, Hugh E Olsen, Brent S Pedersen, Arang Rhie, Hollian Richardson, Aaron Quinlan, Terrance P Snutch, Louise Tee, Benedict Paten, Adam M. Phillippy, Jared T Simpson, Nicholas James Loman, Matthew Loose
bioRxiv 128835; doi: https://doi.org/10.1101/128835