Search speed comparison: naive exact vs. Boyer-Moore vs. k-mer index

Recently, I’ve been working my way through Ben Langmead’s excellent introduction to “Algorithms for DNA sequencing” on Coursera. The class is a fascinating and well-taught intro to DNA short-read alignment and assembly methods.

As part of the course, we implement or modify Python code for several simple matching algorithms, including the “naive exact” matching (NEM) method, the Boyer-Moore (BM) method, and a k-mer index approach.
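For reference, here is a minimal sketch of the naive exact matcher and a simple k-mer index; this is my own simplified version, not the course’s exact code:

    def naive_exact(p, t):
        """Slide p along t one position at a time; report every full match."""
        occurrences = []
        for i in range(len(t) - len(p) + 1):
            if t[i:i + len(p)] == p:
                occurrences.append(i)
        return occurrences

    def build_kmer_index(t, k):
        """Preprocess t once: map every k-mer to the offsets where it occurs."""
        index = {}
        for i in range(len(t) - k + 1):
            index.setdefault(t[i:i + k], []).append(i)
        return index

    def query_kmer_index(p, t, index, k):
        """Use the first k bases of p as a seed, then verify each candidate hit."""
        return [i for i in index.get(p[:k], []) if t[i:i + len(p)] == p]

The online methods rescan T for every query; the index pays a one-time preprocessing cost and then answers each query with a dictionary lookup plus a verification step, which is why it wins so dramatically below.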

I was curious about speed, so I made a figure showing the computational time each approach takes. P and T refer to the lengths of the pattern (the short read to be aligned) and the text (the genome to align against), respectively.

Figure 1. Comparing the speed of the NEM, BM, and K-mer search methods on long and short patterns (P) and texts (T). The y-axis is on a log-scale.

Note that the y-axis is a log scale in units of microseconds.  Right away, it is obvious that k-mer index methods are orders of magnitude faster than ‘online’ methods like NEM and BM.

Also of interest is the fact that as the pattern gets shorter, the advantage of BM’s preprocessing of the pattern shrinks. You can see that shortening the pattern from 30 to 11 characters negates any advantage of BM searching.
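If you want to collect timings like these yourself, something along the following lines works, using the sketches above (BM omitted here for brevity):

    import random
    import timeit

    random.seed(0)
    t = "".join(random.choice("ACGT") for _ in range(1_000_000))  # toy 1 Mb "genome"
    p = t[500_000:500_030]  # a 30 bp pattern guaranteed to occur

    k = 8
    index = build_kmer_index(t, k)  # preprocessing cost paid once, outside the timer

    print("naive:", timeit.timeit(lambda: naive_exact(p, t), number=10))
    print("k-mer:", timeit.timeit(lambda: query_kmer_index(p, t, index, k), number=10))

Note that the index build is deliberately excluded from the timed region; whether that is fair depends on how often you reuse the index against the same text.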

 

Variant annotation and transcript choice

 

Transcript choice between methods

Variant annotation methods do not all behave the same way when choosing transcripts to annotate against. This leads to differing annotation outcomes, which may arise from different logic in the algorithms or from different user-supplied criteria for annotation.

Unfortunately, incorrect annotations or disagreement in annotation outcomes can lead investigators to waste resources tracking down variants of little interest or to miss severe variants of potential clinical significance.

In this first post in a series, I’ll talk briefly about differing outcomes owing to transcript choice when three popular methods (ANNOVAR, VEP, and SnpEff) are applied to a dataset of 81 million variants from the 1000 Genomes Project.

In the figure below, you can see that the lack of concordance owing to transcript choice affects a surprisingly large number of variants.

Figure: annotation discordance among ANNOVAR, VEP, and SnpEff owing to transcript choice.

This disagreement is largely owing to the way intergenic variants are handled: some methods assign them to the nearest gene, others to arbitrary categories like “unknown.”
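To make the comparison concrete, here is a rough sketch of the kind of tally involved. The tab-delimited layout and file names are hypothetical, not the actual output format of any of the three tools:

    import csv
    from collections import Counter

    def load_choices(path):
        """Map each variant (chrom, pos, ref, alt) to the transcript chosen.
        Assumes hypothetical columns: chrom, pos, ref, alt, transcript_id."""
        choices = {}
        with open(path) as handle:
            for row in csv.reader(handle, delimiter="\t"):
                chrom, pos, ref, alt, tx = row[:5]
                choices[(chrom, pos, ref, alt)] = tx
        return choices

    annovar = load_choices("annovar_choices.tsv")  # hypothetical file names
    vep = load_choices("vep_choices.tsv")
    snpeff = load_choices("snpeff_choices.tsv")

    tally = Counter()
    for key in annovar.keys() & vep.keys() & snpeff.keys():
        transcripts = {annovar[key], vep[key], snpeff[key]}
        tally["all agree" if len(transcripts) == 1 else "disagree"] += 1
    print(tally)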

To learn more about this problem and other issues with annotators and concordance between methods, check out our recent paper on bioRxiv. In part two, I’ll talk more about concordance between methods when annotators agree on transcript choice.

Sequencing depth for accurate SNP calling: bcbio case study

Intuitively, it is easy to grasp that the greater the sequencing depth (i.e., the number of reads covering any given position in the genome), the more accurate the calling of SNPs and indels (insertions/deletions). But how much difference does this actually make in the real world? Is 20X coverage dramatically worse than 30X (considered a standard depth for whole genomes)?

To find out, I conducted an experiment with the bcbio pipeline, a bioinformatics pipeline built in Python that allows automated, reproducible analyses on high-performance computing clusters. One feature of bcbio is that it can perform validation surveys using high-confidence consensus calls for reference samples like NA12878 (a Coriell sample characterized by the Genome in a Bottle project).
For NA12878, researchers collated consensus SNP and indel calls from a wide variety of sequencing technologies and calling methods to produce a very high-confidence callset for training other methods or validating a sequencing workflow. bcbio includes these variant calls and can easily be set up to validate them against a sequenced NA12878 genome.

The sequencing depth experiment

I started with a NA12878 genome sequenced to 30X depth. To compare shallower depths, I subsampled the data to generate 20X, 10X, etc. (Note: the data were not subsampled randomly; rather, “slices” were taken from the 30X dataset.) For a 60X datapoint, I combined data from two sequencing runs on both flow cells of a HiSeq 4000 instrument.
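For illustration, a contiguous “slice” of a FASTQ (as opposed to a random subsample) can be taken with something like the sketch below; the file names and read count are hypothetical, and this is one possible mechanism rather than the exact commands used here:

    import gzip
    from itertools import islice

    def slice_fastq(in_path, out_path, fraction, total_reads):
        """Keep the first `fraction` of reads: a contiguous slice,
        not a random subsample (total_reads assumed known up front)."""
        keep_lines = int(total_reads * fraction) * 4  # 4 lines per FASTQ record
        with gzip.open(in_path, "rt") as fin, gzip.open(out_path, "wt") as fout:
            fout.writelines(islice(fin, keep_lines))

    # e.g., cut a 30X run down to roughly 20X
    slice_fastq("NA12878_30x_R1.fastq.gz", "NA12878_20x_R1.fastq.gz",
                fraction=20 / 30, total_reads=600_000_000)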

The results after validation are shown in Figure 1 (depth of coverage is along the x-axis):

Fig 1. SNP discovery as a function of increasing coverage of the GIAB validation sample.

 

The figure shows that, as expected, as sequencing depth decreases, the error rate increases and SNP discovery declines. It also supports the commonly held view that 30X is enough coverage for whole genomes, since going to 60X yields an almost unnoticeable improvement in the percentage of SNPs found and a slight increase in error. Performance degrades sharply at 12X and below, with poor discovery rates and unacceptably high error rates.
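For reference, the two quantities plotted reduce to simple ratios over the GiaB truth set. A minimal sketch, assuming counts of true positives, false positives, and false negatives from a validation run (the function and variable names here are mine, not bcbio’s):

    def validation_metrics(tp, fp, fn):
        """percent_found: fraction of truth-set variants recovered (sensitivity).
        error_rate: fraction of our calls that are wrong (false discovery rate)."""
        percent_found = 100.0 * tp / (tp + fn)
        error_rate = 100.0 * fp / (tp + fp)
        return percent_found, error_rate

    # Toy numbers only, not results from this experiment:
    print(validation_metrics(tp=3_200_000, fp=15_000, fn=180_000))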

I will be submitting a short manuscript to biorxiv.org soon describing this work in more detail.

Should you trim your RNA-Seq reads?

According to a new paper, basically, no. That is an oversimplification, of course: the authors find that quality trimming of RNA-Seq reads skews gene expression estimates for up to 10% of genes. Furthermore, the authors claim that:

“Finally, an analysis of paired RNA-seq/microarray data sets suggests that no or modest trimming results in the most biologically accurate gene expression estimates.”

First, the authors show how aggressive trimming affects mappability in Figure 2:

Influence of quality-based trimming on mappability of reads.

You can see that as the quality threshold becomes more severe (approaching Q40), the number of RNA-Seq reads remaining drops off considerably while the overall mappability percentage increases. You’d think this would be a good thing, but it leads to problems, as shown in Figure 4 of the paper:

Isoform and gene expression levels after trimming.

Here you can see in (a) how increasingly aggressive trimming thresholds lead to more genes appearing differentially expressed between untrimmed and trimmed data (red dots). Panels (b) and (c) also show that the numbers of biased isoforms and genes, respectively, increase dramatically as one approaches the Q40 threshold.

One way to correct this bias is to apply a length filter to the quality-trimmed RNA-Seq reads. In Figure 5, the authors show that this removes much of the bias in gene expression estimates:

Isoform and gene expression levels after length-filtering.

Now in (b-d) it is clear that as the length filter increases toward 36, the number of biased expression estimates drops rapidly. There seems to be a sweet spot around L20, where you get the maximum decrease in bias while keeping as many reads as possible.
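To make the combined procedure concrete, here is a minimal sketch of 3'-end quality trimming followed by a length filter. The simple trim-trailing-bases rule and the thresholds are illustrative, not the exact algorithms benchmarked in the paper:

    def trim_3prime(seq, quals, q_threshold):
        """Trim the 3' end back to the last base at or above the quality threshold."""
        end = len(seq)
        while end > 0 and quals[end - 1] < q_threshold:
            end -= 1
        return seq[:end], quals[:end]

    def trim_and_filter(reads, q_threshold=20, min_length=20):
        """Quality-trim each (seq, quals) pair, then drop reads that end up
        shorter than min_length -- the secondary length filter discussed above."""
        kept = []
        for seq, quals in reads:
            seq, quals = trim_3prime(seq, quals, q_threshold)
            if len(seq) >= min_length:
                kept.append((seq, quals))
        return kept

    # Toy example: one clean read kept, one low-quality tail trimmed then dropped
    reads = [("ACGTACGTACGTACGTACGTA", [38] * 21),
             ("ACGTACGTAC" + "A" * 11, [38] * 10 + [2] * 11)]
    print(trim_and_filter(reads))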

Taken together, the authors suggest that aggressive trimming can strongly bias gene expression estimates through the incorrect alignment of the very short reads that quality trimming leaves behind. A secondary length-filter step can mitigate some of the damage. In the end, whether to trim depends on your project type and goals. If you have tons of reads, modest trimming plus length filtering may not be too destructive; conversely, if your data are of low initial quality, trimming may be necessary to salvage usable reads. Either way, be restrained in your trimming, and if possible look at the resulting read-length distributions before settling on quality thresholds for your project.

Filtering variants for cancer mutational signature analysis

Recently, I’ve been helping to prepare a manuscript on vestibular schwannomas (VS), a type of benign tumor of the myelin-forming cells along the nerves of the inner ear. I’ve been thinking a lot about strategies for filtering exome variant calls to feed into mutational signature analysis.

Mutational signatures are important because they describe the types of mutational processes operating on the genome of the tumor cells. Many of these processes are known (see the COSMIC signature database), but some are entirely novel. The variants used for calculating such signatures are somatic in nature and must be carefully curated from the raw variant calls you get from a pipeline like GATK.
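For orientation, signature analysis consumes each somatic SNV reduced to one of 96 classes: 6 pyrimidine-centered substitution types times 16 flanking-base combinations. Here is a minimal sketch of that reduction, assuming pysam with an indexed reference FASTA; the file name and coordinates are hypothetical:

    from collections import Counter
    import pysam  # assumes pysam and an indexed reference FASTA

    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def context_96(fasta, chrom, pos, ref, alt):
        """Return the 'A[C>T]G'-style class for a somatic SNV (pos is 1-based).
        By convention the mutated base is reported as a pyrimidine (C or T),
        reverse-complementing when the reference base is a purine."""
        tri = fasta.fetch(chrom, pos - 2, pos + 1).upper()  # ref base +/- 1
        if ref in "AG":  # purine: flip to the pyrimidine strand
            tri = tri.translate(COMPLEMENT)[::-1]
            ref = ref.translate(COMPLEMENT)
            alt = alt.translate(COMPLEMENT)
        return f"{tri[0]}[{ref}>{alt}]{tri[2]}"

    fasta = pysam.FastaFile("GRCh38.fa")
    snvs = [("chr1", 1234567, "C", "T"), ("chr1", 2345678, "G", "A")]
    counts = Counter(context_96(fasta, *snv) for snv in snvs)
    print(counts.most_common())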

Looking at the existing literature, I find that there is no common or “best practices” methodology for filtering variants in whole-exome data. Some groups are very stringent, others less so. The first step in most cases is simply to subtract normal variant calls from tumor calls. However, further filtering steps should be undertaken.

If I had to distill some commonalities from the literature’s approaches to somatic variant filtering, the list would include the following (a rough code sketch follows the list):

1) remove variants present in dbSNP, 1000 Genomes, or other non-cancer exome data
2) keep only variants in coding regions (exons) or splice sites
3) require the variant in more than ~5 reads in the tumor and fewer than ~2 reads in the normal
4) subtract “normal” calls from “tumor” calls (either pooled or paired normals)
5) require a minimum depth at the variant position (usually >10X)
6) discard reads from low mapping-quality (MQ) regions
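A rough translation of those criteria into code: the cutoffs match the typical values above, and the plain-dict records with hypothetical field names are illustrative stand-ins for parsed VCF records (e.g., from pysam or cyvcf2):

    def passes_somatic_filters(v, min_tumor_alt=5, max_normal_alt=2,
                               min_depth=10, min_mapq=40):
        """Apply the common literature filters listed above to one variant.
        `v` is a plain dict with hypothetical field names, for illustration."""
        return (
            not v["in_dbsnp_or_1kg"]                    # 1) not a known germline variant
            and v["region"] in ("exon", "splice")       # 2) coding or splice-site only
            and v["tumor_alt_reads"] > min_tumor_alt    # 3) well supported in tumor...
            and v["normal_alt_reads"] < max_normal_alt  # ...nearly absent in normal (4)
            and v["depth"] > min_depth                  # 5) minimum coverage at the site
            and v["mapq"] >= min_mapq                   # 6) decent mapping quality
        )

    # Toy record (field names and values hypothetical):
    v = {"in_dbsnp_or_1kg": False, "region": "exon",
         "tumor_alt_reads": 8, "normal_alt_reads": 0,
         "depth": 45, "mapq": 60}
    print(passes_somatic_filters(v))  # True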

Some papers consider only non-synonymous variants, but for mutational signatures it makes sense to me to use all validated variants (especially with exome data, because you start with far fewer raw variant calls than with whole-genome data).

As for the actual numbers of variants “fed” into the mutational signature analysis, most papers (surprisingly) do not report this directly. If you dig around in the supplementary sections, you can sometimes find it indirectly.

It looks like, generally, the total is somewhere around 10,000 variants for papers dealing with specific tumor types (as opposed to pan-cancer analyses of public databases). Several papers end up with roughly 1,000 variants per tumor (ranging from 1,000 up to 7,000), so with 10 tumors sequenced, that would be on the order of 10,000 filtered, high-confidence SNVs.

If you’re working on exome mutational signature analysis and have your own filtering criteria, I’d love for you to share them in the comments.