A Unix one-liner to scrape GI numbers from a SAM file

I recently had a situation where I needed to scrape out all of the GI numbers from a SAM alignment file.  My first instinct was to turn to python to accomplish this, but I forced myself to find a command line tool or set of tools to quickly do this task as a one-liner.  First, here is the format of the first two lines of the file:

…..

HWI-D00635:61:C6RH0ANXX:4:1101:3770:8441 0 gi|599154892|gb|EYE94125.1| 1066 255 41M * 0 0 SNPDEMDGNILPWMVHLKRMALEVLKHLWSSKLAAFFTLSE * AS:i:88 NM:i:0 ZL:i:2534 ZR:i:217 ZE:f:3.9e-15 ZI:i:100 ZF:i:-2 ZS:i:125 MD:Z:41

HWI-D00635:61:C6RH0ANXX:4:1101:3770:8441 0 gi|115387347|ref|XP_001211179.1| 1065 255 41M * 0 0 SNPDEMDGNILPWMVHLKRMALEVLKHLWSSKLAAFFTLSE * AS:i:77 NM:i:8 ZL:i:1670 ZR:i:188 ZE:f:8.9e-12 ZI:i:80 ZF:i:-2 ZS:i:125 MD:Z:1Y3V3V11F11SSY3A1

….

You can see that what I want is the number between the first two pipes in the field “gi|#########|”. Here is how I solved this with a bash script:


#!/bin/bash
for file in *.sam
do
    echo "${file}"
    cat "${file}" | egrep -o "\|\S*\|(\S*)\|" | sed 's/|/,/g' | cut -f 2 -d ',' > "${file}.out"
done

To unpack this briefly: the “cat” command writes each line of the file to stdout, which is piped to “egrep”.  Egrep looks for the regular expression “\|\S*\|(\S*)\|”.  This expression matches a pipe, followed by a run of non-whitespace characters (\S*), another pipe, more non-whitespace characters, and a final pipe.  The pipes are escaped with a backslash “\” because an unescaped | means alternation in extended regular expressions.

The next step is to pipe to “sed”, which takes the incoming stream and replaces the pipes with commas.  This output is sent to “cut”, which uses the commas as delimiters, and takes the second field.

There are shorter ways to do this (cutting on pipes, for example; see the sketch below), but even this command-line attempt saved me a lot of time over coding the task in python.
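For instance, if you assume that the first pipe on every alignment line belongs to the gi|…| field, you can cut on pipes directly and drop egrep and sed entirely.  A minimal sketch (the -s flag quietly drops lines without a pipe, and grep -v '^@' skips any SAM header lines):

for file in *.sam
do
    # field 2, delimited by '|', is the GI number on 'gi|#########|...' lines
    grep -v '^@' "${file}" | cut -s -d '|' -f 2 > "${file}.out"
done

GNU grep also supports grep -oP 'gi\|\K[0-9]+', which prints only the digits following “gi|”.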

Sequencing depth for accurate SNP calling: bcbio case study

Intuitively, it is easy to grasp that the greater the sequencing depth (i.e., the more reads covering any given position in the genome), the more accurate the calling of SNPs and indels (insertions/deletions).  But how much difference does this actually make in the real world?  Is 20X coverage dramatically worse than 30X (commonly considered a standard depth for whole genomes)?

To find out, I conducted an experiment with the bcbio pipeline, a bioinformatics pipeline solution built in python that allows for automated and reproducible analyses on high-performance computing clusters.  One feature of bcbio is that it can perform validation surveys using high-confidence consensus calls from reference genomes like the NA12878 Coriell sample (from the Genome in a Bottle project).
For NA12878, researchers collated consensus SNP and indel calls from a wide variety of sequencing technologies and calling methods to produce a very high-confidence callset for training other methods or validating a sequencing workflow.  bcbio ships with these variant calls and can easily be set up to validate them against a sequenced NA12878 genome.
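For orientation, turning on validation in a bcbio sample configuration amounts to pointing the algorithm section at the GiaB truth files.  A hedged sketch based on my reading of the bcbio documentation (the aligner, caller, and truth-set paths here are illustrative placeholders, not necessarily the exact configuration used in this experiment):

details:
  - description: NA12878-validation
    analysis: variant2
    algorithm:
      aligner: bwa
      variantcaller: gatk-haplotype
      # validate calls against the GiaB truth set (paths illustrative)
      validate: giab-NA12878/truth_small_variants.vcf.gz
      validate_regions: giab-NA12878/truth_regions.bed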

The sequencing depth experiment

I started with a NA12878 genome sequenced to 30X depth.  To compare shallower depths, I subsampled the data to generate 20X, 10X, etc.  [Please note: the data were not subsetted randomly; rather, “slices” were taken from the 30X dataset.]  To look at a 60X datapoint, I combined data from two sequencing runs on both flow cells of a HiSeq4000 instrument.
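For readers who would rather subsample randomly than take slices, samtools can do this directly on a BAM file.  A minimal sketch (file names are hypothetical; in -s, the integer before the decimal point seeds the random number generator and the fraction sets the proportion of reads kept):

# 30X -> ~20X: keep roughly two-thirds of the reads
samtools view -b -s 42.667 NA12878_30x.bam > NA12878_20x.bam

# 30X -> ~10X: keep roughly one-third
samtools view -b -s 42.333 NA12878_30x.bam > NA12878_10x.bam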

The results after validation are shown in Figure 1 (depth of coverage is along the x-axis):

Fig 1. SNP discovery as a function of increasing coverage of the GIAB validation sample.


The figure shows that, as expected, as sequencing depth decreases, the error rate increases and SNP discovery declines.   It also supports the commonly held view that 30X is enough coverage for whole genomes: going to 60X yields a barely noticeable improvement in the percentage of SNPs found and a slight increase in error.  Performance really degrades at 12X and below, with poor discovery rates and unacceptably high error rates.

I will be submitting a short manuscript to biorxiv.org soon describing this work in more detail.

Why you should think twice before rarefying your 16S data

According to a recent paper, the common practice of rarefying (randomly subsampling your 16S reads down to a common library size) is statistically incorrect and should be abandoned in favor of the more sophisticated mixture-model approaches used by RNA-Seq analysis software.

The authors give a basic thought experiment to help clarify their reasoning; I’ve copied the figure below.  It shows what happens when one rarefies library “B” from 1000 reads down to 100 reads in order to match library “A,” a standard procedure even in the QIIME workflow.  Dividing the library size by 10 multiplies the variance of the abundance estimates in B by roughly 10 (for a simple binomial model, the variance of an estimated proportion scales as 1/n), and thus the statistical power to distinguish the abundances of OTU1 and OTU2 between samples A and B is lost.

Fig 1. An example of the effect of rarefying on statistical power.

As the authors point out in the paper:

“Although rarefying does equalize variances, it does so only by inflating the variances in all samples to the largest (worst) value among them at the cost of discriminating power (increased uncertainty).”

The solution to the rarefying problem is to borrow methods from the RNA-Seq world, treating differential taxonomic abundance just like differential transcript abundance.  The R package phyloseq provides a container for taxonomic datasets from Qiime (in the BIOM format).  It also provides a convenient set of extensions that allow this data to be analyzed with RNA-Seq methods like edgeR and DESeq2.

Should you trim your RNA-Seq reads?

According to a new paper, basically, no.   Actually, that is an oversimplification, but the authors find that quality trimming of RNA-Seq reads results in skewed gene expression estimates for up to 10% of genes.   Furthermore, the authors claim that:

“Finally, an analysis of paired RNA-seq/microarray data sets suggests that no or modest trimming results in the most biologically accurate gene expression estimates.”

First, the authors show how aggressive trimming affects mappability in Figure 2:

Fig 2. Influence of quality-based trimming on mappability of reads.

You can see that as the quality threshold becomes more severe (approaching Q40), the number of RNA-Seq reads remaining drops off considerably while the overall percent mappability increases.  At first glance this looks like a good thing, but it leads to problems, as shown in Figure 4 of the paper:

Fig 4. Isoform and gene expression levels after trimming.

Here you can see in panel (a) how increasingly aggressive trimming thresholds lead to larger differences in expression estimates between untrimmed and trimmed data (red dots).  Panels (b) and (c) also show that the number of biased isoforms and genes, respectively, increases dramatically as one approaches the Q40 threshold.

One way to correct this bias is to apply a length filter to the quality-trimmed RNA-Seq reads.  In Figure 5, the authors show that this can remove much of the bias in gene expression estimates:

Fig 5. Isoform and gene expression levels after length-filtering.

Now in panels (b–d) it is clear that as the length-filter threshold increases toward 36, the number of biased expression estimates drops rapidly.   There seems to be a sweet spot around L20, where you get the maximum decrease in bias while keeping as many reads as possible.

Taken together, the results suggest that aggressive trimming can strongly bias gene expression estimates through the incorrect alignment of the short reads that quality trimming leaves behind.   A secondary length-filter step can mitigate some of the damage.   In the end, whether to trim depends on your project type and goals.  If you have tons of reads, some modest trimming and length filtering may not be too destructive.  Conversely, if your data are initially of low quality, trimming may be necessary to salvage otherwise unusable reads.  Either way, you should be restrained in your trimming and, if possible, look at the resulting read-length distributions before settling on quality thresholds for your project.  A hedged example of modest trimming plus length filtering is sketched below.
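To make that concrete with Trimmomatic (one tool among several; this is an illustrative sketch rather than the paper’s exact pipeline, and the file names are hypothetical):

# modest quality trimming (Q5 sliding window) plus a 20 bp minimum-length filter
java -jar trimmomatic-0.36.jar SE input.fastq output.fastq \
    SLIDINGWINDOW:4:5 MINLEN:20

The SLIDINGWINDOW:4:5 step clips a read once the mean quality in a 4-base window drops below Q5, and MINLEN:20 echoes the L20 sweet spot discussed above by discarding reads trimmed shorter than 20 bases.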

Filtering variants for cancer mutational signature analysis

Recently, I’ve been working to help prepare a manuscript on Vestibular Schwannomas (VS), a type of benign tumor of the myelin-forming Schwann cells along the nerve of the inner ear.  I’ve been thinking a lot about strategies for filtering exome variant calls to feed into mutational signature analysis.

Mutational signatures are important because they describe the types of mutational processes operating on the genome of the tumor cells.  Many of these processes are known (see the COSMIC database), but some are entirely novel.  The variants used for calculating such signatures are somatic in nature and have to be carefully curated from the raw variant calls you get from a pipeline like GATK.

Looking at the existing literature, I find that there is no common or “best practices” methodology for filtering variants in whole exome data.  Some groups are very stringent, others less so.  The first step is usually just to subtract the normal variant calls from the tumor calls.  However, there are further filtering steps that should be undertaken.

If I had to describe the overall commonalities among the literature approaches to somatic variant filtering, they would include the following (a rough command-line sketch follows the list):

1) removing variants that are present in dbSNP or 1000genomes or other non-cancer exome data
2) taking only variants in coding regions (exons) or splicing sites
3) variants must appear in more than X reads in the tumor, and fewer than X reads in the normal (generally ~5 and ~2, respectively)
4) subtraction of “normals” from “tumor” (either pooled normals, or paired)
5) variant position must be covered by a minimum depth (usually > 10X)
6) throwing away reads from low mapping quality (MQ) regions
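As a rough illustration of filters 1), 5), and 6), here is a hedged bcftools sketch; it is not any particular paper’s recipe, and it assumes bgzipped, indexed VCFs carrying GATK-style INFO fields such as DP and MQ:

# filter 1: keep only records absent from dbSNP (-C = complement);
# results land in the output directory as 0000.vcf
bcftools isec -C -p private_to_tumor tumor_minus_normal.vcf.gz dbsnp.vcf.gz

# filters 5 and 6: require at least 10X depth and reasonable RMS mapping quality
bcftools view -i 'INFO/DP>=10 && INFO/MQ>=30' \
    private_to_tumor/0000.vcf > filtered_somatic.vcf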

Some papers only consider non-synonymous variants, but for mutational signatures it makes sense to me to take all validated somatic variants (especially in exome data, where you start with far fewer raw variant calls than in whole-genome data).

As for the actual numbers of variants “fed” into the mutational signature analysis, most papers (surprisingly) do not report this directly.  If you dig around in the supplementary information, you can sometimes find it indirectly.

It looks like the number of variants is generally somewhere around 10,000 for papers dealing with specific tumor types (as opposed to pan-cancer analyses of public databases).  Several papers end up with ~1,000 variants per tumor (ranging from 1,000 up to 7,000).  So with 10 tumors sequenced, that would be on the order of 10,000 filtered, high-confidence SNVs.

If you’re working on exome mutational signature analysis and have your own filtering criteria, I’d love for you to share them in the comments.

Beware of biological variability in your *-Seq experiments

From this excellent paper on biological variability in RNA-Seq experiments:

“Biological variability has important implications for the design, analysis and interpretation of RNA-sequencing experiments. […] If only a few biological replicates are available, it will be impossible to estimate the level of biological variability in expression for each gene in a study. Supplementary Table 1 summarizes a large number of published RNA-sequencing studies over the past three years. In every case, except for the two studies we analyzed here, conclusions were based on a small number (n ≤ 2) of biological replicates. One goal of RNA-sequencing studies may be simply to identify and catalog expression of new or alternative transcripts. However, all of these studies make broader biological statements on the basis of a very small set of biological replicates.

Our analysis has two important implications for studies performed with a small number of biological replicates. First, significant results in these studies may be due to biological variation and may not be reproducible; and second, it is impossible to know whether expression patterns are specific to the individuals in the study or are a characteristic of the study populations. These ideas are now widely accepted for DNA microarray experiments, where a large number of biological replicates are now required to justify scientific conclusions. Our analysis suggests that as biological variability is a fundamental characteristic of gene expression, sequencing experiments should be subject to similar requirements.”

If you are doing RNA-Seq, be very vigilant in your experimental design and find a way to incorporate more replicates, even at the expense of testing fewer comparisons.   It’s better to test one comparison (tissue X vs. Y, for example) with 5 or more replicates than to test three comparisons (X vs. Y, Y vs. Z, and X vs. Z) with only 2 replicates per tissue type.


Hands-on with cancer mutational signatures, part 2

In this second part of the “Hands On” series, I want to address how to create the input for the MatLab mutational signature framework from the output of my python code that prepares the SNP data for analysis.

First, we need to create a MatLab .mat file as input to the program.   The code expects an input file containing a set of mutational catalogues plus metadata about the cancer type and the mutational types and subtypes represented in the data.

Fig 1. The required data types within one .mat file to run the framework.

As you can see from Fig 1, you need to provide a 96 by X matrix, where X is the number of samples in your mutational catalogue.  You also need an X by 1 cell array specifying sample names, a 96 by 1 cell array specifying the subtypes (ACA, ACC, ACG, etc…), and a 96 by 1 cell array specifying the types (C>A, C>A, C>A, etc…).  These must be in the same row order as the “originalGenomes” matrix or the results won’t make sense.

My code outputs .csv files for all of these needed inputs.  For example, when you run my python code on your SNP list, you will get “subtypes.csv”, “types.csv”, and “samples.csv” files, plus a “samples_by_counts.csv” matrix (i.e., originalGenomes), corresponding to the cell arrays and matrix in Fig 1.

Now, the challenge is to get those CSV files into MatLab.  You should have downloaded and installed MatLab on your PC.  Open MatLab and select “Import Data.”

Fig 2. Select the “Import Data” button.

Browse to one of the output CSV files and select it.  It will open in a new window like in Fig 3 below:

Fig 3. The data import window from MatLab.

Be sure to select the correct data type in the “imported data” section.  Also, select only row 1 for import (row 2 is empty).  Once you’re finished, click Import Selection.  It will create a 1×96 cell called “types.”  It looks like this:

Fig 4. The new imported cell data “types.”

We’re almost done, but we have to switch the cell to be 96×1 rather than 1×96.  To do this, just double-click it and select “transpose” in the variable editor.   Now you should repeat this process for the other CSV input files, being sure to select “matrix” as the data type for the “samples_by_counts” file.   Pay special attention to make sure the dimensions and data types are correct.

Once you have everything in place, you should be ready to run the mutational analysis framework from the paper.   To do this, open the “example2.m” MatLab script included with the download.  In the “Define parameters” section, change the file paths to point at the .mat file you just created:

Fig 5. Define your parameters for the signature analysis.


As you can see in Fig 5, I’ve specified 100 iterations per core, a range of possible signatures from 1 to 8, and the relative paths to the input and output files.  The authors say that ~1000 iterations are necessary for accuracy, but I’ve found little difference in the predictions between 50 and 500 iterations.   I would perform as many iterations as possible given your constraints on time and computational power.

Note also that you may need to change the part of the code that sets up the parallel computing pool.  Since MatLab 2014, I believe, the “matlabpool” command has been deprecated.   Substituting the “parpool” command works fine for me (Mac OS X 10.10, 8-core MacBook Pro Retina) as follows:

if isempty(gcp('nocreate')) % check whether a parallel pool is already running

    parpool('local'); % opens the default local pool, if one is not already open

end

This post is getting lengthy, so I will stop here and post one more part later about how to compare the signatures you calculate with the COSMIC database using the cosine similarity metric.