Recently, I’ve been working to help prepare a manuscript on Vestibular Schwannomas (VS), a type of benign tumor arising from the Schwann cells (the myelin-forming cells) along the vestibulocochlear nerve. I’ve been thinking a lot about strategies for filtering exome variant calls to feed into mutational signature analysis.
Mutational signatures are important because they describe the types of mutational processes operating on the genome of the tumor cells. Many of these processes are known (see the COSMIC database); others are entirely novel. The variants used for calculating such signatures are somatic in nature and have to be carefully curated from the raw variant calls that you get from a pipeline like GATK.
Looking at the existing literature, I find that there is no common or “best practices” methodology for filtering variants in whole exome data. Some groups are very stringent, others less so. The first step in most cases is simply to subtract the normal variant calls from the tumor calls. However, further filtering steps should then be undertaken.
If I had to describe some overall commonalities in the literature approaches to somatic variant filtering, they would include:
1) removing variants that are present in dbSNP or 1000genomes or other non-cancer exome data
2) taking only variants in coding regions (exons) or splicing sites
3) variants must appear in more than X reads in the tumor and fewer than Y reads in the normal (generally ~5 and ~2, respectively)
4) subtraction of “normals” from “tumor” (either pooled normals, or paired)
5) variant position must be covered by a minimum depth (usually > 10X)
6) throwing away reads with low mapping quality (MQ), or variants in low-MQ regions
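To make the criteria concrete, here is a minimal sketch of how filters 1, 3, and 5 might be applied to per-variant records. All field names and default thresholds here are hypothetical; in practice you would work on annotated VCFs rather than bare dicts:

```python
def passes_filters(v, known_sites,
                   min_tumor_reads=5, max_normal_reads=2, min_depth=10):
    """Apply common somatic filtering heuristics to one variant record.

    v is a dict with hypothetical keys; known_sites is a set of
    (chrom, pos, ref, alt) tuples collected from dbSNP / 1000 Genomes.
    """
    if (v['chrom'], v['pos'], v['ref'], v['alt']) in known_sites:
        return False                                # criterion 1: known germline site
    if v['tumor_alt_reads'] < min_tumor_reads:      # criterion 3: tumor support
        return False
    if v['normal_alt_reads'] > max_normal_reads:    # criterion 3: normal contamination
        return False
    if v['depth'] < min_depth:                      # criterion 5: minimum coverage
        return False
    return True
```

The point is that each criterion is an independent boolean test, so the filters compose cleanly and can be tightened or loosened per project.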
Some papers only consider non-synonymous variants, but for mutational signatures, to me it makes sense to take all validated variants (especially in exome data because you are starting with fewer raw variant calls than whole genome data).
As far as actual numbers of variants that are “fed” into the mutational signature analysis, most papers do not report this directly (surprisingly). If you dig around in the SI sections, sometimes you can find it indirectly.
It looks like, generally, the number of variants is somewhere around 10,000 for papers dealing with specific tumor types (not pan-cancer analyses of public databases). Several papers end up with on the order of 1,000 variants per tumor (ranging from ~1,000 up to ~7,000). So with 10 tumors sequenced, that would be ~10,000 filtered, high-confidence SNVs.
If you’re working on exome mutational signature analysis and you have your own filtering criteria, I’d love for you to share it in the comments.
In this second part of the “Hands On” series, I want to address how to create the input for the MatLab mutational signature framework from the output of my python code to prepare the SNP data for analysis.
First, you need to create a MatLab .mat file for input to the program. The code expects an input file that contains a set of mutational catalogues and metadata about the cancer type and the mutational types and subtypes represented in the data.
As you can see from Fig 1., you need to provide a 96 by X matrix, where X is the number of samples in your mutational catalogue. You also need an X by 1 cell array specifying sample names, a 96 by 1 cell array specifying the subtypes (ACA, ACC, ACG, etc…) and a 96 by 1 cell array specifying the types (C>A, C>A, C>A, etc…). These must correspond to the same order as specified in the “originalGenomes” matrix or the results won’t make sense.
My code outputs .csv files for all of these needed inputs. For example, when you run my python code on your SNP list, you will get a “subtypes.csv”, “types.csv”, “samples.csv”, and “samples_by_counts.csv” matrix (i.e., originalGenomes) corresponding to the above cell arrays in Fig 1.
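To illustrate what the “samples_by_counts” matrix contains, here is a sketch (not my actual pipeline code) that builds the 96 (type, subtype) channels in one common ordering and tallies per-sample mutation counts. The channel ordering is an assumption; it must match whatever your subtypes.csv and types.csv specify:

```python
from itertools import product

MUTS = ['C>A', 'C>G', 'C>T', 'T>A', 'T>C', 'T>G']  # pyrimidine-centered types
CHANNELS = [(m, up + m[0] + down)                  # e.g. ('C>A', 'ACA')
            for m in MUTS
            for up, down in product('ACGT', repeat=2)]
INDEX = {ch: i for i, ch in enumerate(CHANNELS)}   # 6 * 16 = 96 channels

def build_catalogue(samples):
    """samples maps sample name -> list of (type, subtype) mutations.

    Returns the sorted sample names and a 96-row matrix (list of lists)
    with one column per sample, i.e. the shape of originalGenomes.
    """
    names = sorted(samples)
    matrix = [[0] * len(names) for _ in CHANNELS]
    for col, name in enumerate(names):
        for mut in samples[name]:
            matrix[INDEX[mut]][col] += 1
    return names, matrix
```

Writing CHANNELS, the sample names, and the matrix out as CSVs gives files of exactly the shapes described above.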
Now, the challenge is to get those CSV files into MatLab. You should have downloaded and installed MatLab on your PC. Open MatLab and select “Import Data.”
Browse to one of the output CSV files and select it. It will open in a new window like in Fig 3 below:
Be sure to select the correct data type in the “imported data” section. Also, select only row 1 for import (row 2 is empty). Once you’re finished, click Import Selection. It will create a 1×96 cell called “types.” It looks like this:
We’re almost done, but we have to switch the cell to be 96×1 rather than 1×96. To do this, just double-click it and select “transpose” in the variable editor. Now you should repeat this process for the other CSV input files, being sure to select “matrix” as the data type for the “samples_by_counts” file. Pay special attention to make sure the dimensions and data types are correct.
Once you have everything in place you should be ready to run the mutational analysis framework from the paper. To do this, open the “example2.m” MatLab script included with the download. In the “Define parameters” section, change the file paths to the .mat file you just created:
Here you can see in Fig 5, I’ve specified 100 iterations per core, a number of possible signatures from 1 to 8, and the relative paths to the input and output files. The authors say that ~1,000 iterations are necessary for accuracy, but I’ve found little difference in the predictions between 50 and 500 iterations. I would perform as many iterations as possible, given constraints on time and computational power.
Note also that you may need to change the part of the code that deals with setting up a parallel computing pool. Since MatLab 2014, I believe, the “matlabpool” processing command has been deprecated. Substituting the “parpool” command seems to work fine for me (Mac OS X 10.10, 8 core Macbook Pro Retina) as follows:
if isempty(gcp('nocreate')) % no parallel pool is running yet
    parpool('local'); % opens the default local pool
end
This post is getting lengthy, so I will stop here and post one more part later about how to compare the signatures you calculate with the COSMIC database using the cosine similarity metric.
In my last post, I wrote about the biological context of mutational signatures in cancer and how a recent series of papers has addressed this notion by creating an algorithm for extracting signatures from a catalogue of tumor SNPs.
In this follow-up post, I wanted to offer practical advice, based on my experience, about how to prepare data for mutational signature analysis and how to interpret the results.
First, in order to analyze your SNPs for mutational signatures, one needs to derive the trimer context of each SNP (i.e., the upstream base and downstream base flanking the SNP) for technical reasons described in the paper linked above.
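One detail worth noting: mutational signatures are conventionally reported with a pyrimidine (C or T) as the mutated base, so a trimer read off the plus strand with a purine reference gets reverse-complemented. A small sketch of that normalization (the function and its name are mine, not part of the published framework):

```python
COMPLEMENT = str.maketrans('ACGT', 'TGCA')

def normalize_context(trimer, alt):
    """Return (trimer, type) in the pyrimidine-centered convention.

    trimer is the reference base with its two flanking bases (e.g. 'AGA');
    alt is the mutant base. Purine-centered trimers are flipped to the
    opposite strand before the mutation type string is built.
    """
    if trimer[1] in 'AG':  # purine at the mutated position
        trimer = trimer.translate(COMPLEMENT)[::-1]
        alt = alt.translate(COMPLEMENT)
    return trimer, '{}>{}'.format(trimer[1], alt)
```

For example, a G>T change in an A_A context becomes a C>A change in a T_T context on the opposite strand.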
In order to derive the trimer context, a reference genome must be queried with the positions of the SNPs to find the bases adjacent to them. One could either query a remote database, like Entrez Nucleotide, or download an entire multi-gigabyte reference genome and search it locally.
In my code, I opted for the former option: querying the NCBI/Entrez Nucleotide database using HTTP requests. The advantage of this approach is that I can reuse the same code to query multiple reference genomes (e.g., hg38 vs. hg19), depending on the changing needs of future projects.
The relevant section of code is as follows:
from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.com"  # NCBI requires a contact address

def get_record(chrom, pos):
    '''Fetch the SNP base plus one flanking base on each side from Entrez'''
    handle = Entrez.efetch(db="nucleotide",
                           id=hg19_chrom[chrom],  # accession ID for this chromosome
                           rettype="fasta",
                           seq_start=int(pos) - 1,
                           seq_stop=int(pos) + 1)
    record = SeqIO.read(handle, "fasta")
    return record
You can see from the code that I am using a dictionary (‘hg19_chrom’) to translate between chromosome numbers and their Entrez Nucleotide IDs in the eFetch request.
The disadvantage of this approach is that the Entrez HTTP tools limit each user to 3 queries per second (in fact, this limit is hard-coded into Biopython). Even with my mediocre coding skills, this turns out to be the rate-limiting step. Thus, this code has to run for a long time for any sizable number of SNPs (~50,000 SNPs would take ~4.6 hrs at 3 queries per second). However, it’s easy to let the script run overnight, so this wasn’t a deal breaker for me.
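Biopython enforces the rate limit for you, but if you were issuing the HTTP requests yourself, a simple throttle along these lines (a sketch, with a stand-in fetch function) keeps you under the cap:

```python
import time

def throttled_map(fetch, items, max_per_sec=3):
    """Call fetch on each item, sleeping as needed to stay under the rate cap."""
    min_interval = 1.0 / max_per_sec
    results, last = [], 0.0
    for item in items:
        wait = min_interval - (time.monotonic() - last)
        if wait > 0:
            time.sleep(wait)  # pace the next request
        last = time.monotonic()
        results.append(fetch(item))
    return results
```

At 3 queries per second, 50,000 SNPs works out to roughly 4.6 hours, which is where the back-of-the-envelope figure above comes from.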
In the next two posts on this topic I will address how to create the MatLab .mat file from the output of this script and how to compare the signatures generated by the MatLab framework to the COSMIC reference database in a non-biased way.
A recent collaboration with a clinician here at UI Hospitals and Clinics introduced me to the idea of mutational signatures in cancer. Characterizing mutational signatures is made possible by the falling cost and increasing accuracy of whole-genome sequencing methods. Tumors are sequenced across the entire genome and the catalog of somatic mutations (i.e., SNPs) is used to compute the mutational signatures of a tumor’s genome.
The idea is that the collection of somatic mutations found in a tumor is the result of a variety of defective DNA-repair or DNA-replication machinery combined with the action of known or unknown mutagens and environmental exposures. These processes operate over time and leave a “footprint” in the tumor DNA that can be examined. The sum of all of the mutational processes operating within a tumor cell is a distinct mutational “signature” that differs by tumor type.
For example, in lung cancer, the bulk of somatic mutations are C>A transversions resulting from chronic exposure to tobacco smoke. In melanoma, the predominant mutation type is C>T and CC>TT at dipyrimidines, a mutation type associated with UV-light exposure. And in colorectal cancer, defective DNA mismatch repair contributes the majority of the mutations.
A recent paper in Nature has formalized this notion of mutational signatures in tumors and provided a mathematical framework (written in MatLab) for assessing how many and which signatures are operational within an uncharacterized tumor type (generally there are between 2 and 6 processes).
In the paper, the authors analyzed almost 5 million somatic cancer SNPs and identified 21 unique signatures of mutational processes through a mathematical process of deconvolution, followed by experimental validation. A curated catalog of the most current signatures based on available sequence data can be found at the COSMIC database.
In part 2 of this post, I’ll go into more detail on the mutational signatures and link to some python code I’ve written to help get flat-file lists of SNPs into the correct form for easy input into the MatLab framework.
Single splice-altering variants can alter mRNA structure and cause disease
The splicing of introns and joining of exons to form mRNA is dependent on complex cellular machinery and conserved sequences within introns to be performed correctly. Single-nucleotide variants in splicing consensus regions, or “scSNVs” (defined as −3 to +8 at the 5’ splice site and −12 to +2 at the 3’ splice site) have the potential to alter the normal pattern of mRNA splicing in deleterious ways. Even those variants that are exonic and synonymous (i.e., they do not alter the amino acid incorporated into a polypeptide) can potentially affect splicing. Altered splicing can have important downstream effects in human disease such as cancer.
Using machine-learning to predict splice-altering variants
In a recent paper, the authors addressed this prediction problem by using “random forest” (rf) and “adaptive boosting” (adaboost) classifiers from machine learning to produce ensemble predictions, which are demonstrated to outperform predictions from any individual tool in both sensitivity and specificity.
As part of their supplementary material, the authors pre-computed rf and adaboost scores for every SNV in a library of nearly 16 million such sites collated from the human RefSeq and Ensembl databases. The scores are probabilities of a particular SNV being splice-altering (0 to 1).
Exploratory analysis of the database
I performed an exploratory data analysis of chromosome 1 (chr1) SNVs from the database that was made available with the paper.
First, I just looked at where the SNVs on chrom 1 were located as classified by Ensembl region:
As can be seen from Fig 1, most of the SNVs are located in introns, exons, and splicing consensus sites according to their Ensembl records.
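The tally behind Fig 1 is straightforward to reproduce with the standard library, assuming the database is a tab-delimited file with a column giving the Ensembl region (the column name below is a placeholder, not necessarily what the released file uses):

```python
import csv
from collections import Counter

def count_regions(handle, region_col='Ensembl_region'):
    """Tally SNVs by their Ensembl region annotation from a tab-delimited file."""
    reader = csv.DictReader(handle, delimiter='\t')
    return Counter(row[region_col] for row in reader)
```

The resulting Counter (region name to SNV count) can then be fed directly into a bar plot.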
Next, I created histograms for the chrom 1 SNVs by their Ensembl classification, looking at rf scores only (keep in mind that the scale on the y-axis for the plots in Fig 2 and 3 differs dramatically between regions). The x-axis is the probability of being splice-altering according to the pre-computed rf score.
I noticed that within ‘exonic’ regions on chrom 1, the rf scores take on a range of values from 0.0 to 1.0 in a broad distribution, while in other regions like ‘UTR3’, ‘UTR5’, ‘downstream’, etc… the distributions are narrowly skewed towards zero. For the ‘intronic’ regions, the majority of sites have a low probability of being splice-altering, while at the ‘splicing’ consensus sites, the vast majority are predicted to be splice-altering variants. This makes intuitive sense.
I performed the same analysis for the adaboost scores, as shown in Fig 3 (below). You can see that the adaboost scores take on a more binary distribution than the rf scores, with any individual SNV likely to be classified as ~1 or 0 according to the adaptive boosting method. Just like the rf scores, SNVs in ‘exonic’ regions are equally likely to be splice-altering as not, while those in ‘splicing’ regions are highly likely to be splice-altering. An SNV in an ‘intronic’ region is ~3X more likely to have no effect on splicing.
Finally, I looked at the relationship between the two scoring methods for the SNVs that fall within the Ensembl-characterized ‘splicing’ regions on chrom 1. That scatter plot is shown below in Fig 4.
I suppose I was expecting a tight linear correlation between the two approaches; however, the data show that the rf and adaboost methods differ substantially in their assessment of the collection of SNVs in these regions.
It is obvious from the plot below that there are many SNVs that the rf method considers to have low probability of being splice-altering that are found to have very high (>0.9) probability by the adaboost method.
This result suggests that if one is going to classify variants as “splice-altering” from this database and the goal is not to miss any potentially important sites, it would be best to consider both predictions (or some combination of them) rather than relying on either score alone. Conversely, if the goal is to consider only sites with a very high likelihood of being splice-altering, a threshold could be set such that both scores must exceed 0.8, for example.
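A dual-threshold rule like that is simple to encode; here is a sketch (the function and its defaults are mine, not from the paper), where missing scores are treated conservatively:

```python
def is_splice_altering(rf_score, ada_score, cutoff=0.8, require_both=True):
    """Classify an SNV from its precomputed ensemble scores.

    Missing scores (None) never count as a pass. require_both=True gives
    the high-confidence rule; False gives the don't-miss-anything rule.
    """
    rf_pass = rf_score is not None and rf_score >= cutoff
    ada_pass = ada_score is not None and ada_score >= cutoff
    return (rf_pass and ada_pass) if require_both else (rf_pass or ada_pass)
```

Switching require_both lets the same code serve both screening philosophies described above.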