Tag Archives: NGS

My favorite talks from GLBIO2017 in Chicago


I just got back from the Great Lakes Bioinformatics Conference 2017 (GLBIO2017) at the University of Illinois at Chicago (UIC) campus.  It was a great meeting, and I really enjoyed the quality of the research presented as well as the atmosphere of the campus and neighborhood.

I was very surprised by just how nice the Chicago “West Loop” neighborhood near Randolph Street and down towards Greektown is.  I had some great meals, including a memorable Italian dinner at Formento’s.

But the purpose of this post is to briefly describe a few of my favorite talks from the meeting.  So here goes, in no particular order:

Kevin White, Tempus Labs:

I was really impressed with Kevin White’s GLBIO2017 talk and demo of his company’s technology (despite the ongoing technical A/V issues!).  Tempus Labs is both a clinical sequencing company and an informatics company focused on cancer treatment: it seeks to pull together all of the disparate pieces of patient data that float around in EHR databases and are often not connected in meaningful ways.

The company sequences patient samples (whole exome and whole genome) and then also hoovers up reams of patient EHR data using Optical Character Recognition (OCR), Natural Language Processing (NLP), and human expert curation to turn the free-form flat text of medical records from different clinics and systems into a form of “tidy data” that can be accessed from an internal database.

Clinical and genomic data are then combined for each patient in a deep-learning system that looks at treatments and outcomes for other, similar patients.  The clinician is presented with charts showing how patients in similar circumstances fared with different treatments, given certain facts of genotype, tumor progression, and so on.  The system is pitched as “decision support” rather than artificial “decision making.”  That is, a human doctor is still the primary decider of treatment for each patient, but the Tempus deep-learning system provides expert support and suggests probabilities of success at each critical care decision point.

The system also tracks ongoing clinical trials and presents relevant ones to the clinician, so that patients can be informed of potentially beneficial trials they could join.

Murat Eren, merenlab.org

Murat Eren’s talk on tracking microbial colonization in fecal microbiota transplantation (i.e., “poop pills”) was excellent and very exciting.  Although the “n” was small (just 4 donors and 2 recipients), he showed some very interesting results from transferring fecal microbiota (FM) from healthy individuals to recipients with an inflammatory bowel disease.

Among the interesting results was the fact that he was able to assemble 97 metagenomes from the 4 donor samples.  Following the recipients at 4 and 8 weeks post-FM transplant showed that the microbial genomes could be classed into those that transfer and colonize permissively (both recipients), those that colonize one recipient or the other, and those that fail to colonize either.  Taxonomy alone did not explain why some microbes colonized easily while others failed to colonize.

He also showed that 8 weeks post-FM transplant, the recipients’ symptoms had improved.  Moreover, in a PCA of gut community composition together with 151 Human Microbiome Project (HMP) samples from healthy individuals, the recipients moved from being extreme outliers on day 0 into the “healthy” HMP cluster.

He also investigated differential gene-function enrichment between the permissive colonizers and the microbes that never colonized the recipients’ guts, and found that sporulation genes may be a negative factor driving the failure of transplantation.  He proposed that the recent and notable failure of the Seres microbiome drug in clinical trials may be owing to the fact that the company killed the live cultures in favor of more stable spore-forming strains when formulating the drug.  His work would suggest that these strains are less successful at colonizing new hosts.

Bo Zhang, 3D Genome Browser

With the ever-increasing volume and complexity of genomic and regulatory data, there is a need for accessible interfaces to it.  Bo Zhang’s group at Penn State has built a new type of genome browser that focuses on the 3D structure of the genome, pulling together disparate data types including chromatin interaction data, ChIP-Seq, and RNA-Seq.  Users can browse a complete view of the regulatory landscape and 3D architecture of any region of the genome, and can check the expression of any queried gene across hundreds of tissue/cell types measured by the ENCODE consortium.  On the virtual 4C page, the browser provides multiple methods to link distal cis-regulatory elements with their potential target genes, including virtual 4C, ChIA-PET, and cross-cell-type correlation of proximal and distal DHSs (DNase I hypersensitive sites).

The 3D Genome Browser flow chart.


All in all, GLBIO2017 was a very enjoyable and informative meeting where I met a lot of great colleagues and learned much.  I am looking forward to next year!

New paper out: metagenomics study of poultry production environments

I am happy to say that my collaborators in the Department of Occupational and Environmental Health here at the University of Iowa and I have had our recent work on the bacterial composition of poultry bioaerosols (i.e., the dust that poultry workers breathe during their tasks) published in Microbial Biotechnology.

The key figure from this work is the following heat map that illustrates the top taxa that are common to all 21 samples:

Heat map of the most abundant taxa across all 21 samples (Figure 3 from the paper).

What is remarkable about whole-genome shotgun metagenomics is that we are not only surveying bacterial DNA, but also viral, fungal, archaeal, and eukaryotic DNA in one experiment.  You can see from the figure that certain viruses are found in all samples, but it is bacteria, particularly Lactobacillus and Salinicoccus, that are the most abundant.

Stay tuned, because we will have a paper out soon on the fungal composition of these samples as well.  This paper, and our next manuscript, are the first applications of whole-genome shotgun metagenomics to environmental health studies of poultry production environments.


Sequencing depth for accurate SNP calling: bcbio case study

Intuitively, it is easy to grasp that the greater the sequencing depth (i.e., the number of reads covering any given position in the genome), the more accurate the calling of SNPs and indels (insertions/deletions).  But how much difference does this actually make in the real world?  Is 20X coverage dramatically worse than 30X (generally considered a standard depth for whole genomes)?

To find out, I conducted an experiment with the bcbio pipeline, a bioinformatics pipeline solution built in Python that allows for automated and reproducible analyses on high-performance computing clusters.  One feature of bcbio is that it can perform validation surveys using high-confidence consensus calls from reference samples like NA12878, the Coriell sample from the Genome in a Bottle (GIAB) project.
For NA12878, researchers collated consensus SNP and indel calls from a wide variety of sequencing technologies and calling methods to produce a very high-confidence callset for training other methods or validating a sequencing workflow.  bcbio includes these variant calls and can easily be set up to validate them against a sequenced NA12878 genome.

The sequencing depth experiment

I started with an NA12878 genome sequenced to 30X depth.  To compare shallower depths, I subsampled the data to generate 20X, 10X, and so on (please note: the data were not subsetted randomly; rather, “slices” were taken from the 30X dataset).  To look at a 60X datapoint, I combined data from two sequencing runs across both flow cells of a HiSeq 4000 instrument.
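
For anyone wanting to reproduce this kind of experiment, here is a minimal sketch of the slicing idea.  It is not the exact code used here (bcbio, seqtk, or samtools can all do this kind of downsampling); the helper function and file names are hypothetical.

import gzip
import itertools

def slice_fastq(fastq_in, fastq_out, fraction):
    '''Write a contiguous slice containing the first `fraction` of reads.'''
    # FASTQ records are four lines each, so count the reads first.
    with gzip.open(fastq_in, "rt") as fh:
        n_reads = sum(1 for _ in fh) // 4
    n_keep = int(n_reads * fraction)
    # Copy the first n_keep records (a "slice", not a random subsample).
    with gzip.open(fastq_in, "rt") as fh, gzip.open(fastq_out, "wt") as out:
        for line in itertools.islice(fh, n_keep * 4):
            out.write(line)

# e.g., approximate a 20X dataset from the 30X run (file names are placeholders)
slice_fastq("NA12878_30x_R1.fastq.gz", "NA12878_20x_R1.fastq.gz", 20 / 30)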

The results after validation are shown in Figure 1 (depth of coverage is along the x-axis):

Fig 1. SNP discovery as a function of increasing coverage of the GIAB validation sample.


The figure shows that, as expected, as sequencing depth decreases, the error rate increases and SNP discovery declines.  It also makes the case for the commonly held view that 30X is enough coverage for whole genomes, since going to 60X leads to an almost unnoticeable improvement in the percentage of SNPs found and a slight increase in error.  Performance really degrades at 12X and below, with poor discovery rates and unacceptably high error rates.

I will be submitting a short manuscript to biorxiv.org soon describing this work in more detail.

Why you should think twice before rarefying your 16S data

According to a recent paper, the common practice of rarefying (randomly subsampling your 16S reads down to a common library size) is statistically incorrect and should be abandoned in favor of the more sophisticated “mixture-model” approaches used by RNA-Seq analysis software.

The authors give a basic thought experiment to help clarify their reasoning; I’ve copied the figure below.  It shows what happens when one rarefies library “B” from 1000 reads down to 100 reads in order to match library “A,” a standard procedure even in the QIIME workflow.  Dividing the library size by 10 inflates the variance of the data in B roughly ten-fold, and thus the statistical power to differentiate the abundances of OTU1 and OTU2 between samples A and B is lost.

Fig 1. An example of the effect of rarefying on statistical power.

As the authors point out in the paper:

“Although rarefying does equalize variances, it does so only by inflating the variances in all samples to the largest (worst) value among them at the cost of discriminating power (increased uncertainty). “
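
To make the variance-inflation point concrete, here is a quick simulation (my own illustration, not from the paper): estimate an OTU’s proportion from a 1000-read library and from the same library rarefied to 100 reads, and compare the spread of the estimates.

import numpy as np

rng = np.random.default_rng(42)
true_proportion = 0.2      # suppose OTU1 really makes up 20% of library B
n_trials = 10_000

# Proportion estimates from the full library vs. the rarefied library.
full = rng.binomial(1000, true_proportion, n_trials) / 1000
rarefied = rng.binomial(100, true_proportion, n_trials) / 100

print("variance at 1000 reads:", full.var())      # ~1.6e-4
print("variance at 100 reads:", rarefied.var())   # ~1.6e-3, about ten-fold larger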

The proposed solution to the rarefying problem is to take advantage of methods from the RNA-Seq world to determine differential taxonomic abundance, just as differential transcript abundance is calculated by those same methods.  The R package phyloseq provides a container for taxonomic datasets from QIIME (in the BIOM format), along with a convenient set of extensions that allow analysis of these data with RNA-Seq methods like edgeR and DESeq2.

Should you trim your RNA-Seq reads?

According to a new paper, basically, no.   Actually that is an oversimplification, but the authors find that quality trimming of RNA-Seq reads results in skewed gene expression estimates for up to 10% of genes.   Furthermore, the authors claim that:

“Finally, an analysis of paired RNA-seq/microarray data sets suggests that no or modest trimming results in the most biologically accurate gene expression estimates.”

First, the authors show how aggressive trimming affects mappability in Figure 2:

Influence of quality-based trimming on mappability of reads.

You can see that as the quality threshold becomes more severe (approaching Q40), the number of RNA-Seq reads remaining drops off considerably while the overall mappability percentage increases.  You might think this would be a good thing, but it leads to problems, as shown in Figure 4 of the paper:

Isoform and gene expression levels after trimming.

In panel (a) you can see how increasingly aggressive trimming thresholds lead to larger differences in expression estimates between untrimmed and trimmed data (red dots).  Panels (b) and (c) also show that the number of biased isoforms and genes, respectively, increases dramatically as one approaches the Q40 threshold.

One way to correct this bias is to apply a length filter to the quality-trimmed RNA-Seq reads.  In Figure 5, the authors show that this can remove much of the bias in gene expression estimates:

Isoform and gene expression levels after length-filtering.

In panels (b-d) it is clear that as the length filter increases toward 36 bp, the number of biased expression estimates drops rapidly.  There seems to be a sweet spot around L20, where you get most of the decrease in bias while keeping as many reads as possible.

Taken together, these results suggest that aggressive trimming can strongly bias gene expression estimates through the incorrect alignment of the short reads that quality trimming leaves behind.  A secondary length-filtering step can mitigate some of the damage.  In the end, whether to trim depends on your project type and goals.  If you have tons of reads, some modest trimming plus length filtering may not be too destructive.  Conversely, if your data are initially of low quality, trimming may be necessary to salvage usable reads.  Either way, you should be restrained in your trimming and, if possible, look at the resulting read-length distributions before deciding on quality thresholds for your project.
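
As a practical illustration of the “modest trimming plus length filter” advice, here is a minimal sketch in Biopython.  It uses a deliberately simple trimming rule (truncate at the first base below Q20) and the ~L20 length cutoff discussed above; it is not the method from the paper, and the input file name is a placeholder.

from Bio import SeqIO

def trim_low_quality(record, q_cutoff=20):
    '''Truncate a read at the first base whose quality drops below q_cutoff.'''
    quals = record.letter_annotations["phred_quality"]
    for i, q in enumerate(quals):
        if q < q_cutoff:
            return record[:i]     # slicing a SeqRecord also slices its qualities
    return record

def trim_and_length_filter(records, q_cutoff=20, min_len=20):
    '''Yield trimmed reads, dropping any that end up shorter than min_len.'''
    for rec in records:
        trimmed = trim_low_quality(rec, q_cutoff)
        if len(trimmed) >= min_len:
            yield trimmed

n_kept = SeqIO.write(
    trim_and_length_filter(SeqIO.parse("sample.fastq", "fastq")),
    "sample.trimmed.fastq",
    "fastq",
)
print(n_kept, "reads kept after trimming and length filtering")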

Filtering variants for cancer mutational signature analysis

Recently, I’ve been working to help prepare a manuscript on vestibular schwannomas (VS), a type of benign tumor of the myelin-forming cells along the nerves of the ear.  I’ve been thinking a lot about strategies for filtering exome variant calls to feed into mutational signature analysis.

Mutational signatures are important because they describe the types of mutational processes operating on the genomes of tumor cells.  Many of these processes are known (see the COSMIC database); however, some are entirely novel.  The variants used for calculating such signatures are somatic in nature and have to be carefully curated from the raw variant calls you get from a pipeline like GATK.

Looking at the existing literature, I find that there is no common or “best practices” methodology for filtering variants in whole-exome data.  Some groups are very stringent, others less so.  In most cases, the first step is simply to subtract the normal variant calls from the tumor calls.  However, there are further filtering steps that should be undertaken.

If I had to summarize the commonalities among the somatic variant filtering approaches in the literature, they would include the following (a small code sketch follows the list):

1) remove variants that are present in dbSNP, the 1000 Genomes Project, or other non-cancer exome data
2) keep only variants in coding regions (exons) or splice sites
3) require that a variant appear in more than X reads in the tumor and in fewer than X reads in the normal (generally ~5 and ~2, respectively)
4) subtract the “normal” calls from the “tumor” calls (either pooled normals or the paired normal)
5) require a minimum depth of coverage at the variant position (usually > 10X)
6) discard reads from low mapping quality (MQ) regions
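
As a rough sketch of what criteria 3 and 5 might look like in code, here is a minimal example using pysam on a somatic VCF.  The file name, the sample names (TUMOR/NORMAL), and the AD/DP FORMAT fields are assumptions that depend on the caller used, and the thresholds are the ballpark values from the list above.

import pysam

vcf = pysam.VariantFile("tumor_normal.somatic.vcf.gz")   # hypothetical input file
kept = []
for rec in vcf:
    tumor = rec.samples["TUMOR"]                          # assumed sample names
    normal = rec.samples["NORMAL"]
    try:
        tumor_alt = tumor["AD"][1]     # reads supporting the alternate allele in the tumor
        normal_alt = normal["AD"][1]   # alt-supporting reads in the normal
        depth = tumor["DP"]            # total depth at the variant position
    except (KeyError, TypeError, IndexError):
        continue                       # skip records missing AD/DP annotations
    # Criterion 5: minimum coverage; criterion 3: tumor vs. normal read support.
    if depth is not None and depth >= 10 and tumor_alt >= 5 and normal_alt <= 2:
        kept.append(rec)

print(len(kept), "variants pass the read-support and depth filters")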

Some papers only consider non-synonymous variants, but for mutational signatures it makes sense to me to take all validated variants (especially in exome data, because you are starting with fewer raw variant calls than with whole-genome data).

As for the actual number of variants “fed” into the mutational signature analysis, most papers (surprisingly) do not report this directly.  If you dig around in the supplementary information, you can sometimes find it indirectly.

It looks like, generally, the number of variants is somewhere around 10,000 for papers dealing with specific tumor types (not pan-cancer analyses of public databases). Several papers end up with ~1000 variants per tumor (ranging from 1,000 up to 7,000).  So with 10 tumors sequenced, that would be 10,000 filtered, high-confidence SNVs.

If you’re working on exome mutational signature analysis and you have your own filtering criteria, I’d love for you to share it in the comments.

Hands-on with cancer mutational signatures

In my last post, I wrote about the biological context of mutational signatures in cancer and how a recent series of papers has addressed this notion by creating an algorithm for extracting signatures from a catalogue of tumor SNPs.

Figure 2 from the mutational signatures paper (Alexandrov et al., Nature 2013).

In this follow-up post, I wanted to offer practical advice, based on my experience, about how to prepare data for mutational signature analysis and how to interpret the results.

First, in order to analyze your SNPs for mutational signatures, you need to derive the trimer (trinucleotide) context of each SNP (i.e., the upstream and downstream bases flanking it), for technical reasons described in the paper linked above.

To do this, a reference genome must be queried with the position of each SNP to find the bases adjacent to it.  One could either query a remote database, like Entrez Nucleotide, or download an entire 40GB reference genome and search it locally.

In my code, I opted for the former option:  querying the NCBI/Entrez Nucleotide database using HTTP requests.   The advantage of this approach is that I can reuse the same code to query multiple reference genomes (e.g., hg38 vs. hg19), depending on the changing needs of future projects.

The relevant section of code is as follows:

from Bio import Entrez, SeqIO

# NCBI requires an email address for E-utilities requests.
Entrez.email = "your.name@example.org"

def get_record(chrom, pos):
    '''Fetch the SNP base plus one base on either side from Entrez Nucleotide.'''
    try:
        # hg19_chrom maps chromosome names to Entrez Nucleotide IDs (defined elsewhere).
        handle = Entrez.efetch(db="nucleotide",
                    id=hg19_chrom[chrom],
                    rettype="fasta",
                    strand=1,
                    seq_start=int(pos) - 1,   # one base upstream of the SNP
                    seq_stop=int(pos) + 1)    # one base downstream of the SNP

        record = SeqIO.read(handle, "fasta")
        handle.close()
    except Exception:                         # e.g., a network error or failed fetch
        return None

    return record


You can see from the code that I am using a dictionary (‘hg19_chrom’)  to translate between chromosome numbers and their Entrez Nucleotide IDs in the eFetch request.
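
To illustrate, a hypothetical entry and call might look like the following (the accession follows the RefSeq naming pattern for GRCh37/hg19 chromosomes and the coordinate is just an example; the real script defines hg19_chrom for all chromosomes):

# Example hg19_chrom entry: chromosome 7 of GRCh37/hg19.
hg19_chrom = {"chr7": "NC_000007.13"}

record = get_record("chr7", 140453136)     # an example hg19 coordinate
if record is not None:
    trimer = str(record.seq).upper()       # upstream base + SNP position + downstream base
    print(trimer)                          # the three-base context used downstream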

The disadvantage of this approach is that the Entrez E-utilities limit the user to three queries per second (in fact, this limit is enforced in Biopython).  Even with my mediocre coding skills, this turns out to be the rate-limiting step.  Thus, the script has to run for hours for any sizable number of SNPs (~50,000 SNPs would take ~4.6 hours).  However, it’s easy to let the script run overnight, so this wasn’t a deal breaker for me.

In the third and final post on this topic, I will address how to create the MatLab .mat file from the output of this script and how to compare the signatures generated by the MatLab framework to the COSMIC reference database in an unbiased way.


Mutational signatures in cancer with DNA-Seq

A recent collaboration with a clinician here at the University of Iowa Hospitals and Clinics introduced me to the idea of mutational signatures in cancer.  Characterizing mutational signatures is made possible by the falling cost and increasing accuracy of whole-genome sequencing methods.  Tumors are sequenced across the entire genome, and the catalog of somatic mutations (i.e., SNPs) is used to compute the mutational signatures of the tumor’s genome.

The idea is that the collection of somatic mutations found in a tumor is the result of a variety of defective DNA-repair or DNA-replication machinery combined with the action of known or unknown mutagens and environmental exposures.  These processes operate over time and leave a “footprint” in the tumor DNA that can be examined.  The sum of all of the mutational processes operating within a tumor cell is a distinct mutational “signature” that differs by tumor type.

For example, in lung cancer, the bulk of somatic mutations are C>A transversions resulting from chronic exposure to tobacco smoke.  In melanoma, the predominant mutation types are C>T and CC>TT at dipyrimidines, which are associated with UV-light exposure.  And in colorectal cancer, defective DNA mismatch repair contributes the majority of the mutations.

Mutational signature of a cancer generated by the operation of several mutational processes over time. From http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3990474/figure/fig0005/, used under a CC BY 3.0 license.

A recent paper in Nature has formalized this notion of mutational signatures in tumors and provided a mathematical framework (written in MatLab) for assessing how many, and which, signatures are operating within an uncharacterized tumor type (generally there are between 2 and 6 processes).

In the paper, the authors analyzed almost 5 million somatic cancer SNPs and identified 21 unique signatures of mutational processes through a mathematical process of deconvolution, followed by experimental validation.  A curated catalog of the most current signatures based on available sequence data can be found at the COSMIC database.
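
The deconvolution at the heart of the framework is a form of non-negative matrix factorization (NMF).  The published implementation is the MatLab framework mentioned above, but the basic idea can be sketched in a few lines of Python with scikit-learn.  The catalogue below is simulated stand-in data, and the number of signatures is chosen arbitrarily here (the real framework selects it by signature stability and reconstruction error).

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Stand-in mutation catalogue: 30 tumors x 96 trinucleotide-context mutation classes.
catalogue = rng.poisson(5, size=(30, 96))

n_signatures = 4
model = NMF(n_components=n_signatures, init="nndsvda", max_iter=1000)
exposures = model.fit_transform(catalogue)   # per-tumor contribution of each signature
signatures = model.components_               # n_signatures x 96 signature matrix

# Normalize each signature so its 96 channels sum to 1 (probability form).
signatures = signatures / signatures.sum(axis=1, keepdims=True)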

In part 2 of this post, I’ll go into more detail on the mutational signatures and link to some Python code I’ve written to help get flat-file lists of SNPs into the correct form for easy input into the MatLab framework.