Genomic landscape of metastatic cancer

Integrative genomics sheds new light on metastatic cancer

A new study from the University of Michigan Comprehensive Cancer Center has just been released, representing an in-depth look at the genomics of metastatic cancer, as opposed to primary tumors.  The work involved DNA- and RNA-Seq of solid metastatic tumors from 500 adult patients, as well as sequencing of matched normal tissue to distinguish somatic from germline variants.

tl;dr:

A good overview of the study at the level of the scientific layperson can be found in this press release.  It summarizes the key findings (many of which are striking and novel):

  • A significant increase in mutational burden of metastatic tumors vs. primary tumors.
  • A long-tailed distribution of mutational frequencies (i.e., few genes were mutated at a high rate, yet many genes were mutated).
  • About twelve percent of patients harbored germline variants that are suspected to predispose to cancer and metastasis, and 75% of those variants were in DNA repair pathways.
  • Across the cohort, 37% of patient tumors harbored gene fusions that either drove metastasis or suppressed the cells' anti-tumor functions.
  • RNA-Seq showed that metastatic tumors are significantly de-differentiated, and fall into two classes: proliferative and EMT-like (epithelial-to-mesenchymal transition).

A brief look at the data

This study provides a high-level view of the mutational burden of metastatic cancer vis-à-vis primary tumors.  Figure 1C from the paper compares mutation rates across tumor types between TCGA (The Cancer Genome Atlas) primary tumors and the MET500 metastatic cohort.

Mutational burden in metastatic cancer compared to primary tumors.


Here we can see that in most cases (colored bars), metastatic cancers had statistically significant increases in mutation rate.  The figure also shows that tumor types with low mutation rates in the primary setting "sped up" far more than primary tumor types that already had high rates.

Supplemental Figure 1d (below) shows how often key tumor suppressors and oncogenes are altered in metastatic vs. primary tumors.  TP53 is altered more frequently in metastatic thyroid, colon, lung, prostate, breast, and bladder cancers.  PTEN is mutated more often in prostate tumors.  GNAS and PIK3CA are mutated more often in thymoma, although this finding does not reach significance.  KRAS is altered more often in colon and esophageal cancers, but again, these differences do not reach significance after multiple-testing correction.

Comparison of genetic alteration frequencies in metastatic and primary tumors.


One other figure I’d like to highlight briefly is Figure 3C from the paper, shown below:

Molecular structure of novel, potentially activating gene fusions in the metastatic tumors.

I wanted to mention this figure to illustrate the terrifying complexity of cancer.  Knowing which oncogenes are mutated, in which positions, and what effects those mutations have on gene expression networks is not enough to understand tumor evolution and metastasis.  There are also new genes being created that do totally new things, and these are unique on a per-tumor basis.  None of the structures above had ever been observed before, and yet they were all seen in a survey of just 500 cancers.  In fact, ~40% of the tumors in the study cohort harbored at least one fusion suspected to be pathogenic.

There is much more to this work, but I will leave it to interested readers to go read the entire study.  This work is tremendously important and novel, and it represents the future of personalized medicine: a patient undergoing treatment for cancer will have their tumor or tumors biopsied and sequenced repeatedly over time to understand how the disease has evolved and is evolving, and to ascertain which weaknesses can be exploited for successful treatment.

Helpful kallisto & sleuth RNA-Seq tutorials and blogs

Kallisto and sleuth are recently developed tools for the quantification and statistical analysis of RNA-Seq data.  The tools are fast and accurate, relying on pseudoalignment rather than traditional alignment.  They seem to be gaining popularity owing to their ease of use and a speed that makes them practical to run on a laptop.

One thing that has been lacking is proper documentation for these tools.  That appears to be changing, with more tutorials and walkthroughs becoming available over the past few months.

I wanted to aggregate some of those here for my own reference and also to help others who may be looking for guidance.

kallisto (rapid RNA-Seq read quantification)

kallisto github documentation

kallisto walkthroughs:

kallisto getting started tutorial


kallisto paper

kallisto pachter lab introductory blog post

kallisto videos:

kallisto introduction video tutorial

sleuth (statistical modeling and analysis)

sleuth documentation

sleuth pachter lab introductory blog post

sleuth walkthroughs:

getting started

batch effects

differential analysis with multiple conditions

multiple combined experiments

timecourse analysis

sleuth tutorial blog posts:

using kallisto and sleuth (ACHRI bioinformatics)

sample swaps and batch effects (ACHRI bioinformatics)

advanced RNA-Seq modeling of hybrid qualitative/quantitative factors (ACHRI bioinformatics)

sleuth videos:

intro to sleuth live Shiny R app

My favorite talks from GLBIO2017 in Chicago

GLBIO2017

I just got back from the Great Lakes Bioinformatics Conference 2017 (GLBIO2017) at the University of Illinois at Chicago (UIC) campus.  It was a great meeting, and I really enjoyed the quality of the research presented as well as the atmosphere of the campus and neighborhood.

I was very surprised by just how nice Chicago's "West Loop" neighborhood, near Randolph Street and down towards Greektown, really is.  I had some great meals, including a memorable Italian dinner at Formento's.

But the purpose of this post is to briefly describe a few of my favorite talks from the meeting.  So here goes, in no particular order:

Kevin White, Tempus Labs:

I was really impressed with Kevin White's GLBIO2017 talk and demo of his company's technology (despite the ongoing technical A/V issues!).  Tempus Labs is a clinical sequencing company, but also an informatics company focused on cancer treatment: it seeks to pull together the disparate pieces of patient data that float around in EHR databases and are oftentimes not connected in meaningful ways.

The company sequences patient samples (whole exome and whole genome) and then also hoovers up reams of patient EHR data using Optical Character Recognition (OCR), Natural Language Processing (NLP), and human expert curation to turn the free-form flat text of medical records from different clinics and systems into a form of “tidy data” that can be accessed from an internal database.

Then, clinical and genomic data are combined for each patient in a deep-learning system that looks at treatments and outcomes for other similar patients and presents the clinician with charts that show how patients in similar circumstances fared with varying treatments, given certain facts of genotype and tumor progression, etc…  The system is pitched as “decision support” rather than artificial “decision making.”  That is, a human doctor is still the primary decider of treatment for each patient, but the Tempus deep learning system will provide expert support and suggest probabilities for success at each critical care decision point.

The system also tracks ongoing clinical trials and presents relevant ones to the clinician, so that patients can be informed of potentially beneficial trials they may join.

Murat Eren, merenlab.org

Murat Eren's talk on tracking microbial colonization in fecal microbiota transplantation (i.e., "poop pills") was excellent and very exciting.  Although the "n" was small (just 4 donors and 2 recipients), he showed some very interesting results from transferring fecal microbiota (FM) from healthy individuals to recipients with inflammatory bowel disease.

Among the interesting results is the fact that he was able to assemble 97 microbial genomes from the donor samples.  Following the recipients at 4 and 8 weeks post-transplant showed that these genomes can be classed into those that colonize both recipients, those that colonize one recipient or the other, and those that fail to colonize either.  Taxonomy alone did not explain why some microbes colonized easily while others failed.

He also showed that 8 weeks post-transplant the recipients' symptoms had improved.  Moreover, in a PCA of gut community composition that included 151 Human Microbiome Project (HMP) samples, the recipients had moved into the "healthy" HMP cluster, having been extreme outliers on day 0.

He also investigated differential gene-function enrichment between the permissive colonizers and the microbes that never colonized the recipients' guts, and found that sporulation genes may be a negative factor in colonization success.  He proposed that the recent and notable failure of the Seres microbiome drug in clinical trials may be owing to the fact that the company killed the live cultures in favor of more stable spore-forming strains when formulating the drug; his work suggests that these strains are less successful at colonizing new hosts.

Bo Zhang, 3D genome browser

With the ever-increasing volume and complexity of genomic and regulatory data, there is a need for accessible interfaces to it.  Bo Zhang's group at Penn State has built a new type of genome browser that focuses on the 3D structure of the genome, pulling together disparate data types including chromatin interaction data, ChIP-Seq, RNA-Seq, etc…  You can browse a complete view of the regulatory landscape and 3D architecture of any region of the genome, and check the expression of any queried gene across hundreds of tissue/cell types measured by the ENCODE consortium.  On the virtual 4C page, they provide multiple methods to link distal cis-regulatory elements with their potential target genes, including virtual 4C, ChIA-PET, and cross-cell-type correlation of proximal and distal DHSs.

The 3D Genome Browser flow chart.


All in all, GLBIO2017 was a very enjoyable and informative meeting where I met a lot of great colleagues and learned much.  I am looking forward to next year!

Search speed comparison: naive exact vs. Boyer-Moore vs. k-mer index

Recently, I've been working my way through Ben Langmead's excellent introduction, "Algorithms for DNA Sequencing," on Coursera.  The class is a fascinating and well-taught intro to DNA short-read alignment and assembly methods.

As part of the course, we implement or modify Python code for several simple matching algorithms, including the "naive exact" matching (NEM) method, the Boyer-Moore (BM) method, and a k-mer index approach.

I was curious about speed, so I made a figure showing the computational time that each approach takes.  P and T refer to the lengths of the short read to be aligned and of the genome to align against, respectively.

Figure 1. Comparing the speed of the NEM, BM, and K-mer search methods on long and short patterns (P) and texts (T). The y-axis is on a log-scale.

Note that the y-axis is on a log scale, in units of microseconds.  Right away, it is obvious that the k-mer index method is orders of magnitude faster than 'online' methods like NEM and BM.

Also of interest is the fact that as the pattern gets shorter, the advantage of BM's preprocessing of the pattern shrinks.  You can see that going from a pattern length of 30 down to 11 negates any advantage of BM searching.
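To make the comparison concrete, here is a minimal Python sketch of the two extremes, the online NEM scan versus an offline k-mer index.  This is my own reconstruction of the idea, not the course's actual code, and the toy "genome" below is a stand-in rather than real sequence:

import time

def naive_exact(p, t):
    """Online search: slide p along t one offset at a time."""
    occurrences = []
    for i in range(len(t) - len(p) + 1):
        if t[i:i + len(p)] == p:
            occurrences.append(i)
    return occurrences

class KmerIndex:
    """Offline search: preprocess t once into a k-mer -> offsets table."""
    def __init__(self, t, k=8):
        self.t, self.k = t, k
        self.index = {}
        for i in range(len(t) - k + 1):
            self.index.setdefault(t[i:i + k], []).append(i)

    def query(self, p):
        # Look up the pattern's first k-mer, then verify the rest of p.
        candidates = self.index.get(p[:self.k], [])
        return [i for i in candidates if self.t[i:i + len(p)] == p]

t = "ACGTTAGC" * 125_000              # ~1 Mb toy text T
p = "ACGTTAGCACGTTAGCACGTTAGCACGTT"   # length-29 toy pattern P

idx = KmerIndex(t)                    # one-time preprocessing cost
start = time.perf_counter()
hits_naive = naive_exact(p, t)
split = time.perf_counter()
hits_index = idx.query(p)
end = time.perf_counter()
assert hits_naive == hits_index
print(f"naive: {(split - start) * 1e6:.0f} us, index: {(end - split) * 1e6:.0f} us")

The asymmetry in the figure falls out of the structure: the index pays its cost once at build time, after which a query touches only the candidate offsets sharing the pattern's first k-mer, while NEM (and BM, preprocessing aside) must walk the full length of T on every search.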


Beyond Benjamini-Hochberg: Independent Hypothesis Weighting (IHW) for multiple test correction

Multiple hypothesis testing is a critical part of modern bioinformatic analysis.  When testing for significant changes between conditions across many thousands of genes, for instance in an RNA-Seq experiment, the goal is to maximize the number of discoveries while controlling false discoveries.

Typically, this is done with the Benjamini-Hochberg (BH) procedure, which adjusts p-values so that no more than a set fraction (usually 5%) of discoveries are expected to be false positives (FDR = 0.05).  The BH method is better powered and less stringent than family-wise error rate (FWER) control, and is therefore more appropriate for modern genomics experiments that make thousands of simultaneous comparisons.  However, BH is still limited by the fact that it uses only p-values to control the FDR, treating every test as equally powered.
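For concreteness, the BH step-up rule itself fits in a few lines.  Here is a minimal NumPy sketch (my own illustration of the textbook procedure, not the code of any particular package):

import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of discoveries at FDR level alpha (BH step-up)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Find the largest k such that p_(k) <= (k/m) * alpha ...
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    k = np.nonzero(below)[0].max() + 1 if below.any() else 0
    # ... and reject the k hypotheses with the smallest p-values.
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

Note that every test is treated identically here: the only input that matters is the p-value.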

A new method, Independent Hypothesis Weighting (IHW), aims to take advantage of the fact that individual tests may differ in their statistical properties, such as sample size, true effect size, signal-to-noise ratio, or prior probability of being false.  For example, in an RNA-Seq experiment, highly expressed genes may have better signal-to-noise than genes expressed at low levels.

The IHW method applies a weight (a non-negative number, with the weights constrained to average one across all tests) to each test in a data-driven way.  The input to the method is a vector of p-values (just as for BH/FDR) and a vector of continuous or categorical covariates (i.e., any data about each test that is assumed to be independent of the test's p-value under the null hypothesis).

From the paper linked above, Table 1 lists possible covariates:

Application                              | Covariate
Differential expression analysis         | Sum of read counts per gene across all samples [12]
Genome-wide association study (GWAS)     | Minor allele frequency
Expression-QTL analysis                  | Distance between the genetic variant and genomic location of the phenotype
ChIP-QTL analysis                        | Comembership in a topologically associated domain [16]
t-test                                   | Overall variance [9]
Two-sided tests                          | Sign of the effect
Various applications                     | Signal quality, sample size

In simplified form, the IHW method groups the tests based on the supplied covariate.  It then calculates the number of discoveries (rejections of the null hypothesis) under a candidate set of weights, and the weights are iterated until the method converges on the optimal weight for each covariate-based group, i.e., the weighting that maximizes overall discoveries.  Additional procedures are employed to prevent over-fitting of the data and to make the method scale easily to millions of comparisons.
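The core mechanic, rejecting based on weighted p-values, can be illustrated with a toy sketch that reuses the benjamini_hochberg function from above.  To be clear about what is assumed: this takes the per-test weights as given, whereas the real IHW package learns them from the covariate groups, and the example weights below are hypothetical:

import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted BH: apply BH to p_i / w_i; weights are assumed positive."""
    w = np.asarray(weights, dtype=float)
    w = w * len(w) / w.sum()    # enforce the mean(w) == 1 budget constraint
    return benjamini_hochberg(np.asarray(pvals, dtype=float) / w, alpha)

# Hypothetical two-group example: tests in group 0 (say, high-count genes)
# are believed to be better powered, so they get more of the weight budget.
# groups = np.array([...0/1 labels derived from a covariate...])
# w = np.where(groups == 0, 1.5, 0.5)
# discoveries = weighted_bh(pvals, w, alpha=0.05)

Up-weighted tests are rejected at a looser threshold and down-weighted tests at a stricter one, and because the weights average one, the overall FDR guarantee is preserved.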

The authors of the method claim that IHW is better powered than BH for making discoveries in genomic data.  It can be accessed from within Bioconductor.


Unix one-liner to convert VCF to Oncotator format

Here is a handy Unix one-liner to process MuTect2 output VCF files into the 5-column, tab-separated format required by Oncotator for input.  (Oncotator is a web-based application that annotates human genomic point mutations and indels with transcripts and consequences.)  The output of Oncotator is a MAF-formatted file that is compatible with MutSigCV.

#!/bin/bash
FILES='*.vcf.gz'
for file in $FILES
do
    # keep PASS variants on named chromosomes; reshape VCF columns 1-5
    # into Oncotator's chr / start / end / ref / alt, tab-separated
    zcat "$file" | grep -v "GL000" | grep -v "FILTER" | grep "PASS" \
        | cut -d$'\t' -f 1-5 \
        | awk 'BEGIN{OFS="\t"} {$3=$2; $1="chr"$1; print}' > "$file.tsv"
done

Breaking this down, we have:

"zcat $file": write each line of the gzipped file to stdout

"grep -v 'GL000'": exclude variants on unplaced GL000* contigs, i.e., keep only variants that map to a named chromosome

"grep -v 'FILTER'": exclude header lines (including the ##FILTER lines, which contain "PASS" and would otherwise slip through the next step)

"grep 'PASS'": keep only the variants that pass the MuTect2 filters

"cut -d$'\t' -f 1-5": cut on tabs and keep fields one through five (CHROM, POS, ID, REF, ALT)

"awk 'BEGIN{OFS="\t"} {$3=$2; $1="chr"$1; print}'": set column 3 equal to column 2 (start and end position are equal), prepend "chr" to column 1 (so 1 becomes chr1), and print with tabs as the output separator (awk's default output separator is a space, which would break Oncotator's tab-separated input)
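To sanity-check the reshaping, a hypothetical PASS record (made-up coordinates, shown only for illustration) would be transformed like this:

input VCF fields 1-5:    1       12345   .       C       T
Oncotator 5-column row:  chr1    12345   12345   C       T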