Are differentially expressed (DE) genes also phenotypically important?
A new paper in Cell Reports uses RNA-seq and Tn-seq (the “Tn” in Tn-seq stands for transposon) to map the transcriptional and fitness changes in bacterial gene networks in response to stressors such as nutrient depletion and antibiotics.
The transcriptional response measures changes in gene expression, as read out by RNA-seq. The fitness or phenotypic response describes how important each gene is to surviving the stress. This is measured by a different assay, Tn-seq, which takes advantage of transposon insertion to selectively inactivate genes across the bacterial genome. Genes whose insertion mutants are depleted under the stressor condition are scored as “high fitness” (the bacteria lacking those genes died under stress).
First, before even considering DE genes, they found that there is no correlation between a gene’s transcriptional abundance (not fold change) and its fitness. While most high-fitness genes were also highly abundant, many more high-abundance genes were not high-fitness. Thus, there is no useful relationship between a gene’s abundance and its fitness.
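If you wanted to check for this kind of relationship in your own data, a minimal sketch in R is a rank-based correlation test. The per-gene vectors “abundance” and “fitness” below are hypothetical placeholders, not the paper’s data:

# abundance: mean expression per gene (e.g., normalized counts)
# fitness: per-gene fitness score from the Tn-seq assay
cor.test(log2(abundance + 1), fitness, method = "spearman")   # rank correlation is robust to skewed abundances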
Superficially, however, one might expect that genes that show large changes in abundance (i.e., large DE) in response to stressors would also be critical for the phenotypic expression of the bacteria’s stress response. That is, those genes with high differential expression would confer high fitness on the cell.
Testing the DE / high fitness relationship
As it turns out, little is actually known about this, and in this paper Opijnen et al. set out to test whether high-DE genes are in fact also high-fitness genes.
The researchers compared differential expression in response to a reduced-nutrient environment (a type of minimal medium) and to antibiotic stress against per-gene fitness. They found no correlation:
You can see from the figure that high-fitness genes (those on the far left of the x-axis) are not also high-DE genes. There are no genes in the upper-left quadrant of either plot, showing that fitness and high DE are uncorrelated under either nutrient or antibiotic stress.
Gene networks co-localize high DE and high fitness genes
Even though the authors found no correlation between DE and fitness changes for individual genes, they took the next step and constructed a metabolic gene network for the bacterium S. pneumoniae. Mapping the DE and fitness changes onto this network revealed a key finding: the high-DE genes co-localize in pathways with the high-fitness genes. That is, a biochemical pathway might have some members that are high DE and others that are high fitness. An example of this is the shikimate pathway shown below:
The first half of the pathway consists of six genes in a row with significant fitness changes (red boxes). The next seven genes, from the Trp branchpoint (blue dashed line) onward, are not high-fitness but do show high differential expression, with four reaching statistical significance. It is not really understood why this happens, but the authors theorize that keeping the bottom half of the pathway under transcriptional control allows the bacterium to regulate flux into Trp synthesis and other amino acid sub-pathways while always maintaining a stable supply of the starting intermediates (the product of SP1374) through reversible, end product-regulated biosynthesis.
Transcriptomic data should not be used as a surrogate for functional importance
The authors point out that reliance on transcriptional abundance changes as markers of functional importance in bacteria, particularly in drug discovery efforts, may be misguided and needs to be revisited in light of this and other studies. They also point out that the response to an “orderly” stressor (like nutrient depletion), for which the bacterium has evolved, is likely to be much more clearly defined on a network basis, while the response to a disorderly stressor (a novel antibiotic, for example) may provoke a disorderly transcriptional and fitness response that can’t easily be interpreted from network analysis. This has important implications for the design of next-generation antibiotics.
Integrative genomics sheds new light on metastatic cancer
A new study from the University of Michigan Comprehensive Cancer Center has just been released that represents an in-depth look at the genomics of metastatic cancer, as opposed to primary tumors. This work involved DNA- and RNA-Seq of solid metastatic tumors of 500 adult patients, as well as matched normal tissue sequencing for detection of somatic vs. germline variants.
A good overview of the study at the level of the scientific layperson can be found in this press release. It summarizes the key findings (many of which are striking and novel):
- A significant increase in mutational burden of metastatic tumors vs. primary tumors.
- A long-tailed distribution of mutational frequencies (i.e., few genes were mutated at a high rate, yet many genes were mutated).
- About twelve percent of patients harbored germline variants that are suspected to predispose to cancer and metastasis, and 75% of those variants were in DNA repair pathways.
- Across the cohort, 37% of patient tumors harbored gene fusions that either drove metastasis or suppressed the cells’ anti-tumor functions.
- RNA-Seq showed that metastatic tumors are significantly de-differentiated, and fall into two classes: proliferative and EMT-like (epithelial-to-mesenchymal transition).
A brief look at the data
This study provides a high-level view of the mutational burden of metastatic cancer vis-a-vis primary tumors. Figure 1C from the paper compares mutation rates across tumor types between the TCGA (The Cancer Genome Atlas) primary tumors and the MET500 (the metastatic cohort).
Here we can see that in most cases (colored bars), metastatic cancers had statistically significant increases in mutational rates. The figure shows that tumors with low mutational rates “sped up” a lot as compared with those primary tumor types that already had high rates.
Supplemental Figure 1d (below) shows how often key tumor suppressors and oncogenes are altered in metastatic cancer vs. primary tumors. TP53 is found to be altered more frequently in metastatic thyroid, colon, lung, prostate, breast, and bladder cancers. PTEN is mutated more in prostate tumors. GNAS and PIK3CA are mutated more in thymoma, although this finding doesn’t reach significance. KRAS is altered more in colon and esophagus cancers, but again, these findings don’t reach significance after correction for multiple testing.
One other figure I’d like to highlight briefly is Figure 3C from the paper, shown below:
I wanted to mention this figure to illustrate the terrifying complexity of cancer. Knowing which oncogenes are mutated, in which positions, and the effects of those mutations on gene expression networks is not enough to understand tumor evolution and metastasis. There are also new genes being created that do totally new things, and these are unique on a per tumor basis. None of the above structures have ever been observed before, and yet they were all seen from a survey of just 500 cancers. In fact, ~40% of the tumors in the study cohort harbored at least one fusion suspected to be pathogenic.
There is much more to this work, but I will leave it to interested readers to go read the entire study. I think this work is obviously tremendously important and novel, and represents the future of personalized medicine. That is, a patient undergoing treatment for cancer will have their tumor or tumors biopsied and sequenced cumulatively over time to understand how the disease has evolved and is evolving, and to ascertain what weaknesses can be exploited for successful treatment.
Kallisto and sleuth are recently developed tools for the quantitation and statistical analysis of RNA-Seq data. The tools are fast and accurate, relying on pseudoalignment concepts rather than traditional alignment. They seem to be gaining popularity owing to ease of use and speed that makes them accessible to users on a laptop.
One thing that has been lacking is proper documentation of these tools. This appears to be changing, as more tutorials and walkthroughs have become available over the past few months.
I wanted to aggregate some of those here for my own reference and also to help others who may be looking for guidance.
kallisto (rapid RNA-Seq read quantification)
sleuth (statistical modeling and analysis)
sleuth tutorial blog posts:
Multiple hypothesis testing is a critical part of modern bioinformatic analysis. When testing for significant changes between conditions across many thousands of genes, for instance in an RNA-Seq experiment, the goal is to maximize the number of discoveries while controlling the false discoveries.
Typically, this is done with the Benjamini-Hochberg (BH) procedure, which adjusts p-values so that no more than a set fraction (usually 5%) of discoveries are expected to be false positives (FDR = 0.05). The BH method is less stringent, and therefore better powered, than family-wise error rate (FWER) control, making it more appropriate for modern genomics experiments that make thousands of simultaneous comparisons. However, the BH method is still limited by the fact that it uses only p-values to control the FDR, treating every test as equally powered.
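For reference, the BH adjustment is a one-liner in base R; a minimal sketch, assuming “pvals” is a hypothetical vector holding one p-value per gene:

padj <- p.adjust(pvals, method = "BH")   # Benjamini-Hochberg adjusted p-values
sum(padj < 0.05)                         # number of discoveries at FDR = 0.05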
A new method, Independent Hypothesis Weighting (IHW), aims to take advantage of the fact that individual tests may differ in their statistical properties, such as sample size, true effect size, signal-to-noise ratio, or prior probability of being false. For example, in an RNA-Seq experiment, highly expressed genes may have a better signal-to-noise ratio than genes expressed at low levels.
The IHW method applies a weight (a non-negative number; the weights are constrained to average to one across all tests) to each test in a data-driven way. The input to the method is a vector of p-values (just like BH/FDR) and a vector of continuous or categorical covariates (i.e., any data about each test that is assumed to be independent of the test p-value under the null hypothesis).
From the paper linked above, Table 1 lists possible covariates:
| Application | Covariate |
| --- | --- |
| Differential expression analysis | Sum of read counts per gene across all samples |
| Genome-wide association study (GWAS) | Minor allele frequency |
| Expression-QTL analysis | Distance between the genetic variant and genomic location of the phenotype |
| ChIP-QTL analysis | Comembership in a topologically associated domain |
| t-test | Overall variance |
| Two-sided tests | Sign of the effect |
| Various applications | Signal quality, sample size |
In simplified form, the IHW method takes the tests and groups them based on the supplied covariate. It then calculates the number of discoveries (rejections of the null hypothesis) using a set of weights. The weights are iterated until the method converges on the optimal weights for each covariate-based group that maximize the overall discoveries. Additional procedures are employed to prevent over-fitting of the data and to make the procedure scale easily to millions of comparisons.
The authors of the method claim that IHW is better powered than BH for making empirical discoveries when working with genomic data. It can be accessed from within Bioconductor.
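As a rough sketch of what an IHW analysis might look like in R (the vectors “pvals” and “mean_counts” are hypothetical placeholders: one p-value and one covariate value, such as mean normalized counts, per gene):

library(IHW)   # Bioconductor package
ihw_res <- ihw(pvals, covariates = mean_counts, alpha = 0.05)   # data-driven weights learned from covariate groups
head(adj_pvalues(ihw_res))   # weighted adjusted p-values
rejections(ihw_res)          # number of discoveries at FDR = 0.05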
According to a new paper, basically, no. Actually that is an oversimplification, but the authors find that quality trimming of RNA-Seq reads results in skewed gene expression estimates for up to 10% of genes. Furthermore, the authors claim that:
“Finally, an analysis of paired RNA-seq/microarray data sets suggests that no or modest trimming results in the most biologically accurate gene expression estimates.”
First, the authors show how aggressive trimming affects mappability in Figure 2:
You can see that as the threshold becomes more severe (approaching Q40), the number of RNA-Seq reads remaining drops off considerably, while the overall % mappability increases. On its face, you’d think this would be a good thing, but it leads to problems, as shown in Figure 4 of the paper:
Here you can see in (a) how increasingly aggressive trimming thresholds lead to increased differential expression estimates between untrimmed and trimmed data (red dots). Panels (b) and (c) also show that the number of biased isoforms and genes, respectively, increases dramatically as one approaches the Q40 threshold.
One way to correct this bias is to apply a length filter to the quality-trimmed RNA-Seq reads. In Figure 5, the authors show that this removes much of the bias in gene expression estimates:
Now in (b-d) it is clear that as the length filter increases to 36, the number of biased expression estimates goes rapidly down. There seems to be a sweet spot around L20, where you get the maximum decrease in bias while keeping as many reads as possible.
Taken together, the authors suggest that aggressive trimming can strongly bias gene expression estimates through the incorrect alignment of the short reads that quality trimming leaves behind. A secondary length-filtering step can mitigate some of the damage. In the end, whether to trim depends on your project type and goals. If you have tons of reads, some modest trimming plus length filtering may not be too destructive. Conversely, if your data are initially of low quality, trimming may be necessary to salvage usable reads. Either way, be restrained in your trimming and, if possible, look at the resulting read length distributions before settling on quality thresholds for your project.
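One quick way to do that sanity check is to plot the post-trimming read length distribution; here is a small sketch using the Bioconductor ShortRead package (the file name is a placeholder):

library(ShortRead)
reads <- readFastq("sample_trimmed.fastq")      # hypothetical trimmed FASTQ file
summary(width(reads))                           # read lengths remaining after trimming
hist(width(reads), breaks = 50, main = "Post-trimming read lengths")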
I recently published a brief introduction to the theory and practice of using kallisto to quantify RNA-Seq data on a desktop/laptop computer. Enjoy!
I just posted a screencast format tutorial on sleuth for RNA-Seq analysis. Check it out:
From this excellent paper on biological variability in RNA-Seq experiments (bold highlights are mine):
“Biological variability has important implications for the design, analysis and interpretation of RNA-sequencing experiments. […] If only a few biological replicates are available, it will be impossible to estimate the level of biological variability in expression for each gene in a study. Supplementary Table 1 summarizes a large number of published RNA-sequencing studies over the past three years. In every case, except for the two studies we analyzed here, conclusions were based on a small number (n ≤ 2) of biological replicates. One goal of RNA-sequencing studies may be simply to identify and catalog expression of new or alternative transcripts. However, all of these studies make broader biological statements on the basis of a very small set of biological replicates.
Our analysis has two important implications for studies performed with a small number of biological replicates. First, significant results in these studies may be due to biological variation and may not be reproducible; and second, it is impossible to know whether expression patterns are specific to the individuals in the study or are a characteristic of the study populations. These ideas are now widely accepted for DNA microarray experiments, where a large number of biological replicates are now required to justify scientific conclusions. Our analysis suggests that as biological variability is a fundamental characteristic of gene expression, sequencing experiments should be subject to similar requirements.”
If you are doing RNA-Seq, be very vigilant in your experimental design and find a way to incorporate more replicates, even at the expense of testing fewer comparisons. It’s better to test one comparison (tissue X vs. Y, for example) with 5 or more replicates than to test three comparisons (tissue X vs. Y, Y vs. Z, and X vs. Z) with only 2 replicates for each tissue type.
The differential expression analysis program EBSeq produces a number of data objects as part of the workflow, but there aren’t many options for visualization of the data.
The authors suggest the use of heatmap.2 in R:
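I won’t reproduce their exact code here, but a minimal sketch of that approach looks something like this (assuming “NormMat” is a matrix of normalized counts, e.g. from GetNormalizedMat(), and “genes_of_interest” is a character vector of gene IDs you already care about; both names are placeholders):

library(gplots)
heatmap.2(NormMat[genes_of_interest, ],
          scale = "row",        # z-score each gene across samples
          trace = "none",
          Colv = FALSE,
          dendrogram = "row")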
However, this depends on knowing ahead of time your genes of interest. It is not practical to generate a heatmap with hundreds or thousands of DE genes.
I wanted to produce something approximating a volcano plot for EBSeq results. What I came up with initially was the following:
To make this plot, I had to grab some data arrays from the large object “EBOut” that is produced by calling the “EBTest” method and from the “GeneFC” object as below:
plot(GeneFC$PostFC, EBOut$PPDE, xlim=c(0,5), ylim=c(0,1), main="Control/Experimental FC vs. PPDE", sub=GeneFC$Direction, xlab="EBSeq Posterior Fold Change", ylab="EBSeq posterior prob of DE")
abline(h=0.95)
The “abline” command places a horizontal line at PPDE = 0.95; genes above it have a posterior probability of differential expression of at least 95%, which is equivalent to an FDR of 0.05.
If you want to inspect the plot interactively in R to identify gene names above the threshold and/or with large posterior fold changes you would use:
identify(GeneFC$PostFC, EBOut$PPDE, labels=names(GeneFC$PostFC))
To make it look more like a canonical volcano plot, I then tried:
plot(log2(GeneFC$RealFC), EBOut$PPDE, xlim =c(-5,5), ylim=c(0,1), main="Log2FoldChange vs. PPDE", xlab="EBSeq Log2 Fold Change", ylab="EBSeq PPDE")
Creating the following plot:
This is good, except I want to subset the data and add colors. To do this I need to create a new dataframe from the EBOut$PPDE and GeneFC$PostFC objects:
volc_df = data.frame(PPDE = EBOut$PPDE, PostFC = GeneFC$PostFC)   # name the columns explicitly so they can be referenced as PPDE and PostFC below
With everything in one dataframe, plotting and subsetting the data is easier. Inspired by this post at Stephen Turner’s “Getting Genetics Done” blog, I prepared my final colored volcano plot as follows:
with(volc_df, plot(log2(PostFC), PPDE, pch=20, main="Volcano Plot EBSeq", xlim=c(-5,5)))
abline(h=0.95)
# highlight significant genes (PPDE > 0.95): orange for smaller fold changes, red for |log2 FC| of 2 or more
with(subset(volc_df, PPDE > 0.95 & abs(log2(PostFC)) < 2), points(log2(PostFC), PPDE, pch=20, col="orange"))
with(subset(volc_df, PPDE > 0.95 & abs(log2(PostFC)) >= 2), points(log2(PostFC), PPDE, pch=20, col="red"))
The final plot looks like this: