I worked on a project recently looking at tissue-specific nuclease expression. I made an interactive heatmap from the enormous GTEx dataset showing just nuclease gene expression (in TPM) across more than 50 tissues in the human body. It’s fun to play around with the interactive plot. This is the way data should be presented in 2017. I used the Plotly Python API for the chart.
Unfortunately, Plotly now costs nearly $400/year if you want to use it for anything more than a few charts, and there is no free option for keeping sensitive research data private. There should be an exception for academic research, but as far as I know there isn’t one.
Are differentially expressed (DE) genes also phenotypically important?
A new paper in Cell Reports utilizes RNA-seq and Tn-seq (the “tn” in tn-seq stands for transposon) to map the transcriptional and fitness changes in bacterial gene networks in response to stressors, like nutrient depletion and antibiotics.
The transcriptional response measures changes in gene expression, as captured by RNA-seq. The fitness or phenotypic response describes how important each gene is to the response. This is measured by a different assay, Tn-seq, which takes advantage of transposon insertion to selectively inactivate genes in the bacterial genome. Genes that are depleted in the stressor condition are deemed “high fitness,” because bacteria lacking those genes died under stress.
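The depletion logic can be sketched in a few lines. This is a hypothetical simplification, not the paper's actual fitness metric: it just scores each gene by the log2 ratio of transposon-insertion read counts in the stress condition versus control, with strongly negative scores flagging candidate high-fitness genes.

```python
# Hypothetical sketch: summarizing Tn-seq insertion counts as a per-gene
# depletion score. Strongly negative scores mean mutants in that gene
# were lost under stress, i.e. the gene is a candidate high-fitness gene.
import math

def depletion_score(control_count, stress_count, pseudocount=1):
    """log2((stress + pc) / (control + pc)); strongly negative = depleted."""
    return math.log2((stress_count + pseudocount) / (control_count + pseudocount))

counts = {"geneA": (500, 490),   # unchanged -> score near 0
          "geneB": (800, 15),    # depleted under stress -> strongly negative
          "geneC": (50, 120)}    # enriched under stress -> positive

scores = {g: depletion_score(c, s) for g, (c, s) in counts.items()}
high_fitness = [g for g, sc in scores.items() if sc < -2]  # arbitrary cutoff
```

Real Tn-seq analyses normalize for library size and insertion density per gene; the pseudocount here simply guards against division by zero.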
First, before even considering DE genes, they found no correlation between a gene’s transcriptional abundance (not fold change) and its fitness. While most high-fitness genes were also highly abundant, many more high-abundance genes were not high-fitness. Thus, a gene’s abundance is not a useful predictor of its fitness contribution.
Superficially, however, one might expect that genes that show large changes in abundance (i.e., large DE) in response to stressors would also be critical for the phenotypic expression of the bacteria’s stress response. That is, those genes with high differential expression would confer high fitness on the cell.
Testing the DE / high fitness relationship
As it turns out, little is actually known about this, and in this paper, van Opijnen et al. set out to test whether high-DE genes are in fact also high-fitness.
The researchers compared differential expression in response to a reduced-nutrient environment (a type of minimal medium) and an antibiotic stress against gene fitness. They found no correlation:
You can see from the figure that high-fitness genes (those on the far left of the x-axis) are not correlated with high-DE genes. There are no genes in the upper-left quadrant of either plot, showing that there is no correlation between fitness and high DE in response to either nutrient or antibiotic stress.
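The quadrant check above is easy to express in code. This is an illustrative sketch with toy (fitness, |log2 fold change|) values and arbitrary cutoffs, not the paper's data: it simply asks which genes land in the "high fitness AND high DE" quadrant.

```python
# Sketch of the quadrant check: which genes are both high-fitness and high-DE?
# Toy values and cutoffs for illustration only.
genes = {
    "g1": (-0.40, 0.20),  # (fitness effect, |log2 fold change|)
    "g2": (-0.35, 0.10),
    "g3": (0.00, 3.10),
    "g4": (-0.02, 2.80),
    "g5": (0.01, 0.05),
}

FITNESS_CUTOFF = -0.2  # more negative = larger fitness defect under stress
DE_CUTOFF = 1.0        # |log2FC| above this counts as high DE

high_fitness = {g for g, (w, fc) in genes.items() if w < FITNESS_CUTOFF}
high_de = {g for g, (w, fc) in genes.items() if fc > DE_CUTOFF}
overlap = high_fitness & high_de  # the upper-left quadrant of the paper's plot
```

In this toy set the overlap is empty, mirroring the empty quadrant the authors observed.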
Gene networks co-localize high DE and high fitness genes
Even though the authors found no correlation between DE and fitness changes for individual genes, they took the next step and constructed a metabolic gene network for S. pneumoniae. Mapping the DE and fitness changes onto this network revealed a key finding: the high-DE genes co-localize in pathways with the high-fitness genes. That is, a biochemical pathway might have some members that are high DE, and others that are high fitness. An example of this is the shikimate pathway shown below:
The first half of the pathway consists of six genes with significant fitness changes (red boxes) in a row. The next seven genes, from the Trp branchpoint (blue dashed line), are not high-fitness, but do show high DE, with four reaching statistical significance. It is not really understood why this happens, but the authors theorize that having the bottom half of the pathway under transcriptional control allows the bacterium to control flux into Trp synthesis and other amino acid sub-pathways while always maintaining a stable supply of the starting intermediates (the product of SP1374) through reversible, end-product-regulated biosynthesis.
Transcriptomic data should not be used as a surrogate for functional importance
The authors point out that the reliance on transcriptional abundance changes as markers for functional importance in bacteria, particularly in drug discovery efforts, may be misguided and needs to be revisited in light of this and other studies. They also point out that the response to an “orderly” stressor (like nutrient depletion), for which the bacterium has evolved, is likely to be much more clearly defined on a network basis, while the response to a disorderly stressor (a novel antibiotic, for example) may provoke a disorderly transcriptional and fitness response that can’t easily be interpreted from network analysis. This has important implications for the design of next-generation antibiotics.
Integrative genomics sheds new light on metastatic cancer
A new study from the University of Michigan Comprehensive Cancer Center has just been released that represents an in-depth look at the genomics of metastatic cancer, as opposed to primary tumors. This work involved DNA- and RNA-Seq of solid metastatic tumors of 500 adult patients, as well as matched normal tissue sequencing for detection of somatic vs. germline variants.
A good overview of the study at the level of scientific layperson can be found in this press release. It summarizes the key findings (many of which are striking and novel):
A significant increase in mutational burden of metastatic tumors vs. primary tumors.
A long-tailed distribution of mutational frequencies (i.e., a few genes were mutated at a high rate, while many more genes were mutated at low rates).
About twelve percent of patients harbored germline variants that are suspected to predispose to cancer and metastasis, and 75% of those variants were in DNA repair pathways.
Across the cohort, 37% of patient tumors harbored gene fusions that either drove metastasis or suppressed the cell’s anti-tumor functions.
RNA-Seq showed that metastatic tumors are significantly de-differentiated, and fall into two classes: proliferative and EMT-like (epithelial-to-mesenchymal transition).
A brief look at the data
This study provides a high-level view of the mutational burden of metastatic cancer vis-à-vis primary tumors. Figure 1C from the paper shows the comparison of mutation rates in different tumor types in the TCGA (The Cancer Genome Atlas) primary tumors and the MET500 (metastatic cohort).
Here we can see that in most cases (colored bars), metastatic cancers had statistically significant increases in mutational rates. Notably, tumor types with low baseline mutation rates “sped up” much more than primary tumor types that already had high rates.
Supplemental Figure 1d (below) shows how often key tumor suppressor and oncogenes are altered in metastatic cancer vs. primary tumors. TP53 is found to be altered more frequently in metastatic thyroid, colon, lung, prostate, breast, and bladder cancers. PTEN is mutated more in prostate tumors. GNAS and PIK3CA are mutated more in thymoma, although this finding doesn’t reach significance in this case. KRAS is altered more in colon and esophagus cancers, but again, these findings don’t reach significance after multiple correction.
One other figure I’d like to highlight briefly is Figure 3C from the paper, shown below:
I wanted to mention this figure to illustrate the terrifying complexity of cancer. Knowing which oncogenes are mutated, in which positions, and the effects of those mutations on gene expression networks is not enough to understand tumor evolution and metastasis. There are also new genes being created that do totally new things, and these are unique on a per-tumor basis. None of the above structures have ever been observed before, and yet they were all seen from a survey of just 500 cancers. In fact, ~40% of the tumors in the study cohort harbored at least one fusion suspected to be pathogenic.
There is much more to this work, but I will leave it to interested readers to go read the entire study. I think this work is obviously tremendously important and novel, and represents the future of personalized medicine. That is, a patient undergoing treatment for cancer will have their tumor or tumors biopsied and sequenced cumulatively over time to understand how the disease has evolved and is evolving, and to ascertain what weaknesses can be exploited for successful treatment.
Kallisto and sleuth are recently developed tools for the quantitation and statistical analysis of RNA-Seq data. The tools are fast and accurate, relying on pseudoalignment concepts rather than traditional alignment. They seem to be gaining popularity owing to ease of use and speed that makes them accessible to users on a laptop.
One thing that has been lacking is proper documentation for these tools. This appears to be changing, with more tutorials and walkthroughs becoming available over the past few months.
I wanted to aggregate some of those here for my own reference and also to help others who may be looking for guidance.
Multiple hypothesis testing is a critical part of modern bioinformatic analysis. When testing for significant changes between conditions on many thousands of genes, for instance in an RNA-Seq experiment, the goal is to maximize the number of discoveries while controlling the number of false discoveries.
Typically, this is done by using the Benjamini-Hochberg (BH) procedure, which adjusts p-values so that no more than a set fraction (usually 5%) of discoveries are false positives (FDR = 0.05). The BH method is better powered and less stringent than family-wise error rate (FWER) control, and is therefore more appropriate for modern genomics experiments that make thousands of simultaneous comparisons. However, the BH method is still limited by the fact that it uses only p-values to control the FDR, treating every test as equally powered.
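For reference, the BH adjustment itself is only a few lines. This sketch computes BH-adjusted p-values (equivalent in spirit to R's p.adjust(method="BH")); the example p-values are made up for illustration.

```python
# Benjamini-Hochberg adjusted p-values.
# adj[i] = min over ranks >= rank(i) of (p * m / rank), capped at 1.
def bh_adjust(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adj = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adj[i] = running_min
    return adj

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
adjusted = bh_adjust(pvals)
discoveries = [p for p in adjusted if p <= 0.05]  # tests surviving FDR = 0.05
```

Note how the adjustment is rank-dependent: two raw p-values that straddle a rank boundary can end up with the same adjusted value because of the monotonicity step.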
A new method, Independent Hypothesis Weighting (IHW), aims to take advantage of the fact that individual tests may differ in their statistical properties, such as sample size, true effect size, signal-to-noise ratio, or prior probability of being false. For example, in an RNA-Seq experiment, highly-expressed genes may have better signal-to-noise than low-expressed genes.
The IHW method applies a weight (a non-negative number, with the weights constrained to average to one across tests) to each test in a data-driven way. The input to the method is a vector of p-values (just like BH/FDR) and a vector of continuous or categorical covariates (i.e., any data about each test that is assumed to be independent of the test p-value under the null hypothesis).
From the paper linked above, Table 1 lists possible covariates by application:

Differential expression analysis: sum of read counts per gene across all samples

Genome-wide association study (GWAS): minor allele frequency; distance between the genetic variant and the genomic location of the phenotype; comembership in a topologically associated domain
In simplified form, the IHW method takes the tests and groups them based on the supplied covariate. It then calculates the number of discoveries (rejections of the null hypothesis) using a set of weights. The weights are iterated until the method converges on the optimal weights for each covariate-based group that maximize the overall discoveries. Additional procedures are employed to prevent over-fitting of the data and to make the procedure scale easily to millions of comparisons.
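The mechanics of covariate weighting can be illustrated with a toy weighted-BH procedure. To be clear, this is not the IHW algorithm: real IHW learns the weights from the data with safeguards against overfitting, whereas here the weights are fixed by hand purely to show why up-weighting a promising covariate group can yield extra discoveries. The p-values and weights are invented for illustration.

```python
# Toy weighted BH: run the BH step-up procedure on weight-adjusted
# p-values p_i / w_i, where the weights are non-negative and average to 1.
# With all weights equal to 1 this reduces to ordinary BH.
def weighted_bh(pvals, weights, alpha=0.05):
    m = len(pvals)
    assert abs(sum(weights) / m - 1.0) < 1e-9  # weight budget: mean weight = 1
    adj = [p / w if w > 0 else float("inf") for p, w in zip(pvals, weights)]
    order = sorted(range(m), key=lambda i: adj[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if adj[i] <= rank * alpha / m:  # BH step-up condition
            k = rank
    return {order[j] for j in range(k)}  # indices of rejected hypotheses

pvals = [0.004, 0.009, 0.026, 0.3, 0.5, 0.9]  # first three: "promising" covariate group
weights = [1.5, 1.5, 1.5, 0.5, 0.5, 0.5]      # hand-picked; real IHW learns these
rejected = weighted_bh(pvals, weights)
plain = weighted_bh(pvals, [1.0] * 6)         # ordinary BH for comparison
```

Here the borderline test at p = 0.026 is rejected only under weighting: the weight budget shifted from the hopeless group to the promising one, which is exactly the power gain IHW formalizes.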
The authors of the method claim that IHW is better powered than BH for making empirical discoveries when working with genomic data. It can be accessed from within Bioconductor.
According to a new paper, basically, no. Actually that is an oversimplification, but the authors find that quality trimming of RNA-Seq reads results in skewed gene expression estimates for up to 10% of genes. Furthermore, the authors claim that:
“Finally, an analysis of paired RNA-seq/microarray data sets suggests that no or modest trimming results in the most biologically accurate gene expression estimates.”
First, the authors show how aggressive trimming affects mappability in Figure 2:
You can see that as the threshold becomes more severe (approaching Q40), the number of RNA-Seq reads remaining drops off considerably, while the overall % mappability increases. You’d think this would be a good thing, but it leads to problems, as shown in Figure 4 of the paper:
Here you can see in (a) how increasingly aggressive trimming thresholds lead to increased differential expression estimates between untrimmed and trimmed data (red dots). Panels (b) and (c) also show that the number of biased isoforms and genes, respectively, increases dramatically as one approaches the Q40 threshold.
One way to correct this bias is to introduce length filtering on the quality-trimmed RNA-Seq reads. In Figure 5, the authors show that this can recover much of the bias in gene expression estimates:
Now in (b-d) it is clear that as the length filter increases to 36, the number of biased expression estimates drops rapidly. There seems to be a sweet spot around L20, where you get the maximum decrease in bias while keeping as many reads as possible.
Taken together, the results suggest that aggressive trimming can strongly bias gene expression estimates through the incorrect alignment of the short reads left over after quality trimming. A secondary length-filtering step can mitigate some of the damage. In the end, whether to trim depends on your project type and goals. If you have tons of reads, modest trimming and length filtering may not be too destructive. Conversely, if your data are initially of low quality, trimming may be necessary to salvage usable reads. Either way, you should be restrained in your trimming and, if possible, look at the resulting read-length distributions before settling on quality thresholds for your project.
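The trim-then-filter logic is simple enough to sketch. This is a hypothetical toy, not a replacement for a real trimmer like Trimmomatic or cutadapt: it trims low-quality bases from the 3' end of each (sequence, Phred-quality) pair and then discards reads that end up shorter than the length cutoff.

```python
# Hypothetical sketch of quality trimming followed by a length filter.
# Reads are (sequence, phred_quality_list) tuples; values are illustrative.
def quality_trim(seq, quals, q_threshold):
    """Trim bases off the 3' end while they fall below the quality threshold."""
    end = len(seq)
    while end > 0 and quals[end - 1] < q_threshold:
        end -= 1
    return seq[:end], quals[:end]

def trim_and_filter(reads, q_threshold=20, min_length=20):
    kept = []
    for seq, quals in reads:
        t_seq, t_quals = quality_trim(seq, quals, q_threshold)
        if len(t_seq) >= min_length:  # length filter to avoid short-read mismapping
            kept.append((t_seq, t_quals))
    return kept

reads = [
    ("ACGTACGTACGTACGTACGTACGT", [38] * 24),                 # clean read, kept whole
    ("ACGTACGTACGTACGTACGTACGT", [38] * 20 + [5] * 4),       # 3' tail trimmed, kept
    ("ACGTACGTACGTACGTACGT", [38] * 8 + [5] * 12),           # trimmed below 20 bp, dropped
]
kept = trim_and_filter(reads)
```

Raising min_length toward the paper's L36 setting would drop more reads but remove more of the short-read bias; the Q20/L20 defaults here just echo the "sweet spot" discussion above.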