
Helpful kallisto & sleuth RNA-Seq tutorials and blogs

Kallisto and sleuth are recently developed tools for the quantification and statistical analysis of RNA-Seq data.  Both are fast and accurate, relying on pseudoalignment rather than traditional alignment, and they appear to be gaining popularity owing to an ease of use and speed that make them practical to run on a laptop.

One thing that has been lacking is proper documentation for these tools.  That appears to be changing, as more tutorials and walkthroughs have become available over the past few months.

I wanted to aggregate some of those here for my own reference and also to help others who may be looking for guidance.

kallisto (rapid RNA-Seq read quantification)

kallisto github documentation

kallisto walkthroughs:

kallisto getting started tutorial


kallisto paper

kallisto pachter lab introductory blog post

kallisto videos:

kallisto introduction video tutorial

sleuth (statistical modeling and analysis)

sleuth documentation

sleuth pachter lab introductory blog post

sleuth walkthroughs:

getting started

batch effects

differential analysis with multiple conditions

multiple combined experiments

timecourse analysis

sleuth tutorial blog posts:

using kallisto and sleuth (ACHRI bioinformatics)

sample swaps and batch effects (ACHRI bioinformatics)

advanced RNA-Seq modeling of hybrid qualitative/quantitative factors (ACHRI bioinformatics)

sleuth videos:

intro to sleuth live Shiny R app

Beyond Benjamini-Hochberg: Independent Hypothesis Weighting (IHW) for multiple test correction

Multiple hypothesis testing is a critical part of modern bioinformatics analysis.  When testing many thousands of genes for significant changes between conditions, as in an RNA-Seq experiment, the goal is to maximize the number of discoveries while controlling the number of false discoveries.

Typically, this is done with the Benjamini-Hochberg (BH) procedure, which adjusts p-values so that no more than a set fraction (usually 5%) of discoveries are expected to be false positives (FDR = 0.05). The BH method is better powered, and less stringent, than family-wise error rate (FWER) control, which makes it more appropriate for modern genomics experiments involving thousands of simultaneous comparisons.  However, the BH method is still limited by the fact that it uses only the p-values to control the FDR, treating every test as equally powered.
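For reference, the BH procedure itself is a one-liner in R via the built-in p.adjust function.  Here is a minimal sketch on simulated p-values (the simulation parameters are invented purely for illustration):

# Simulate p-values: 9,000 true nulls (uniform) plus 1,000 tests with signal
# (the split and the beta parameters are arbitrary choices for this demo)
set.seed(1)
pvals <- c(runif(9000), rbeta(1000, 0.5, 10))

# Benjamini-Hochberg adjustment; reject at FDR = 0.05
padj <- p.adjust(pvals, method = "BH")
sum(padj < 0.05)  # number of discoveries at the 5% FDR threshold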

A new method, Independent Hypothesis Weighting (IHW), aims to take advantage of the fact that individual tests may differ in their statistical properties, such as sample size, true effect size, signal-to-noise ratio, or prior probability of being false.  For example, in an RNA-Seq experiment, highly expressed genes may have a better signal-to-noise ratio than genes expressed at low levels.

The IHW method applies a non-negative weight to each test in a data-driven way.  The input to the method is a vector of p-values (just as for BH/FDR) plus a vector of continuous or categorical covariates, i.e., any per-test information that can be assumed to be independent of the test’s p-value under the null hypothesis.

From the paper linked above, Table 1 lists possible covariates:

Application | Covariate

Differential expression analysis | Sum of read counts per gene across all samples [12]
Genome-wide association study (GWAS) | Minor allele frequency
Expression-QTL analysis | Distance between the genetic variant and the genomic location of the phenotype
ChIP-QTL analysis | Co-membership in a topologically associating domain [16]
t-test | Overall variance [9]
Two-sided tests | Sign of the effect
Various applications | Signal quality, sample size

In simplified form, the IHW method groups the tests based on the supplied covariate.  It then calculates the number of discoveries (rejections of the null hypothesis) under a given set of weights, and iterates until it converges on the weights for each covariate-based group that maximize the overall number of discoveries.  Additional procedures are employed to prevent overfitting and to make the method scale easily to millions of comparisons.

The authors claim that IHW is better powered than BH for making discoveries in genomic data.  It is available as the IHW package in Bioconductor.
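As a rough sketch of how this looks in practice (the data frame res and its columns pvalue and baseMean are placeholders for a per-gene differential expression results table; they are my assumptions, not part of any published workflow):

library(IHW)

# 'res' is assumed to hold one row per gene, with a raw p-value column and
# a mean-count column to serve as the independent covariate (names invented)
ihw_res <- ihw(pvalue ~ baseMean, data = res, alpha = 0.05)

rejections(ihw_res)         # number of discoveries at FDR = 0.05
head(adj_pvalues(ihw_res))  # IHW-adjusted p-values
plot(ihw_res)               # inspect the learned weights per covariate group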


Should you trim your RNA-Seq reads?

According to a new paper, basically, no.   That is an oversimplification, of course, but the authors find that quality trimming of RNA-Seq reads skews gene expression estimates for up to 10% of genes.   Furthermore, the authors claim that:

“Finally, an analysis of paired RNA-seq/microarray data sets suggests that no or modest trimming results in the most biologically accurate gene expression estimates.”

First, the authors show how aggressive trimming affects mappability in Figure 2:

Figure 2. Influence of quality-based trimming on the mappability of reads.

You can see that as the quality threshold becomes more severe (approaching Q40), the number of RNA-Seq reads remaining drops off considerably while the percent mappability increases.  You’d think this would be a good thing, but it leads to problems, as shown in Figure 4 of the paper:

Figure 4. Isoform and gene expression levels after trimming.

Here you can see in (a) how increasingly aggressive trimming thresholds lead to larger differences in expression estimates between untrimmed and trimmed data (red dots).  Panels (b) and (c) show that the number of biased isoforms and genes, respectively, increases dramatically as one approaches the Q40 threshold.

One way to correct this bias is to apply a length filter to the quality-trimmed RNA-Seq reads.  In Figure 5, the authors show that this removes much of the bias in the gene expression estimates:

Figure 5. Isoform and gene expression levels after length filtering.

Now in (b-d) it is clear that as the length filter increases toward L36, the number of biased expression estimates drops rapidly.   There seems to be a sweet spot around L20, where you get most of the decrease in bias while keeping as many reads as possible.

Taken together, the results suggest that aggressive trimming can strongly bias gene expression estimates through the incorrect alignment of the very short reads left behind by quality trimming, and that a secondary length-filter step can mitigate much of the damage.   In the end, whether to trim depends on your project type and goals.  If you have plenty of reads, modest trimming plus a length filter should not be too destructive; if your data are of low initial quality, trimming may be necessary to salvage usable reads.  Either way, be restrained in your trimming, and look at the resulting read-length distributions before settling on quality thresholds for your project.
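If you do trim, the length filter is easy to bolt on.  Here is a minimal sketch using Bioconductor’s ShortRead package; the file names and the Q20/36-bp thresholds are placeholder choices of mine, not recommendations from the paper:

library(ShortRead)

# Read a FASTQ file (file name is a placeholder)
fq <- readFastq("sample.fastq.gz")

# Trim 3' tails once 2 successive bases fall at or below Q20
# ("5" encodes Q20 in Phred+33)
fq_trimmed <- trimTails(fq, k = 2, a = "5", successive = TRUE)

# Length filter: drop reads shorter than 36 bp after trimming
fq_filtered <- fq_trimmed[width(fq_trimmed) >= 36]

writeFastq(fq_filtered, "sample.trimmed.fastq.gz")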

Beware of biological variability in your *-Seq experiments

From this excellent paper on biological variability in RNA-Seq experiments:

“Biological variability has important implications for the design, analysis and interpretation of RNA-sequencing experiments. […] If only a few biological replicates are available, it will be impossible to estimate the level of biological variability in expression for each gene in a study. Supplementary Table 1 summarizes a large number of published RNA-sequencing studies over the past three years. In every case, except for the two studies we analyzed here, conclusions were based on a small number (n ≤ 2) of biological replicates. One goal of RNA-sequencing studies may be simply to identify and catalog expression of new or alternative transcripts. However, all of these studies make broader biological statements on the basis of a very small set of biological replicates.

Our analysis has two important implications for studies performed with a small number of biological replicates. First, significant results in these studies may be due to biological variation and may not be reproducible; and second, it is impossible to know whether expression patterns are specific to the individuals in the study or are a characteristic of the study populations. These ideas are now widely accepted for DNA microarray experiments, where a large number of biological replicates are now required to justify scientific conclusions. Our analysis suggests that as biological variability is a fundamental characteristic of gene expression, sequencing experiments should be subject to similar requirements.”

If you are doing RNA-Seq, be vigilant in your experimental design and find a way to incorporate more replicates, even at the expense of testing fewer comparisons.   It’s better to test one comparison (tissue X vs. Y, for example) with 5 or more replicates than to test three comparisons (X vs. Y, Y vs. Z, and X vs. Z) with only 2 replicates per tissue type.
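A toy simulation in base R makes the point (all parameters invented): when biological variability is present, fold-change estimates from n = 2 replicates are far noisier than from n = 5, even when there is no true difference between conditions:

# One gene, no true difference between conditions, log-normal
# biological variability across replicates (parameters are arbitrary)
set.seed(42)
fc_estimate <- function(n) {
  x <- rlnorm(n, meanlog = 5, sdlog = 0.5)  # condition A replicates
  y <- rlnorm(n, meanlog = 5, sdlog = 0.5)  # condition B replicates
  log2(mean(y) / mean(x))                   # estimated log2 fold change
}
sd(replicate(10000, fc_estimate(2)))  # spread of spurious fold changes, n = 2
sd(replicate(10000, fc_estimate(5)))  # noticeably tighter with n = 5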


Create a volcano plot on EBSeq output

The differential expression analysis program EBSeq produces a number of data objects as part of the workflow, but there aren’t many options for visualization of the data.

The authors suggest the use of heatmap.2, from the gplots R package:

library(gplots)
heatmap.2(NormalizedMatrix[GenesOfInterest, ], scale = "row", trace = "none", Colv = FALSE)

However, this depends on knowing your genes of interest ahead of time, and it is not practical to generate a heatmap with hundreds or thousands of DE genes.

I wanted to produce something approximating a volcano plot for EBSeq results.  What I came up with initially was the following:

Fig 1. A pseudo-volcano plot for EBSeq results. The y-axis is the posterior probability of differential expression (PPDE).

To make this plot, I had to grab some data vectors from the large “EBOut” object produced by calling the “EBTest” method, and from the “GeneFC” object, as below:

plot(GeneFC$PostFC, EBOut$PPDE,
     xlim = c(0, 5), ylim = c(0, 1),
     main = "Control/Experimental FC vs. PPDE",
     sub = GeneFC$Direction,
     xlab = "EBSeq Posterior Fold Change", ylab = "EBSeq posterior prob of DE")

abline(h=0.95)

The “abline” command places a horizontal line at PPDE = 0.95; genes above it have a posterior probability of DE of 95% or greater, which is equivalent to an FDR of 0.05.
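To count or pull out the genes above that line, you can threshold the PPDE vector directly.  EBSeq also provides crit_fun, which, if I recall the API correctly (an assumption worth checking against the vignette), computes a softer data-driven cutoff that controls the posterior expected FDR:

# Genes called DE at the hard PPDE >= 0.95 cutoff
sum(EBOut$PPDE >= 0.95)

# Assumption: crit_fun takes the PPEE values (1 - PPDE) and a target FDR,
# and returns a PPDE cutoff controlling the average (soft-threshold) FDR
soft_thre <- crit_fun(1 - EBOut$PPDE, 0.05)
sum(EBOut$PPDE >= soft_thre)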

If you want to inspect the plot interactively in R, to identify gene names above the threshold and/or with large posterior fold changes, you would use:

identify(GeneFC$PostFC, EBOut$PPDE, labels=names(GeneFC$PostFC))

To make it look more like a canonical volcano plot, I then tried:

plot(log2(GeneFC$RealFC), EBOut$PPDE,
     xlim = c(-5, 5), ylim = c(0, 1),
     main = "Log2FoldChange vs. PPDE",
     xlab = "EBSeq Log2 Fold Change", ylab = "EBSeq PPDE")

This creates the following plot:

Fig 2. Log2 fold change on the x-axis.

This is good, except I want to subset the data and add colors. To do this I need to create a new data frame from the EBOut$PPDE and GeneFC$PostFC vectors, naming the columns explicitly so they can be referenced later:

volc_df <- data.frame(PPDE = EBOut$PPDE, PostFC = GeneFC$PostFC)

With everything in one dataframe, plotting and subsetting the data is easier. Inspired by this post at Stephen Turner’s “Getting Genetics Done” blog, I prepared my final colored volcano plot as follows:

with(volc_df, plot(log2(PostFC), PPDE, pch = 20, main = "Volcano Plot EBSeq", xlim = c(-5, 5)))

abline(h = 0.95)

# DE genes (PPDE > 0.95) with modest fold changes, in orange
with(subset(volc_df, PPDE > 0.95 & abs(log2(PostFC)) < 2), points(log2(PostFC), PPDE, pch = 20, col = "orange"))

# DE genes with large fold changes (|log2 FC| > 2), in red
with(subset(volc_df, PPDE > 0.95 & abs(log2(PostFC)) > 2), points(log2(PostFC), PPDE, pch = 20, col = "red"))

The final plot looks like this:

Fig 3. The final plot, with colored points for the DE genes; higher fold changes are indicated in red.