New job: Director of IIHG Bioinformatics

I’m thrilled to report that I’ve been promoted to Director of our bioinformatics group here at the University of Iowa. The group sits within the Iowa Institute of Human Genetics (IIHG), and we support clinical activities in the institute as well as a wide array of research collaborations across the University.

I have a lot of goals and ideas for the group and look forward to implementing them going forward. I may not be able to write posts here as often, but I’ll try to keep up with it. We also have a new Twitter account, @iowabioinfo; please follow us there.

Conference report: GLBIO2019

I just returned from another great experience at the 2019 Great Lakes Bioinformatics Conference (#GLBIO2019), a regional meeting of the International Society for Computational Biology (ISCB). Below I briefly summarize a few of the talks I personally found most interesting (there were several parallel tracks, so I did not attend every talk).

Docker workshop taught by Sara Stevens

On Sunday of the conference, I attended a 3-hour workshop introducing Docker technology, held in the beautiful and very modern Wisconsin Institutes for Discovery building. The course was taught by Sara Stevens, an expert in data science and bioinformatics with the Data Science Hub at UW-Madison.

We worked through an initial “hello world” application of Docker on our laptops, writing a Dockerfile that became an image and finally a container instance of that image:

mchiment@MNE762:~/Desktop/docker-playground/my-greeting$ cat Dockerfile
#specify the base image
FROM alpine
#specify what to build
RUN /bin/echo "greeting!" > /root/my_message
#give default command
CMD ["/bin/cat", "/root/my_message"]

Then we progressed to more complex Dockerfile builds, including one that installed a minimal Python distribution and ran a program. This included installing some libraries with pip inside the image and running a script.
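
I didn’t keep the exact Dockerfile from that exercise, but a minimal sketch along the same lines (the base image, packages, and script name here are my own stand-ins) might look like:

#start from a slim official Python base image
FROM python:3-slim
#install the libraries the script needs with pip
RUN pip install --no-cache-dir numpy pandas
#copy the analysis script into the image
COPY analyze.py /root/analyze.py
#run the script by default
CMD ["python", "/root/analyze.py"]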

Overall, I learned a lot and got a good grasp of the Docker basics to build upon for future work.

Integrative analysis for fine mapping of genetic variants, Sunduz Keles

This talk addressed the issue of how to make sense of GWAS data: if you have a collection of SNPs, how do you follow up on which genes to study, which mechanisms to propose, and so on? It introduced a tool, atSNP Search, which uses transcription factor position-weight matrices (PWMs) to assess the impact of a SNP on TF DNA-binding activity in the local sequence around the SNP.

From the website:

atSNP identifies and quantifies best DNA sequence matches to the transcription factor PWMs with both the reference and the SNP alleles in a small window around the SNP location (up to +/- 30 base pairs and considering subsequences spanning the SNP position). It evaluates statistical significance of the match scores with each allele and calculates statistical significance of the score difference between the best matches with the reference and SNP alleles.
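
To make that concrete, here is a toy R sketch of the underlying computation (my own illustration, not atSNP’s code): score every subsequence spanning the SNP against a PWM for the reference and the alternate allele, and compare the best matches.

# toy PWM (log2 odds over a uniform background; rows = A,C,G,T; columns = motif positions)
pwm <- matrix(log2(c(0.7, 0.1, 0.1, 0.1,
                     0.1, 0.7, 0.1, 0.1,
                     0.1, 0.1, 0.7, 0.1) / 0.25),
              nrow = 4, dimnames = list(c("A", "C", "G", "T"), NULL))

# log-odds score of one k-mer against the PWM
score_seq <- function(seq, pwm) {
  bases <- strsplit(seq, "")[[1]]
  sum(sapply(seq_along(bases), function(i) pwm[bases[i], i]))
}

# best match score over all k-mers that span the SNP position
best_score <- function(window, pwm, snp_pos) {
  k <- ncol(pwm)
  starts <- max(1, snp_pos - k + 1):min(snp_pos, nchar(window) - k + 1)
  max(sapply(starts, function(s) score_seq(substr(window, s, s + k - 1), pwm)))
}

ref_window <- "TTACGTT"  # reference allele (C) at position 4
alt_window <- "TTAAGTT"  # SNP allele (A) at position 4
best_score(ref_window, pwm, 4) - best_score(alt_window, pwm, 4)  # score difference

atSNP goes further and attaches statistical significance to the match scores and to the score difference; the sketch above only shows the scoring step.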

The talk also introduced a method, “FM-HighLD”, which asks whether you can substitute functional annotations of SNPs for massively parallel reporter assays (MPRAs), which are considered the “gold standard” for assessing SNP/eQTL function. The idea is to use MPRA results and their correlation with functional annotations to calibrate the model, and then apply that model to eQTLs or GWAS SNPs that have no MPRA results but do have functional annotations in public databases.
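
My loose reading of that idea, sketched in R with a plain logistic regression standing in for whatever model FM-HighLD actually fits (the data and annotation names here are entirely made up):

# simulate MPRA-tested SNPs with a few public-database-style annotations
set.seed(1)
train <- data.frame(cons_score = runif(500),
                    dnase      = rbinom(500, 1, 0.3),
                    tf_overlap = rbinom(500, 1, 0.2))
train$mpra_active <- rbinom(500, 1, plogis(-2 + 2 * train$cons_score + train$dnase))

# calibrate: how well do the annotations predict MPRA activity?
calib <- glm(mpra_active ~ cons_score + dnase + tf_overlap,
             data = train, family = binomial)

# apply the calibrated model to SNPs that have annotations but no MPRA results
untested <- data.frame(cons_score = runif(10),
                       dnase      = rbinom(10, 1, 0.3),
                       tf_overlap = rbinom(10, 1, 0.2))
untested$pred_functional <- predict(calib, newdata = untested, type = "response")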

refine.bio

There is over $4 billion worth of publicly funded RNA-seq and microarray data in the public repositories. Studies have shown that analysts can spend up to 30% of a project’s time just searching for, accessing, downloading, and preprocessing these data.

Refine.bio is an attempt to “harmonize” thousands of gene expression datasets by downloading and preprocessing all of them with a common pipeline and a common reference. This is only feasible owing to the innovation of pseudo-alignment in methods like kallisto and salmon.

Behind the scenes, refine.bio runs on Amazon Web Services, which gives the project effectively unlimited compute and storage to scale with its needs. In addition to standardized gene expression processing, sample metadata are also harmonized: keywords are mapped to standard ontologies for ease of comparison.

Monitoring crude oil spills with 16S and machine-learning, Stephen Techtmann

In this work, Dr. Techtmann’s group was interested in how freshwater microbiomes drawn from Lake Superior respond to the introduction of different types of oil (a complex chemical substance that acts as a carbon food source). The team drew lake water samples, incubated them with different oils (heavy crude, refined crude, etc.), and then assessed taxonomic abundance using 16S rRNA amplicon sequencing.

The taxa abundances were used to train a Random Forest model to predict oil contamination status. The RF approach produced a model with extremely high accuracy (AUC > 0.9), and they found that two taxa predominantly distinguish the oil-exposed samples from the plain lake water samples.
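
That kind of workflow is easy to sketch in R with the randomForest package (the abundance table below is simulated for illustration, not their data):

library(randomForest)

# simulated 16S taxa abundance table: 40 samples x 50 taxa
set.seed(42)
abund <- matrix(rpois(40 * 50, lambda = 20), nrow = 40,
                dimnames = list(NULL, paste0("taxon_", 1:50)))
oil <- factor(rep(c("oil", "water"), each = 20))
abund[oil == "oil", 1:2] <- abund[oil == "oil", 1:2] + 50  # two taxa respond to oil

# train a Random Forest to predict contamination status from taxa abundances
rf <- randomForest(x = abund, y = oil, importance = TRUE)
print(rf)        # out-of-bag error estimate and confusion matrix
varImpPlot(rf)   # the two spiked taxa should dominate the importance ranking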

Bcbio RNA-seq ‘under the hood’

Bcbio is a configuration-based pipeline manager for common NGS workflows. It uses a YAML config file to set all of the inputs and specifications for the pipeline. I’ve used bcbio for dozens of RNA-seq projects, but I’ve never known exactly what it does during the pipeline itself, because to see the exact commands being run you have to either dig into the code or dig through the log files.

Digging through the code is difficult because the code base is large and many different pieces of code call each other. Digging through the logs is difficult when there are dozens of samples: each command is repeated dozens of times, leading to log files with thousands of lines. Well, I finally gave in and sorted through the RNA-seq pipeline command logs to identify the unique steps that bcbio (version 1.0.8) performs to produce its results. I was able to identify 21 unique steps that are run on each sample.

The difficulty of figuring out exactly what a configuration-based pipeline like bcbio is going to do is one argument in favor of using software like Snakemake or Nextflow to create or adapt existing pipelines, where the actual steps in the pipeline are made very explicit (e.g., in Nextflow’s “process” blocks or Snakemake’s rules). I’m going to be writing more about Nextflow in upcoming posts.
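
As a taste of that explicitness, here is a rough sketch of bcbio’s salmon quantification step (step 10 below) written as a Nextflow process block (my own approximation, not bcbio’s code):

process salmon_quant {
    input:
    tuple val(sample), path(reads)
    path index

    output:
    path "${sample}_quant"

    script:
    """
    salmon quant -l IU -i ${index} -p 16 --gcBias \\
        -1 ${reads[0]} -2 ${reads[1]} \\
        --numBootstraps 30 -o ${sample}_quant
    """
}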

Of these 21 steps, 17 deal with creating a BAM file and then manipulating it or calculating something about it. The remainder mostly deal with pseudo-alignment using salmon. It’s somewhat ironic that most of the pipeline, and of the computational time, is taken up with creating and manipulating BAM files, since I only ever use the salmon pseudo-alignments in my downstream analysis.

Here are the 21 steps of the bcbio RNA-seq workflow (I’ve deleted the long, user-specific file paths to show just the commands):

Step 1. Align with Hisat2

hisat2 --new-summary -x bcbio-1.0.8/genomes/Hsapiens/hg38/hisat2/hg38 -p 16 --phred33 --rg-id SW872_CAMTA1_rep1 --rg PL:illumina --rg PU:1_2019-03-11_to_setup_bcbio --rg SM:SW872_CAMTA1_rep1 -1 SW872_CAMTA1_rep1_R1.fastq.gz -2 SW872_CAMTA1_rep1_R2.fastq.gz --known-splicesite-infile bcbio-1.0.8/genomes/Hsapiens/hg38/rnaseq/ref-transcripts-splicesites.txt

Step 2/3. Pipe to bamsormadup and redirect to sorted BAM

| bamsormadup inputformat=sam threads=12 tmpfile=work/bcbiotx/tmplsr55j/SW872_CAMTA1_rep1-sort-sorttmp-markdup SO=coordinate indexfilename=work/bcbiotx/tmplsr55j/SW872_CAMTA1_rep1-sort.bam.bai > work/bcbiotx/tmplsr55j/SW872_CAMTA1_rep1-sort.bam

Step 4. Index BAM

samtools index -@ 16 work/align/SW872_TAZ4SA_rep3/SW872_TAZ4SA_rep3-sort.bam /work/bcbiotx/tmpsqOnZQ/SW872_TAZ4SA_rep3-sort.bam.bai

Step 5. Samtools sort by read names

samtools sort -@ 16 -m 2457M -O BAM -n -T work/bcbiotx/tmpqFmCaf/SW872_CAMTA1_rep1-sort.nsorted-sort -o /work/bcbiotx/tmpqFmCaf/SW872_CAMTA1_rep1-sort.nsorted.bam /work/align/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort.bam

Step 6. Sambamba view to select only primary alignments

sambamba view -t 16 -f bam -F "not secondary_alignment" work/align/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort.nsorted.bam > work/bcbiotx/tmp0zhZuj/SW872_CAMTA1_rep1-sort.nsorted.primary.bam

Step 7. FeatureCounts to count primary alignments in BAM

featureCounts -a /Dedicated/IIHG-argon/bcbio-1.0.8/genomes/Hsapiens/hg38/rnaseq/ref-transcripts.gtf -o work/bcbiotx/tmp77coEk/SW872_CAMTA1_rep1.counts -s 0 -p -B -C work/align/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort.nsorted.primary.bam

Step 8. Gffread to write a fasta file with spliced exons

gffread -g /Dedicated/IIHG-argon/bcbio-1.0.8/genomes/Hsapiens/hg38/seq/hg38.fa -w work/bcbiotx/tmpNpBGRC/hg38.fa.tmp /Dedicated/IIHG-argon/bcbio-1.0.8/genomes/Hsapiens/hg38/rnaseq/ref-transcripts.gtf

Step 9. Build the salmon index

salmon index -k 31 -p 16 -i /work/bcbiotx/tmpTQDS7X/hg38 -t work/inputs/transcriptome/hg38.fa

Step 10. Pseudo-alignment and quantification

salmon quant -l IU -i work/salmon/index/hg38 -p 16 --gcBias -o work/bcbiotx/tmpE_RRDN/quant   -1 <(gzip -cd /merged/SW872_CAMTA1_rep1_R1.fastq.gz) -2 <(gzip -cd /merged/SW872_CAMTA1_rep1_R2.fastq.gz) --numBootstraps 30

Step 11. Convert salmon output to sleuth format

Rscript -e 'library("wasabi"); prepare_fish_for_sleuth(c("work/bcbiotx/tmpE_RRDN/quant"))'

Step 12. Downsample BAM file with samtools view

samtools view -O BAM -@ 16 -o work/bcbiotx/tmphaXqSf/SW872_CAMTA1_rep1-sort-downsample.bam -s 42.269 work/align/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort.bam

Step 13. FASTQC on downsampled BAM

export PATH=/Dedicated/IIHG-argon/bcbio-1.0.8/anaconda/bin:$PATH &&  /Dedicated/IIHG-argon/bcbio-1.0.8/galaxy/../anaconda/bin/fastqc -d work/qc/SW872_CAMTA1_rep1/bcbiotx/tmpgOv610 -t 16 --extract -o work/qc/SW872_CAMTA1_rep1/bcbiotx/tmpgOv610 -f bam work/qc/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort-downsample.bam

Step 14. Run Qualimap RNAseq on BAM

unset DISPLAY && export PATH=/Dedicated/IIHG-argon/bcbio-1.0.8/anaconda/bin:$PATH &&  /Dedicated/IIHG-argon/bcbio-1.0.8/galaxy/../anaconda/bin/qualimap rnaseq -outdir work/bcbiotx/tmpACJXgn/SW872_CAMTA1_rep1 -a proportional -bam work/align/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort.bam -p non-strand-specific -gtf /Dedicated/IIHG-argon/bcbio-1.0.8/genomes/Hsapiens/hg38/rnaseq/ref-transcripts.gtf --java-mem-size=59g

Step 15. A sed command that rewrites the “bam file” line of the Qualimap results file, stripping the user-specific path so only the BAM file name remains

sed -i 's/bam file = .*/bam file = SW872_CAMTA1_rep1.bam/' work/bcbiotx/tmpACJXgn/SW872_CAMTA1_rep1/rnaseq_qc_results.txt

Step 16. Mark duplicates on the BAM file

bammarkduplicates tmpfile=work/bcbiotx/tmpNdl3wy/SW872_CAMTA1_rep1-sort-dedup-markdup markthreads=16 I=work/align/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort.bam O=work/bcbiotx/tmprVQeKM/SW872_CAMTA1_rep1-sort-dedup.bam

Step 17. Index de-duplicated BAM file

samtools index -@ 16 work/align/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort-dedup.bam work/bcbiotx/tmpFAzLLT/SW872_CAMTA1_rep1-sort-dedup.bam.bai

Step 18. Use Sambamba view to create duplicate metrics

sambamba view --nthreads 16 --count -F 'duplicate and not unmapped' work/align/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort-dedup.bam >> work/bcbiotx/tmpJS4s1r/dup_metrics.txt

Step 19. Use Sambamba to create mapping metrics

sambamba view --nthreads 16 --count -F 'not unmapped' work/align/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort-dedup.bam >> work/bcbiotx/tmpJS4s1r/dup_metrics.txt

Step 20. Samtools stats on sorted BAM

samtools stats -@ 16 work/align/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort.bam > /work/bcbiotx/tmpUPSiOz/SW872_CAMTA1_rep1.txt

Step 21. Samtools idxstats on sorted BAM

samtools idxstats work/align/SW872_CAMTA1_rep1/SW872_CAMTA1_rep1-sort.bam > work/bcbiotx/tmpSKFNZQ/SW872_CAMTA1_rep1-idxstats.txt

Calculate % mitochondrial for mouse scRNA-seq

Seurat is a popular R package for working with single-cell RNA-seq data. As part of the very first steps of filtering and quality-controlling scRNA-seq data in Seurat, you calculate the % mitochondrial gene expression in each cell and filter out cells above a threshold. The tutorial provides the following code for doing this in human cells:

mito.genes = grep(pattern = "^MT-", x = rownames(x = pbmc@data), value = TRUE)
percent.mito = Matrix::colSums(pbmc@raw.data[mito.genes, ])/Matrix::colSums(pbmc@raw.data)

pbmc = AddMetaData(object = pbmc, metadata = percent.mito, col.name = "percent.mito")
VlnPlot(object = pbmc, features.plot = c("nGene", "nUMI", "percent.mito"), nCol = 3)

Creating a catalog of mitochondrial genes by searching with ‘grep’ for any gene names that start with “MT-” works just fine for the human reference transcriptome. Unfortunately, it doesn’t work for mouse (at least for mm10, which is the reference assembly I’m working with). There are two workarounds for this, in my opinion.

The easiest is to change the regular expression in the “grep” command from “^MT-” to “^mt-”: a search through the mm10 reference (version 3.0.0) in the cellranger reference files reveals that, for whatever reason, the MT genes are labeled with lowercase “mt” instead.
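
With “sample2” as the Seurat object, that workaround is just the tutorial’s grep with the new pattern:

mito.genes = grep(pattern = "^mt-", x = rownames(x = sample2@data), value = TRUE)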

A second, and perhaps more thorough, approach is to take advantage of the Broad Institute’s “Mouse MitoCarta2.0” encyclopedia of mitochondrial genes (note that you could follow this same procedure for human MT genes, too).

By creating a list of the top 100-200 genes with the strongest evidence for mitochondrial expression, it seems likely that you capture true mitochondrial gene expression more accurately. Below is some code that uses MitoCarta2.0 (downloaded as a CSV file) for this procedure. You will need to load “tidyverse” to work with tibbles:

library(tidyverse)
library(Seurat)

# read the MitoCarta2.0 table and keep the top 100 genes by MitoCarta score
mouse_mito = as_tibble(read.csv("Mouse.MitoCarta2.0_page2.csv", header = TRUE))
mouse_mito = mouse_mito %>% select(c(Symbol, MCARTA2.0_score)) %>% slice(1:100)
mito.genes = as.character(mouse_mito$Symbol)

# keep only symbols present in the Seurat object, then compute the per-cell fraction
mito.genes = mito.genes[mito.genes %in% rownames(sample2@raw.data)]
percent.mito = Matrix::colSums(sample2@raw.data[mito.genes, ]) / Matrix::colSums(sample2@raw.data)
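
From there, percent.mito goes back into the Seurat object exactly as in the human tutorial:

sample2 = AddMetaData(object = sample2, metadata = percent.mito, col.name = "percent.mito")
VlnPlot(object = sample2, features.plot = c("nGene", "nUMI", "percent.mito"), nCol = 3)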

Gene expression boxplots with ggplot2

The ubiquitous RNA-seq analysis package DESeq2 is a very useful and convenient way to conduct DE gene analyses. However, it lacks some useful plotting tools. For example, there is no convenience function in the library for making nice-looking boxplots from normalized gene expression data.

There are other packages one can rely on, for example ‘pcaExplorer’, but sometimes I like a simple approach that plots just a couple of genes. So below I show you how to quickly plot your favorite gene using only ggplot2 (there is no “one weird trick”, though…):

library(DESeq2)
library(tibble)
library(ggplot2)
# normalized counts for TRAF1, pulled from the DESeqDataSet 'ddsTxi' by Ensembl ID
traf1_counts <- counts(ddsTxi['ENSG00000056558',], normalized = TRUE)
# combine the counts and the group metadata into a named list, then a tibble
m <- list(counts = as.numeric(traf1_counts), group = as.factor(samples$group))
m <- as_tibble(m)
# boxplot plus jittered points, with readable group labels
q <- ggplot(m, aes(group, counts)) + geom_boxplot() + geom_jitter(width = 0.1)
q <- q + labs(x = "Experimental Group", y = "Normalized Counts", title = "Normalized Expression of TRAF1")
q <- q + scale_x_discrete(labels=c("00hn" = "PMN, 0hrs", "06hn" = "PMN, 6hrs",
                                   "06hy" = "PMN+Hp, 6hrs", "24hn" = "PMN, 24hrs", "24hy" = "PMN+Hp, 24hrs"))
q

As you can see above, we first grab the normalized counts from the row corresponding to the TRAF1 Ensembl ID, using the ‘counts’ function that operates on the ‘ddsTxi’ DESeqDataSet object.

In order to create a dataframe (well, a tibble, to be specific) for plotting, we first create a list (‘m’) that combines the counts (as a numeric vector) and the metadata group. These two vectors will form the columns of the tibble for plotting, and we must give them names (i.e., “counts” and “group”) so the tibble conversion doesn’t complain.

The list, m, is then converted to a tibble with ‘as_tibble’ and plotted with ggplot2, using an ‘aes(group, counts)’ mapping plus a boxplot geom. The rest of the code just modifies the axis labels and tick marks. The final product looks like this:

Boxplot of normalized Traf1 expression in 5 different conditions (3 replicates each).