ATAC-seq best practices (tips)

Figure 3. Signal features generated by different methods for profiling chromatin accessibility.

A few best practices for ATAC-seq assays are suggested as follows:

  • Digest away background DNA (from the medium and dead cells) using DNase I [22]
  • Use fresh or cryopreserved cells/tissues to isolate nuclei [7,9]
  • Reduce mitochondrial/chloroplast DNA contamination as much as possible by using the Omni-ATAC protocol or other methods [22–25]
  • Optimize the ratio of the amount of Tn5 enzyme to the number of nuclei
  • Optimize the number of PCR cycles [19]
  • Perform paired-end (PE) sequencing, e.g., 2 x 50 to 100 bp
  • Sequence > 50 M PE reads (~200 M for footprinting analysis) [7]

A few best practices for ATAC-seq data analysis are suggested as follows:

  • Perform raw read QC using FastQC before alignment
  • Perform post-alignment QC using ATACseqQC [10]
  • Perform peak calling using a peak caller, such as MACS2 [26] in narrowPeak mode with reads shifted by -s and extended to 2s (see the sketch below), Genrich, or HMMRATAC [27]
  • Perform post-peak calling QC
    • Annotate peaks and summarize their distribution among genomic features using ChIPpeakAnno [28]
    • Obtain the functions of genes associated with peaks using the Genomic Regions Enrichment of Annotations Tool (GREAT) [29]

Copied From: https://haibol2016.github.io/ATACseqQCWorkshop/articles/ATACseqQC_workshop.html
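
As a supplement to the copied lists above (my own addition, not part of the workshop material), here is a minimal sketch of that shift-and-extend MACS2 call in Python, assuming macs2 is installed and on your PATH; the file names and the smoothing size s = 100 are placeholders:

import subprocess

# Hypothetical ATAC-seq peak call: reads shifted by -s and extended to 2s
# (here s = 100). Input/output names are placeholders, not workshop files.
subprocess.run([
    "macs2", "callpeak",
    "-t", "atac_sample.bam",  # alignments for one ATAC-seq sample
    "-f", "BAM",
    "-g", "hs",               # human effective genome size
    "--nomodel",              # skip the ChIP-seq fragment model
    "--shift", "-100",        # shift reads by -s
    "--extsize", "200",       # extend to 2s
    "-q", "0.05",             # FDR cutoff
    "-n", "atac_sample",      # output file prefix
], check=True)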

A brief look at machine-learning powered literature search

Machine learning (ML) and neural networks are transforming data science and the life sciences. They are being applied to make sense of piles of ‘big data’ that keep growing bigger all the time.

Now these same tools are being applied to searching the gigantic scientific literature databases (PubMed contains more than 30 million citations) in order to bring more relevant results to researchers.

A simple PubMed search proceeds by matching terms like the following:

…if you enter child rearing in the search box, PubMed will translate this search to: “child rearing”[MeSH Terms] OR (“child”[All Fields] AND “rearing”[All Fields]) OR “child rearing”[All Fields]

https://www.ncbi.nlm.nih.gov/books/NBK3827/
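
You can see this translation programmatically, too. Here is a minimal sketch using Biopython’s Entrez module (my own example, assuming Biopython is installed; the email address is a placeholder, NCBI just asks that you provide one):

from Bio import Entrez

# NCBI asks for a contact email with every E-utilities request.
Entrez.email = "you@example.com"  # placeholder

# Run the same search the PubMed web interface performs...
handle = Entrez.esearch(db="pubmed", term="child rearing")
result = Entrez.read(handle)
handle.close()

# ...and inspect how PubMed translated the query, plus the hit count.
print(result["QueryTranslation"])
print("hits:", result["Count"])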

If you want something potentially more sophisticated than PubMed-style term matching, take a look at the tools below. Without having used each one extensively, it’s difficult for me to tell whether the results are an improvement on PubMed or Google, but let’s just jump in and explore each one briefly:

Semantic Scholar

First up is Semantic Scholar. According to the “about me” page, SS is aimed at helping researchers find relevant publications faster. It analyzes whole documents and extracts meaningful features using various types of ML. The authors claim that this method surfaces influential citations, key images, and key phrases, and allows the researcher to focus on impactful publications first. They claim to index 176M articles and have filters for high-quality publications. Details about this are scarce, however.

A search results page from Semantic Scholar search for “single-cell RNA-seq”

The search results appear to have some nice features. Above is a screencap of the results for a “single-cell RNA-seq” search. In the image below, you can see that beneath each paper title and abstract are a couple of numbers in orange. The number on the left is the number of “highly influential citations”: the number of citing papers in which this paper played an important role. The number on the right is the “citation velocity,” the average number of citations per year for that work. Then there are several more useful buttons, including a link out, a button that brings up the citation in a variety of formats, a “save” button, and a button to add the paper to my Paperpile library.
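
These numbers are also available programmatically. Below is a minimal sketch against what I understand to be Semantic Scholar’s public search API (the endpoint and field names are my assumptions from the public Graph API, not something quoted from SS’s documentation):

import requests

# Search Semantic Scholar and pull the counts shown in the results list.
resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={
        "query": "single-cell RNA-seq",
        "fields": "title,year,citationCount,influentialCitationCount",
        "limit": 5,
    },
    timeout=30,
)
resp.raise_for_status()
for paper in resp.json()["data"]:
    print(paper["title"], paper["citationCount"],
          paper["influentialCitationCount"])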

Clicking through on one paper yields a page that looks like this:

A results page from Semantic Scholar. Key figures are pulled out and highlighted for quick viewing. Key topics covered in the work are shown on the right.

This nice, clean interface makes it easy to absorb the content of the paper, including browsing the abstract and key figures. You also have a metrics box in the upper right that shows how many times the paper has been cited, how many of those citing papers are “highly influenced,” and where in the citing papers this paper is referenced. The headings across the middle of the results page break down the sections below: “Figures and Topics”; “Media Mentions,” where SS finds blog posts and online reports that mention the paper; “Citations,” a list of the citing papers; “References,” the papers referenced by this paper; and “Similar Papers,” which covers papers on related topics.

Iris.ai

Iris.ai is a machine-learning tool that uses neural networks to build knowledge graphs about publications. The “about me” section includes a cutesy intro in the first person, as if the algorithm were just a really smart person reading a lot of papers and not a research project. Anyway, Iris claims to have “read” at least 77M papers in the core database. There is a good article here detailing the evolution of Iris since her founding in 2016, and the Iris.ai blog is a good place to learn of updates to the method.

When you perform a search with Iris.ai the interface looks like this:

Above: The search interface for Iris.ai.

This looks like a standard search bar, but instead of entering keywords you either input the URL of a paper you are interested in, or you write a title and a 300-word paragraph describing a research problem. So there is some work up front to get to useful results, but it is possibly worth it if you need to deep-dive into the literature. Let’s take a look at those results below.

Above: Search results for the paper “CNVkit: Genome-Wide Copy Number Detection and Visualization from Targeted DNA Sequencing.”

OK, this is wild. I’ve never seen a search result like this “map” of the knowledge that results from searching a paper. In this case, I searched the “CNVkit” paper. Each “cell” in this map can be zoomed in on, revealing sub-categories that further break down the knowledge and context of the papers. Below that are the actual papers themselves.

Here I’ve zoomed in on the “Target” cell and then the “Re-sequencing” cell. Now I’m down to the individual papers that make up this “cell.”

I hope you’ve enjoyed this brief tour through some advanced ML-powered literature searching tools. I am going to make an effort to incorporate these into my own work with literature searching and see what difference it makes (maybe a subject for a future post).

New job: Director of IIHG Bioinformatics

I’m thrilled to report that I’ve been promoted to the position of Director of our bioinformatics group here at the University of Iowa. We are within the Iowa Institute of Human Genetics (IIHG) and we support clinical activities in the institute, but also a wide array of research collaborations across the University.

I have a lot of goals and ideas for the group and look forward to working to implement those going forward. I may not be able to write posts here as often, but I’ll try to keep up with it. We also have a new twitter account: @iowabioinfo. Please follow us there.

Conference report: GLBIO2019

I just returned from another great experience at Great Lakes Bio 2019 (#GLBIO2019), a regional meeting of the International Society for Computational Biology (ISCB). Below I’ll briefly summarize a few of the talks I personally found most interesting (there were several parallel tracks, so I did not attend all talks).

Docker workshop taught by Sara Stevens

On Sunday of the conference, I attended a 3-hour workshop introducing Docker technology, held in the beautiful and very modern Wisconsin Institutes for Discovery building. The course was taught by Sara Stevens, an expert in data science and bioinformatics with the Data Science Hub at the University of Wisconsin-Madison.

We worked through an initial “hello world” application of Docker on our laptops, writing a Dockerfile that became an image and finally a container instance of that image:

mchiment@MNE762:~/Desktop/docker-playground/my-greeting$ cat Dockerfile
# specify the base image
FROM alpine
# specify what to build
RUN /bin/echo "greeting!" > /root/my_message
# give default command
CMD ["/bin/cat", "/root/my_message"]

Then we progressed into more complex Dockerfile builds, including one that would install a mini-python distro and run a program. This included installing some libraries with pip within the image, and running a script.
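
I didn’t keep the exact files, but the second Dockerfile looked something like this (a reconstruction from memory; the base image, library, and script name are my placeholders, not the workshop’s exact files):

# hypothetical reconstruction -- placeholders, not the workshop's files
# start from a miniconda base image for a minimal python distro
FROM continuumio/miniconda3
# install a library into the image with pip
RUN pip install numpy
# copy a script into the image
COPY my_script.py /root/my_script.py
# run the script by default
CMD ["python", "/root/my_script.py"]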

Overall, I learned a lot and got a good grasp of the Docker basics to build upon for future work.

Integrative analysis for fine mapping of genetic variants, Sunduz Keles

In this talk, the issue of how to make sense of GWAS data was addressed: if you have a collection of SNPs, how do you decide which genes to study, which mechanisms to propose, and so on? The talk introduced a tool, atSNP Search, which uses transcription factor position weight matrices (PWMs) to assess the impact of a SNP on TF DNA-binding activity in the local region around the SNP.

From the website:

atSNP identifies and quantifies best DNA sequence matches to the transcription factor PWMs with both the reference and the SNP alleles in a small window around the SNP location (up to +/- 30 base pairs and considering subsequences spanning the SNP position). It evaluates statistical significance of the match scores with each allele and calculates statistical significance of the score difference between the best matches with the reference and SNP alleles.
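
To make the PWM idea concrete, here is a toy sketch of that comparison (my own illustration, not atSNP’s actual code; the matrix, sequences, and uniform background are made up, and I ignore the reverse strand for brevity):

import numpy as np

# Toy position weight matrix (rows: A, C, G, T; columns: motif positions).
# Real PWMs come from motif databases; these probabilities are made up.
pwm = np.array([
    [0.7, 0.1, 0.1, 0.8],   # A
    [0.1, 0.1, 0.7, 0.1],   # C
    [0.1, 0.7, 0.1, 0.05],  # G
    [0.1, 0.1, 0.1, 0.05],  # T
])
base_index = {"A": 0, "C": 1, "G": 2, "T": 3}
background = 0.25  # uniform background base frequency

def best_match_score(seq):
    """Log-odds score of the best PWM match among all windows in seq."""
    width = pwm.shape[1]
    return max(
        sum(np.log2(pwm[base_index[b], j] / background)
            for j, b in enumerate(seq[start:start + width]))
        for start in range(len(seq) - width + 1)
    )

ref_seq = "TTACGA"  # small window around the SNP, reference allele
snp_seq = "TTACGT"  # same window with the SNP allele at the last base
print("score difference:",
      best_match_score(ref_seq) - best_match_score(snp_seq))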

The talk also introduced a method, “FM-HighLD,” which asks whether functional annotations of SNPs can substitute for massively parallel reporter assays (MPRAs), which are considered the “gold standard” for assessing SNP/eQTL function. The idea is to use MPRA results and their correlation with functional annotations to calibrate the model, and then apply it to eQTLs or GWAS SNPs that have no MPRA results but do have functional annotations in public databases.

refine.bio

There is over $4 billion worth of publicly funded RNA-seq and microarray data in the public repositories, and studies have shown that analysts can spend up to 30% of a project’s time just searching for, accessing, downloading, and preprocessing these data.

Refine.bio is an attempt to “harmonize” thousands of gene expression datasets by downloading and preprocessing them with a common pipeline and a common reference. This is only feasible owing to the speed of pseudo-alignment in methods like kallisto and salmon.
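
As a rough illustration of that pseudo-alignment step (my own sketch, not refine.bio’s actual pipeline code; the index and file names are placeholders, and salmon must be on your PATH):

import subprocess

# Quantify one paired-end RNA-seq sample against a prebuilt transcriptome
# index with salmon. All paths are placeholders.
subprocess.run([
    "salmon", "quant",
    "-i", "transcriptome_index",   # prebuilt salmon index
    "-l", "A",                     # auto-detect library type
    "-1", "sample_R1.fastq.gz",
    "-2", "sample_R2.fastq.gz",
    "-o", "sample_quant",          # output directory
], check=True)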

In the background, refine.bio runs on Amazon Web Services, which gives the project effectively unlimited compute and storage to scale with its needs. In addition to standardized gene expression processing, sample metadata are also harmonized: keywords are mapped to standard ontologies for ease of comparison.

Monitoring crude oil spills with 16S and machine-learning, Stephen Techtmann

In this work, Dr. Techtmann’s group was interested in the response of freshwater microbiomes drawn from Lake Superior to the introduction of different types of oil (a complex chemical mixture that acts as a carbon food source). The team drew lake water samples, incubated them with different oils (heavy crude, refined crude, etc.), and then assessed taxonomic abundance using 16S rRNA amplicon sequencing.

The taxa abundances were used to train a Random Forest model to predict oil-contamination status. The RF approach produced a model with very high accuracy (AUC > 0.9), and they found that two taxa predominantly distinguish the oil samples from the lake-water samples.
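
To illustrate the modeling step, here is a minimal sketch with scikit-learn (my own toy example, not the group’s code; the abundance matrix and labels are random placeholders, so unlike the real data it will score near AUC 0.5):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for the study's data: rows are water samples, columns are
# 16S taxon abundances, labels mark oil-exposed (1) vs. plain lake water (0).
rng = np.random.default_rng(0)
X = rng.random((40, 100))          # 40 samples x 100 taxa (random placeholder)
y = np.array([0] * 20 + [1] * 20)  # contamination status (placeholder)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("cross-validated AUC:",
      cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean())

# Feature importances point to the taxa driving the classification,
# analogous to the two discriminating taxa reported in the talk.
rf.fit(X, y)
print("top taxa (column indices):", np.argsort(rf.feature_importances_)[-2:])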