A brief look at machine-learning powered literature search

Machine learning (ML) and neural networks are transforming data science and the life sciences. They are being applied to the challenge of making sense of piles of ‘big data’ that keep growing bigger.

Now these same tools are being applied to searching the gigantic scientific literature databases (PubMed alone contains more than 30M citations) in order to bring more relevant results to researchers.

A simple PubMed search proceeds by matching terms like the following:

…if you enter child rearing in the search box, PubMed will translate this search to: “child rearing”[MeSH Terms] OR (“child”[All Fields] AND “rearing”[All Fields]) OR “child rearing”[All Fields]

https://www.ncbi.nlm.nih.gov/books/NBK3827/
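You can see this same term mapping programmatically: the NCBI E-utilities expose PubMed search over HTTP. Below is a minimal sketch using Python's requests library against the esearch endpoint; the JSON response reports the number of hits and how PubMed expanded the query in its "querytranslation" field.

```python
# Query PubMed through the NCBI E-utilities "esearch" endpoint and show how
# the search term was automatically expanded (MeSH mapping, field tags, etc.).
import requests

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": "child rearing", "retmode": "json"},
)
result = resp.json()["esearchresult"]
print(result["count"])              # number of matching citations
print(result["querytranslation"])   # PubMed's expanded version of the query
```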

If you want something potentially more sophisticated than PubMed-style term matching, take a look at the tools below. Without having used each one extensively, it’s difficult for me to tell whether the results are an improvement on PubMed or Google, but let’s jump in and explore each one briefly:

Semantic Scholar

First up is Semantic Scholar. According to the “about me” page, SS is aimed at helping researchers find relevant publications faster. It analyzes whole documents and extracts meaningful features using various types of ML. The authors claim that this approach surfaces influential citations, key images, and key phrases, and lets the researcher focus on impactful publications first. They claim to index 176M articles and to have filters for high-quality publications, though details about these are scarce.

A search results page from Semantic Scholar search for “single-cell RNA-seq”

The search results have some nice features. Above is a screencap of the results for a “single-cell RNA-seq” search. In the image below, you can see that beneath each paper’s title and abstract are a couple of numbers in orange. The number on the left is the count of “highly influential citations,” i.e., the number of citing papers in which this paper played an important role. The number on the right is the “citation velocity,” the average number of citations per year for the work. There are also several useful buttons, including a link out, a button that brings up the citation in a variety of formats, a “save” button, and a button to add the paper to my Paperpile library.
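Semantic Scholar also exposes a public API that returns these metrics as plain JSON. The sketch below is hedged: the endpoint and field names reflect the current Graph API documentation and may differ from what was available when this post was written.

```python
# Search Semantic Scholar and print citation metrics for the top hits.
# Endpoint and field names (e.g. influentialCitationCount) are assumptions
# based on the public Graph API, not something taken from the post itself.
import requests

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={
        "query": "single-cell RNA-seq",
        "fields": "title,year,citationCount,influentialCitationCount",
        "limit": 5,
    },
)
for paper in resp.json().get("data", []):
    print(paper["year"], paper["citationCount"],
          paper["influentialCitationCount"], paper["title"])
```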

Clicking through on one paper yields a page that looks like this:

A results page from Semantic Scholar. Key figures are pulled out and highlighted for quick viewing. Key topics covered in the work are shown on the right.

This nice, clean interface makes it easy to absorb the content of the paper, including browsing the abstract and key figures. There is also a metrics box in the upper right that shows how many times the paper has been cited, how many citing papers are “highly influenced,” and where in the citing papers this paper is referenced. The headings across the middle of the page index the sections below: “Figures and Topics”; “Media Mentions,” where SS finds blog posts and online reports that mention the work; “Citations,” a list of the citing papers; “References,” the papers referenced by this paper; and “Similar Papers,” papers that cover related topics.

Iris.ai

Iris.ai is a machine-learning tool that uses neural networks to build knowledge graphs of publications. The “about me” section includes a cutesy intro in the first person, as if the algorithm were just a really smart person reading a lot of papers rather than a research project. Anyway, Iris claims to have “read” at least 77M papers in its core database. There is a good article here detailing the evolution of Iris since its founding in 2016, and the Iris.ai blog is a good place to learn about updates to the method.

When you perform a search with Iris.ai the interface looks like this:

Above: The search interface for Iris.ai.

This looks like a standard search bar, but instead of entering keywords you either paste the URL of a paper you are interested in or write a title plus a 300-word paragraph describing a research problem. So there is some work up front to get useful results, but it is possibly worth it if you need to dive deep into the literature. Let’s take a look at those results below.

Above: Search results for the paper “CNVkit: Genome-Wide Copy Number Detection and Visualization from Targeted DNA Sequencing.”

OK, this is wild. I’ve never seen a search result like this: a “map” of the knowledge surrounding the paper you searched. In this case, I searched the CNVkit paper. Each “cell” in the map can be zoomed in on, revealing sub-categories that further break down the knowledge and context of the papers. Below the map are the papers themselves.

Here I’ve zoomed in on the “Target” cell and then the “Re-sequencing” cell. Now I’m down to the individual papers that make up this cell.

I hope you’ve enjoyed this brief tour through some advanced ML-powered literature searching tools. I am going to make an effort to incorporate these into my own work with literature searching and see what difference it makes (maybe a subject for a future post).

New “10 simple rules” paper for bioinformatics collaborations coming soon…

Excited to announce that I’ve been working with some fantastic computational biologists (that I met at ISMB2018) on a “10 simple rules” style paper for creating and promoting effective bioinformatics collaborations with wet-lab biologists.  We will leverage our many years of combined bioinformatics core experience to create these “10 simple rules.”

We will touch on:

–experiment planning and design.

–data management plans, data QC, and record keeping.

–avoiding batch effects and contamination.

–managing expectations and developing clear communications.

–handling low-quality data when things do go wrong.

Hope to have this out by the end of 2018…watch this space!

Matataki, a new ultrafast read mapper: hope or hype?

tl;dr: It’s only 2X faster than kallisto, a marginal rather than “ultra” improvement. If you’re already using kallisto in your pipelines, there’s no reason to switch. Now, on to the blog post:

Another RNA-seq read aligner

In July, a paper appeared in BMC Bioinformatics: “Matataki: an ultrafast mRNA quantification method for large-scale reanalysis of RNA-Seq data.”   The title seems to imply that Matataki is much faster than anything else out there.  After all, what does ‘ultrafast’ mean unless it’s in comparison to the next fastest method?  The authors claim it also solves a unique problem of large-scale RNA-seq reanalysis that can’t be solved any other way.  Is this true?  Let’s dive into the details and find out.

The authors point out in the introduction that the amount of publicly available RNA-seq data is increasing rapidly and that many would like to do large-scale reanalysis of these datasets.   Reanalysis (including realignment) is necessary because of the large batch effects of using different methods for alignment and quantification.

Although these alignment-free methods such as Sailfish, Kallisto, and RNA-Skim are much faster than the alignment-based methods, the recent accumulation of large-scale sequence data requires development of an even faster method for data management and reanalysis.

Counting unique k-mers

The method itself eschews traditional alignment for a k-mer based read-counting approach.  This approach takes two steps: build an index, then quantify the reads against it.  Building the index requires the algorithm to find all k-mers that are unique to a gene and also present in all isoforms of that gene (to avoid the effects of differential isoform expression).  Quantification proceeds by matching the k-mers from a read against the unique k-mers in the index.  If the matching k-mers (above a minimum count set by the user) point to a single indexed gene, the read is assigned to that gene; if they point to more than one gene, the read is discarded.
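To make those two steps concrete, here is a minimal Python sketch of the idea. This is not the authors’ code; the k-mer length, skip interval, and hit cutoff are illustrative stand-ins for Matataki’s actual parameters.

```python
# Sketch of unique-k-mer indexing and read assignment (illustrative only).
from collections import defaultdict

K = 31          # k-mer length (assumed; Matataki tunes this)
SKIP = 8        # fixed interval between sampled k-mers in a read (assumed)
MIN_HITS = 2    # minimum matching k-mers needed to assign a read (assumed)

def kmers(seq, k=K, step=1):
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, step)]

def build_index(gene_to_isoforms):
    """gene_to_isoforms: dict mapping gene -> list of isoform sequences."""
    union = {g: set().union(*(set(kmers(t)) for t in txs))
             for g, txs in gene_to_isoforms.items()}
    shared = {g: set.intersection(*(set(kmers(t)) for t in txs))
              for g, txs in gene_to_isoforms.items()}
    # count how many genes contain each k-mer anywhere among their isoforms
    occurrences = defaultdict(int)
    for ks in union.values():
        for km in ks:
            occurrences[km] += 1
    # keep k-mers present in all isoforms of a gene AND in no other gene
    return {km: g for g, ks in shared.items() for km in ks if occurrences[km] == 1}

def assign_read(read, index):
    hits = defaultdict(int)
    for km in kmers(read, step=SKIP):   # skip along the read at a fixed interval
        gene = index.get(km)
        if gene is not None:
            hits[gene] += 1
    genes = [g for g, n in hits.items() if n >= MIN_HITS]
    return genes[0] if len(genes) == 1 else None  # ambiguous reads are discarded
```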

If this sounds a lot like how kallisto operates under the hood, I would agree with you as would the authors:

Similar to Kallisto, our method uses k-mers that appear only once in a gene and quantifies expression from the number of unique k-mers.

Of course, kallisto builds a transcriptome de Bruijn graph (tDBG) and uses compatibility classes to decide whether to skip k-mers while hashing a read.  Matataki does not use a tDBG; it simply skips along a read at a fixed interval set by the user.  The code also omits the expectation-maximization (EM) step found in kallisto and focuses only on gene-level quantification.  These two simplifications are what set Matataki apart from other pseudo-alignment algorithms, according to the authors.

But does it work?

Well, it appears to.  The authors start by reporting the performance of the method on simulated data using optimized parameters (more on this below).  In Figure 2, they describe the parameter tuning that led to those choices.  For tuning against real data, the TPM values from an eXpress run on sample ERR188125 were taken as “ground truth.”  The choice of eXpress as the benchmark for real-world quantification is odd, given that eXpress has been defunct for some time.  It also takes orders of magnitude longer to run than a method like sailfish or kallisto.

Fig 2. Comparison of TPM when k was varied. The x-axis shows the TPM values of eXpress, the y-axis shows the TPM values of our method, and the color indicates the indexed k-mer coverage of each gene when changing k from 16 to 56 with a step of 8

Going back to Figure 1 (below) and the performance on simulated data, we see accuracy similar to that of kallisto, RSEM, sailfish, and the other methods (note that the y-axis runs from 0.93 to 0.99, which exaggerates the differences between methods).  The one simulated sample was created by RSEM from the real sample plus three other related samples, so perhaps it’s not surprising that RSEM does best in this comparison overall.

It’s also important to remember that Matataki is run on this simulated data with parameters optimized for this sample, while the other methods are run with default parameters.  We don’t know how Figures 1a and 1b would change for other samples, other organisms, or other sequencing setups (PE vs. SE, long vs. short reads, etc.).  It would be enlightening to find out.

Figure 1. Summary of the results using simulation data. a Spearman correlation coefficient with the expected expression and estimated expression values using each method. “Matataki” indicates the results of the proposed method, and “MatatakiSubset” indicates the results of the proposed method without uncovered genes. To compare the gene-level expression profile and transcript-level expression profile, the sum of TPM by each gene was calculated. b Means of absolute difference from the expected expression levels.

So how fast is ‘ultrafast’ anyway?

We finally get to the promised speed improvements near the end of the manuscript, only to find that the speedup relative to kallisto, a method that has been publicly available since at least 2015, is about 2-fold.  Yep, that’s it: it’s twice as fast.  That is an incremental improvement, not a paradigm shift.  If anything, it underlines how the leap from alignment to pseudo-alignment really was a paradigm shift when kallisto first came out; three years on, nothing has substantially improved on it.

Fig 3. CPU time comparison.

Limitations?

Finally, the authors end the paper with a section on ‘expected use-cases and limitations’ where they say that the method is only suitable for large-scale reanalysis and not for normal, turn-the-crank RNA-seq.

Nevertheless, Matataki is not suitable for common RNA-Seq purposes because other methods are sufficiently fast and provide better accuracy. For example, a single nucleotide substitution has larger effects in Matataki than in other methods, because even a single point substitution changes the k-mer for 2k − 1 bases, which ultimately affects the number of k-mers in a transcript and calculation of the TPM value.
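A quick way to convince yourself of that arithmetic: a substitution at a single position alters every one of the k k-mers overlapping it, and those k-mers together span a window of 2k − 1 bases. A tiny, self-contained demonstration:

```python
# One substitution in a 100 bp sequence: count the k-mers it changes and the
# width of the window they cover (expected: k k-mers spanning 2k - 1 = 61 bases).
k = 31
ref = "A" * 100
alt = ref[:50] + "C" + ref[51:]           # single point substitution

def indexed_kmers(seq):
    return {(i, seq[i:i + k]) for i in range(len(seq) - k + 1)}

changed_starts = {i for i, km in indexed_kmers(alt) - indexed_kmers(ref)}
print(len(changed_starts))                            # 31  (= k altered k-mers)
print(max(changed_starts) + k - min(changed_starts))  # 61  (= 2k - 1 bases)
```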

For reasons that I don’t understand, the authors promote this tool (which is only 2X faster than kallisto) as being for large-scale reanalysis, yet they performed no large-scale reanalysis to benchmark it.  Instead, they benchmarked it on ordinary RNA-seq quantification, the very use case they go on to warn against.

To sum up: while a 2X speedup is useful, I see no reason to abandon kallisto for pseudoalignment and quantification, even for large-scale reanalysis.  Unlike Matataki, kallisto is robust to read errors, and it delivers transcript-level estimates that can later be summed to the gene level if you so desire.  It also provides bootstrapped estimates of technical variation, which helps in understanding the biological variation as well.  And finally, kallisto integrates nicely with sleuth for DE testing and visualization.
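For what it’s worth, summing kallisto’s transcript-level output to gene level is only a couple of lines of pandas (the R package tximport does this more rigorously). The column names below are kallisto’s standard abundance.tsv output; the t2g.tsv transcript-to-gene mapping is a hypothetical file you would supply yourself.

```python
# Sum kallisto transcript-level abundances to gene level (rough sketch).
import pandas as pd

abund = pd.read_csv("abundance.tsv", sep="\t")   # target_id, length, eff_length, est_counts, tpm
t2g = pd.read_csv("t2g.tsv", sep="\t", names=["target_id", "gene_id"])

gene_level = (abund.merge(t2g, on="target_id")
                   .groupby("gene_id")[["est_counts", "tpm"]]
                   .sum())
gene_level.to_csv("gene_level_abundance.tsv", sep="\t")
```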

Decode SAM files with these handy references

I recently had to inspect some genomic alignments as part of a project.  Usually I am working with BAM files, and if inspection is needed, I simply visualize the pileups to see what is going on.

In this case, I just wanted a quick answer to how the reads were aligning to the reference, and I didn’t want to go through the process of subsetting and copying the BAM files to my local machine.

The SAM file is the uncompressed record of read alignments produced by an aligner (STAR, TopHat, BWA, etc.).  This file can get very large, so it is usually compressed into BAM (faster for machines to parse, but not human readable) and the SAM file is discarded.
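If you ever need to do that compression step yourself, one option (pysam is an assumption on my part, not something used in this post) looks like this:

```python
# Convert an uncompressed SAM file to BAM using pysam.
import pysam

with pysam.AlignmentFile("aln.sam", "r") as sam, \
     pysam.AlignmentFile("aln.bam", "wb", template=sam) as bam:
    for read in sam:
        bam.write(read)
```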

In my case, I still had the SAM files around to inspect.  If you find yourself needing to read a SAM file, here are three helpful reference tools to make the process less painful:

1)  This page has an enormous amount of detail about SAM files including this helpful chart that enumerates all of the fields that you can expect to find specified within each alignment:

SAM file structure explained in this handy chart.
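If you would rather not keep the chart open, the eleven mandatory, tab-separated fields come straight from the SAM specification and are easy to pull apart with the standard library alone:

```python
# Parse the 11 mandatory SAM fields from each alignment line (headers start with '@').
FIELDS = ["QNAME", "FLAG", "RNAME", "POS", "MAPQ",
          "CIGAR", "RNEXT", "PNEXT", "TLEN", "SEQ", "QUAL"]

def parse_sam_line(line):
    cols = line.rstrip("\n").split("\t")
    record = dict(zip(FIELDS, cols[:11]))
    record["OPTIONAL"] = cols[11:]        # any trailing TAG:TYPE:VALUE fields
    return record

with open("aln.sam") as handle:
    for line in handle:
        if line.startswith("@"):
            continue
        rec = parse_sam_line(line)
        print(rec["QNAME"], rec["RNAME"], rec["POS"], rec["CIGAR"])
```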

 

2) This post from the blog “zenfractal.com” contains a great exposition on CIGAR strings and how to decode them:
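If you want to decode CIGAR strings without leaving Python, here is a small sketch of my own (not zenfractal’s code); the operation codes and which of them consume read versus reference bases are taken from the SAM specification.

```python
# Decode a CIGAR string into (length, operation) pairs and total the bases
# consumed on the read and on the reference.
import re

CIGAR_RE = re.compile(r"(\d+)([MIDNSHP=X])")
CONSUMES_READ = set("MIS=X")
CONSUMES_REF = set("MDN=X")

def decode_cigar(cigar):
    ops = [(int(n), op) for n, op in CIGAR_RE.findall(cigar)]
    read_bases = sum(n for n, op in ops if op in CONSUMES_READ)
    ref_bases = sum(n for n, op in ops if op in CONSUMES_REF)
    return ops, read_bases, ref_bases

print(decode_cigar("76M"))          # a perfect 76 bp match
print(decode_cigar("5S30M2D41M"))   # soft clip, match, 2 bp deletion, match
```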

3)  And finally, if you’re trying to decode the SAM bitwise flags, you can calculate them using this tool from the Broad Institute:

Decode SAM flags with this handy online tool from the Broad Institute.
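You can also decode the flags without the web tool, since the FLAG column is just a bitfield; the bit definitions below are from the SAM specification.

```python
# Decode a SAM FLAG value into its human-readable components.
SAM_FLAGS = [
    (0x1, "read paired"),
    (0x2, "read mapped in proper pair"),
    (0x4, "read unmapped"),
    (0x8, "mate unmapped"),
    (0x10, "read reverse strand"),
    (0x20, "mate reverse strand"),
    (0x40, "first in pair"),
    (0x80, "second in pair"),
    (0x100, "not primary alignment"),
    (0x200, "read fails platform/vendor quality checks"),
    (0x400, "PCR or optical duplicate"),
    (0x800, "supplementary alignment"),
]

def decode_flag(flag):
    return [name for bit, name in SAM_FLAGS if flag & bit]

print(decode_flag(99))   # ['read paired', 'read mapped in proper pair',
                         #  'mate reverse strand', 'first in pair']
```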

 

 

Get hands-on with t-SNE plots

With the growing popularity of single-cell RNA-Seq analysis, the t-SNE projection of multi-dimensional data is appearing more often in publications and online.  If you’ve ever wanted to develop a better intuitive feel for what exactly t-SNE does and where it can go wrong, this interactive tutorial (by Martin Wattenberg and Fernanda Viegas) is extremely compelling and useful.

A screen capture of the interactive t-SNE interface.

In addition to the wonderful interactive plotting function, the authors provide an informative tutorial that explains the pitfalls and challenges of optimizing t-SNE projections and tuning their hyper-parameters, and how to get the most from the plots.  Here is an example:

An example of how hyperparameter tuning affects the final plot.

In the example above, tuning the “perplexity” of the t-SNE projection yields a correct reconstruction of the data when the value is between 30 and 50, but the same method fails when the parameter falls outside that range (i.e., is too small or too large).
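If you want to poke at the same knob on your own data, scikit-learn exposes perplexity directly; the toy example below (my own sketch, not part of the tutorial) simply shows where the parameter goes.

```python
# Run t-SNE at several perplexity values on a toy two-cluster dataset.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(100, 50)),
               rng.normal(5, 1, size=(100, 50))])   # two well-separated clusters

for perplexity in (2, 30, 100):
    embedding = TSNE(n_components=2, perplexity=perplexity,
                     random_state=0).fit_transform(X)
    print(perplexity, embedding.shape)   # same data, potentially very different layouts
```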

Go check out this distill.pub site.  It’s worth your time.