mixOmics: a “Swiss army knife” for -omics integration

Introduction

Genomics is the study of an organism’s complete set of genetic material, including its DNA sequence, genes, and regulation of gene expression. Other “omics” techniques, such as proteomics and metabolomics, focus on the study of proteins and metabolites, respectively. By analyzing these different types of data together, researchers can generate new insights into the inner workings of an organism and how it responds to its environment.

For example, by combining genomics data with proteomics and metabolomics data, researchers can gain a more complete understanding of an organism’s gene expression, protein production, and metabolic processes, and how these processes work together to maintain health or, when disrupted, give rise to disease. This knowledge can provide valuable insights for a wide range of applications, including drug development, disease diagnosis, and environmental monitoring.

Finding correlations between related datasets means looking for patterns or relationships between different sets of data. This can provide valuable insights into the underlying biological processes and functions of an organism. For example, if two datasets show a strong positive correlation, it suggests that they are related in some way and that changes in one dataset may be associated with changes in the other. By identifying these correlations, researchers can better understand the mechanisms behind biological processes and how they are regulated. This can be useful for a variety of applications, such as predicting the effects of potential drugs or identifying potential targets for medical intervention.

I have surveyed the literature for tools to integrate multiple -omics datasets together. As is the case for any task in bioinformatics, there are dozens of options. However, after considering criteria such as ease of installation, documentation quality, a robust user community, user support, and published analyses, I believe the “mixOmics” package (available for download and installation from Bioconductor) is one of the best tools out there for this type of integration analysis.

The mixOmics approach

The mixOmics package encompasses many variants of multivariate algorithms for integrating multiple datasets. Multivariate analysis is well suited to this problem space, where there are far more features than samples. By reducing the dimensionality of the data, the analysis makes it easier for a human analyst to see patterns and interpret correlations. The most common family of algorithms in mixOmics for doing this is “partial least squares.”

Fig 1. An overview of the mixOmics package. The methods can handle single ‘omics, multiple ‘omics on the same samples (N-integration), and the same ‘omics on multiple sets of samples (P-integration) to find correlations in the data. Some examples of the graphic outputs are shown in the bottom two panels of the figure.

The partial least squares (PLS) method is a mathematical technique used to analyze relationships between two or more datasets. It works by identifying the underlying patterns and correlations in the data and then using this information to construct a set of “composite” variables that capture the most important features of the data (this is analogous to PCA, but differs by maximizing the correlation/covariance among latent variables rather than the variance within a single dataset).

These composite (latent) variables can then be used to make predictions or draw conclusions about the relationships between the datasets. For example, if two datasets are known to be related in some way, the PLS method can be used to identify the specific features of each dataset that are most strongly correlated with the other, and then construct composite variables based on these features. PLS is more robust than PCA to highly correlated features and can be used to make predictions of the dependent variables from the independent variables.
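
To make this concrete, here is a minimal, hedged sketch of a PLS fit using the nutrimouse example data that ships with mixOmics (gene expression and lipid concentrations measured on the same mice); the number of components and the plotting choices are illustrative, not prescriptive:

```{r, pls_sketch}
## Minimal sketch (not from the original analysis): PLS on the nutrimouse
## data bundled with mixOmics -- gene expression (X) and lipid
## concentrations (Y) measured on the same 40 mice.
library(mixOmics)

data(nutrimouse)
X <- nutrimouse$gene    # expression of 120 genes
Y <- nutrimouse$lipid   # concentrations of 21 lipids

## Fit a PLS model with two latent components
pls.res <- pls(X, Y, ncomp = 2)

## Project the samples onto the latent components, colored by diet
plotIndiv(pls.res, group = nutrimouse$diet, legend = TRUE)
```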

mixOmics takes the PLS method a step further by integrating a ‘feature-selection’ option called “sparse PLS” or just “sPLS,” which uses “lasso” penalization to remove unnecessary features from the final model, aiding interpretation and reducing computational time. Lasso regression works by adding a regularization term to the ordinary least squares regression model that penalizes model complexity. This regularization term, called the “lasso,” forces the coefficients of the less important predictors to zero, effectively eliminating them from the model.

This results in a simpler and more interpretable model that is better able to make accurate predictions. Lasso regression is particularly useful for datasets with a large number of predictors, as it can help to identify the most important predictors and reduce the risk of overfitting the model.
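
Continuing the sketch above, sparse PLS is exposed in mixOmics through the spls() function, where keepX and keepY control how many features the penalty retains on each component; the values below are arbitrary illustrations rather than recommendations:

```{r, spls_sketch}
## Sparse PLS on the same example data: keepX/keepY set how many
## features are retained on each of the two components.
spls.res <- spls(X, Y, ncomp = 2,
                 keepX = c(25, 25),   # keep 25 genes per component
                 keepY = c(10, 10))   # keep 10 lipids per component

## Correlation circle plot of the selected features
plotVar(spls.res, cutoff = 0.5)

## Which genes were selected on component 1?
selectVar(spls.res, comp = 1)$X$name
```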

Conclusion

In future posts, I will describe in more detail what can be done with mixOmics and show some results from our own studies that have produced stunningly detailed and intricate correlation networks. If you are interested in this kind of work, I would encourage you to check out mixOmics as a possible avenue for analysis. There are other packages, and many of them are excellent, but mixOmics has a shallow learning curve and is well supported by a dynamic and active user community. It is also very flexible across experimental scenarios, so you can analyze your data several different ways while using the same package and R script.

Perturbation analysis of spatial single cell RNA-seq with ‘augur’

Spatial single cell RNA-seq data are essentially regular single-cell RNA-seq data that have spatial coordinates associated through localization on a special capture slide. I had previously used so-called “perturbation” analysis successfully with 10X single-cell data and I wanted to apply the technique to spatial single cell to understand how a treatment affects the spatially-resolved clusters.

Here, I want to briefly describe the steps I went through to perform ‘augur’ perturbation analysis of 10X Visium Spatial single cell RNA-seq data. augur works as follows:

Augur is an R package to prioritize cell types involved in the response to an experimental perturbation within high-dimensional single-cell data. The intuition underlying Augur is that cells undergoing a profound response to a given experimental stimulus become more separable, in the space of molecular measurements, than cells that remain unaffected by the stimulus. Augur quantifies this separability by asking how readily the experimental sample labels associated with each cell (e.g., treatment vs. control) can be predicted from molecular measurements alone. This is achieved by training a machine-learning model specific to each cell type, to predict the experimental condition from which each individual cell originated. The accuracy of each cell type-specific classifier is evaluated in cross-validation, providing a quantitative basis for cell type prioritization.

I followed both the Seurat 10X Visium vignette and a dataset integration protocol to combine a treatment sample (a gene knockout, in this case) and a control sample (S1 and S2). Normalization was performed with “SCTransform,” as recommended for spatial RNA-seq data, prior to integration. PCA, k-nearest neighbors, clustering, and UMAP were calculated as described in the Seurat vignette using default values. Cell types were assigned in collaboration with the experimentalists.
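
For orientation, that workflow looks roughly like the sketch below. The input object names (s1, s2) and the parameter values are placeholders following the Seurat SCTransform integration vignette, not the exact code used for this dataset:

```{r, integration_sketch}
## Rough sketch of the SCTransform-based integration described above.
## Object names (s1, s2) and parameter values are illustrative only.
library(Seurat)

s.list <- list(s1, s2)   # knockout and control Visium objects
s.list <- lapply(s.list, SCTransform, assay = "Spatial")

features <- SelectIntegrationFeatures(object.list = s.list, nfeatures = 3000)
s.list   <- PrepSCTIntegration(object.list = s.list, anchor.features = features)
anchors  <- FindIntegrationAnchors(object.list = s.list,
                                   normalization.method = "SCT",
                                   anchor.features = features)
s1s2.int <- IntegrateData(anchorset = anchors, normalization.method = "SCT")

## Standard downstream steps from the Seurat vignette (default values)
s1s2.int <- RunPCA(s1s2.int)
s1s2.int <- FindNeighbors(s1s2.int, dims = 1:30)
s1s2.int <- FindClusters(s1s2.int)
s1s2.int <- RunUMAP(s1s2.int, dims = 1:30)
```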

With the integrated, clustered, and assigned dataset in hand, I was ready to enter the “augur” workflow as described in the paper, with some minor tweaks. First, because this is spatial and not regular scRNA-seq, there is no default “RNA” assay to set after integration. I chose to set “SCT” as the assay instead, because this represents the normalized and scaled dataset, which is what you want as input to an ML model.

```{r, celltype_priority}
library(Augur)

## Use the normalized/scaled SCT assay as input to Augur
DefaultAssay(s1s2.int) <- "SCT"

augur <- Augur::calculate_auc(s1s2.int,
                              label_col = "orig.ident",
                              cell_type_col = "cell_type",
                              n_threads = 6,
                              rf_params = list(trees = 15, mtry = 2,
                                               min_n = NULL, importance = "accuracy"),
                              n_subsamples = 25)
```

Above, you can see the actual call to Augur’s calculate_auc() method. I found that by specifying rf_params and reducing the number of trees, I got better separation between cell types in the AUC readout. The calculation takes about 20 minutes to run on a 2018 13-inch MacBook Pro laptop.

When the algorithm completes, you can visualize your results. Following the vignette for regular scRNA-seq, you can do this:

```{r, augur_umap}
library(patchwork)
library(ggplot2)   # for geom_point() and ggtitle()

p1 <- plot_umap(augur, s1s2.int, mode = "default", palette = "Spectral")
p1 <- p1 + geom_point(size = 0.1) + ggtitle("Augur Perturbation by Type (Red = Most)")
p2 <- DimPlot(s1s2.int, reduction = "umap", group.by = "cell_type") + ggtitle("S1/S2 Integrated Cell Types")
p1 + p2
```

The resulting plot looks like this:

Augur perturbation analysis by AUC (red is more perturbed; left) and UMAP plot of cell types (right).

This is great and helpful, but it doesn’t take advantage of the spatially resolved nature of the data. To do that, you have to modify the integrated Seurat object with the augur results:

```{r, augur_metadata}
library(tidyverse)   # for %>%, as_tibble(), rename(), and left_join()

### Make a dataframe of AUC results
auc_tab <- augur$AUC
auc_tab$rank <- c(1:9)

### Grab the cells by type and barcode
tib <- s1s2.int$cell_type %>% as_tibble(rownames = "Barcode") %>% rename(cell_type = value)

### Join the AUC information to the barcode on cell_type
tib <- tib %>% left_join(., auc_tab)

### Sanity check
assertthat::are_equal(colnames(s1s2.int), tib$Barcode)

### Update the Seurat object with new augur metadata
s1s2.int$AUC <- round(tib$auc, 3)
s1s2.int$RANK <- tib$rank
```

Here, I am simply pulling the AUC results into a table by cell type. Then I get the cell type information from the Seurat object and merge the AUC information into it by joining on cell_type. Setting new metadata on the Seurat object transfers the AUC and rank to each barcode (i.e., cell). I also do a sanity check to make sure the barcodes match (they do, as expected).

Now you can plot the spatially resolved AUC information:

```{r, spatial_auc}
SpatialDimPlot(s1s2.int, group.by = "AUC",
               cols = rev(c("#D73027", "#F46D43", "#FDAE61", "#FEE090", "#FFFFBF",
                            "#E0F3F8", "#ABD9E9", "#74ADD1", "#4575B4")))
```

This takes advantage of the “group.by” argument of the SpatialDimPlot() command to color by the AUC metadata. I’m also using a custom ColorBrewer color scheme that shades the cell types from low to high AUC along a rainbow for ease of viewing. The plot looks like this:

Spatially-resolved perturbation (AUC) of cell clusters in the WT (left) and knockout (right) samples.
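
As an aside, the hard-coded hex values passed to cols above are the 9-class “RdYlBu” ColorBrewer palette, so (assuming the RColorBrewer package is installed) the same vector can be generated programmatically:

```{r, palette}
## Equivalent to the hard-coded hex vector above: reversed 9-class RdYlBu
library(RColorBrewer)
auc_cols <- rev(brewer.pal(9, "RdYlBu"))
SpatialDimPlot(s1s2.int, group.by = "AUC", cols = auc_cols)
```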

A brief look at machine-learning powered literature search

Machine learning (ML) and neural networks are transforming data science and the life sciences. They are being applied to the challenge of making sense of piles of ‘big data’ that keep growing bigger all the time.

Now these same tools are being applied to searching the gigantic scientific literature databases (PubMed contains >30M citations) in order to bring more relevant results to researchers.

A simple PubMed search proceeds by matching terms like the following:

…if you enter child rearing in the search box, PubMed will translate this search to: “child rearing”[MeSH Terms] OR (“child”[All Fields] AND “rearing”[All Fields]) OR “child rearing”[All Fields]

https://www.ncbi.nlm.nih.gov/books/NBK3827/
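
For reference, this kind of plain term-matching query can also be scripted from R. The sketch below uses the rentrez package (my own addition for illustration; it is not one of the ML tools reviewed here) to run the same search against NCBI’s E-utilities:

```{r, pubmed_search}
## Plain term-matching PubMed search from R via NCBI's E-utilities
library(rentrez)

res <- entrez_search(db = "pubmed", term = "child rearing", retmax = 20)
res$count    # total number of matching citations
res$ids      # PubMed IDs of the first 20 hits
```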

If you want to get potentially more sophisticated than simply searching on matching terms, as PubMed does, take a look at the methods below. Without having used each one extensively, it’s difficult for me to tell if the results are an improvement on PubMed or Google, but let’s just jump in and explore each one briefly:

Semantic Scholar

First up is Semantic Scholar. According to the “about me” page, Semantic Scholar (SS) is aimed at helping researchers find relevant publications faster. It analyzes whole documents and extracts meaningful features using various types of ML. The authors claim that this method surfaces influential citations, key images, and key phrases, and allows the researcher to focus on impactful publications first. They claim to index 176M articles and have filters for high-quality publications. Details about this are scarce, however.

A search results page from Semantic Scholar search for “single-cell RNA-seq”

The search results appear to have some nice features. Above is a screencap of the results for a “single-cell RNA-seq” search. In the image below, you can see that beneath each paper title and abstract are a couple of numbers in orange. The number on the left is the number of “highly influential citations,” i.e., the number of papers in which this paper played an important role in the citing work. The number on the right is the “citation velocity,” which represents the average number of citations per year for that work. Then there are several more useful buttons, including a link out, a button that brings up the citation in a variety of formats, a “save” button, and a button to add the paper to my Paperpile library.

Clicking through on one paper yields a page that looks like this:

A results page from Semantic Scholar. Key figures are pulled out and highlighted for quick viewing. Key topics covered in the work are shown on the right.

This nice, clean interface makes it easy to absorb the content of the paper, including browsing the abstract and key figures. You also have a metrics box in the upper right that shows how many times the paper has been cited, how many of those citations are “highly influential,” and where in the citing papers this paper is referenced. The headings across the middle of the results page break down the sections below: “Figures and Topics”; “Media Mentions,” where SS finds blog posts and online reports that mention the paper; “Citations,” a list of the citing papers; “References,” the papers referenced by this paper; and “Similar Papers,” which covers papers on related topics.

Iris.ai

Iris.ai is a machine-learning tool that uses neural networks to build knowledge graphs about publications. The “about me” section includes a cutesy intro in the first person, as if the algorithm were just a really smart person reading a lot of papers and not a research project. In any case, Iris claims to have “read” at least 77M papers in the core database. There is a good article here detailing the evolution of Iris since her founding in 2016, and the Iris.ai blog is a good place to learn of updates to the method.

When you perform a search with Iris.ai the interface looks like this:

Above: The search interface for Iris.ai.

This looks like a standard search bar, but instead of searching keywords, you either input the URL of a paper you are interested in or write a title and a 300-word paragraph describing a research problem. So there is some work on the front end to get to useful results, but it is possibly worth it if you need to do a deep dive into the literature. Let’s take a look at those results below.

Above: Search results for the paper “CNVkit: Genome-Wide Copy Number Detection and Visualization from Targeted DNA Sequencing.”

OK, this is wild. I’ve never seen a search result like this “map” of the knowledge that results from searching a paper. In this case, I searched the “CNVkit” paper. Each “cell” in this map can be zoomed in on, revealing sub-categories that further break down the knowledge and context of the papers. Below that are the actual papers themselves.

Here I’ve zoomed in on the “Target” cell and then the “Re-sequencing” cell. Now I’m down to the individual papers that make up this “cell.”

I hope you’ve enjoyed this brief tour through some advanced ML-powered literature searching tools. I am going to make an effort to incorporate these into my own work with literature searching and see what difference it makes (maybe a subject for a future post).