Conference report: ISMB 2018 Chicago

In early July, I attended ISMB 2018, a computational biology conference held by the International Society for Computational Biology (ISCB).  The meeting took place in the beautiful Hyatt Regency hotel in downtown Chicago, just across the street from the river and blocks from Navy Pier and Lake Shore Drive.

ISMB 2018 was a huge meeting, with at least 1,500 attendees and up to ten parallel tracks (called “COSIs” in ISCB parlance) at any one time.  The meeting was so big that I always felt as if I were missing something good, no matter which talk I attended (except during the keynotes, when nothing else was scheduled).

One highlight was Steven Salzberg’s excellent keynote, a historical overview of finding genes in the human genome.

Obviously, at a meeting this large, one cannot explore more than a tiny fraction of the talks and posters (though you can now watch many of the ISMB 2018 talks on YouTube if you’re interested).

I want to briefly summarize three talks that I particularly enjoyed:

1) Michael Seiler, H3 Biomedicine.  “Selective small molecule modulation of splicing in cancer”

Up to 60% of hematologic neoplasms (CLL/AML/MDS) contain heterozygous hotspot mutations in key splicing factor genes involved in 3′ splice site recognition.  One such gene is SF3B1, an RNA splicing factor that is recurrently mutated around its HEAT repeats.  The cryo-EM structure of SF3B1 revealed that these mutations cluster in the pre-mRNA-interacting region.  All of the observed mutations lead to alternate 3′ splice sites being chosen during splicing, the so-called “cryptic 3′ splice sites (AG)”.  In particular, the K700E mutant slips the spliceosome to an internal AG roughly 18 nucleotides upstream of the proper splice site.

RNA-seq of 200 CLL patients helped to discover further in-frame deletions in SF3B1, SRSF2, and U2AF1.  All of these mutations are heterozygous, and the cells rely on the WT copy for survival.

Seiler asked whether one could exploit this weakness to attack cancer cells.  The idea is to modulate the activity of SF3B1 with small molecules to disturb the WT function.  At H3 they started from a natural product compound library and optimized for chemistry and binding to find candidates.  One molecule, H3B-8800, showed promise; the compound was optimized toward selective induction of apoptosis in SF3B1-mutant cells in vitro.  When resistance mutations occur, they are all at points of contact with the drug.  At a dose of only 13 nM, around 1% of splicing events were affected, which was enough to kill the cancer cells.  RNA-seq showed that it was mainly the spliceosome genes themselves whose splicing was altered in the presence of the inhibitor, demonstrating a delicate feedback loop between correct splicing and spliceosome gene expression.

2) Curtis Huttenhower,  Harvard University.  “Methods for multi-omics in microbial community population studies”

Huttenhower’s group at Harvard is well known for its contributions to methods for microbiome analysis, collectively known as the “bioBakery.”  In this talk, Huttenhower addressed the fact that the microbiome is increasingly of interest when looking at population health, and that the incidence of chronic immune diseases has been rising around the world over the last few decades.

Huttenhower also spent a good amount of time describing the IBDMDB, the Inflammatory Bowel Disease Multi-omics Database.  The database contains ~200 samples that are complete with six orthogonal data types, including RNA-seq, metagenomes, and metabolomes.  He talked about how bacteria enriched in IBD can be associated with covariation in the abundance of metabolites.  For example, he showed that sphingolipids, carboximidic acids, cholesteryl esters, and more are enriched, while lactones, beta-diketones, and others are depleted, in the guts of Crohn’s disease sufferers.
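Conceptually, this kind of taxon–metabolite covariation analysis comes down to correlating two feature tables across shared samples.  Here is a minimal sketch of that idea in Python; the file names and column layout are hypothetical, and the actual IBDMDB analyses use more sophisticated models than simple pairwise correlation.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical inputs: rows are samples, columns are features
taxa = pd.read_csv("taxa_relative_abundance.csv", index_col=0)   # samples x taxa
metabolites = pd.read_csv("metabolite_levels.csv", index_col=0)  # samples x metabolites

# Restrict both tables to the samples they share
shared = taxa.index.intersection(metabolites.index)
taxa, metabolites = taxa.loc[shared], metabolites.loc[shared]

# Correlate every taxon with every metabolite (Spearman is robust to scale/outliers)
records = []
for t in taxa.columns:
    for m in metabolites.columns:
        rho, p = spearmanr(taxa[t], metabolites[m])
        records.append({"taxon": t, "metabolite": m, "rho": rho, "pval": p})

# Strongest candidate associations first (multiple-testing correction still needed)
results = pd.DataFrame(records).sort_values("pval")
print(results.head())
```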

3) Olga Troyanskaya, Princeton University.  “ML approaches for the data-driven study of human disease”

Troyanskaya’s group at Princeton aims to develop accurate models of the complex processes underlying cellular function.  In this talk, she described efforts in her lab to use machine learning methods to understand how single nucleotide polymorphisms (SNPs) in non-coding regions can affect gene regulation and expression.  She is also interested in how pathways and networks change in different tissues and whether tissue-specific maps can be used to identify disease genes.

In particular, Dr. Troyanskaya described the “DeepSEA” software, which uses a convolutional neural network (CNN) to predict the chromatin consequences of a SNP in its genomic context.   The model is a three-layer CNN that takes 1,000 base pairs of sequence as input.  Each 1,000 bp region is centered on a 200 bp bin with known chromatin annotations.  The training target for each region is a length-919 binary vector marking which chromatin features (TF binding events and the like) are present (1) or absent (0).  The output of the model is the predicted probability of each of the 919 features for the input sequence; the effect of a sequence variant can then be estimated by comparing predictions for the reference and alternate alleles.

Schematic of the DeepSEA model, showing input data, training data, and output.
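For the architecture-minded, below is a rough sketch of this kind of model in PyTorch.  It is a sketch only: the kernel counts, kernel widths, and pooling sizes are illustrative assumptions rather than the exact published DeepSEA hyperparameters.  The input is a one-hot encoded 1,000 bp sequence (4 x 1000) and the output is 919 sigmoid probabilities, one per chromatin feature.

```python
import torch
import torch.nn as nn

class DeepSEALike(nn.Module):
    """Three convolutional layers over one-hot DNA, 919 sigmoid outputs."""
    def __init__(self, seq_len=1000, n_features=919):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, 320, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(320, 480, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(480, 960, kernel_size=8), nn.ReLU(),
        )
        # Work out the flattened size after the conv stack
        with torch.no_grad():
            n_flat = self.conv(torch.zeros(1, 4, seq_len)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_flat, 925), nn.ReLU(),
            nn.Linear(925, n_features), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, 4, 1000) one-hot DNA
        return self.classifier(self.conv(x))   # (batch, 919) feature probabilities

model = DeepSEALike()
probs = model(torch.zeros(2, 4, 1000))  # dummy batch of two sequences
print(probs.shape)                      # torch.Size([2, 919])
```

To score a variant, you would run the reference and alternate alleles through the trained model and compare the 919 predicted probabilities.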

The DeepSEA model can be used to predict important sequence variants and even eQTLs for further study or to prioritize a list of known non-coding sequence variants.

Developers versus consumers of bioinformatics analysis tools

Life in the middle

As a bioinformatics applications scientist, I work in a middle ground between those who develop code for analyzing next-gen sequencing data and those who consume that analysis.   The developers are often people trained in computer science, mathematics, and statistics.   The consumers are often people trained in biology and medicine.    There is some overlap, of course, but if you’ll allow a broad generalization, I think the two groups (developers and consumers) are separated by large cultural differences in science.

Ensemble focus vs. single-gene focus

I think one of the biggest differences, in my experience, is the approach to conceptualizing the results of a next-gen sequencing experiment (e.g., RNA-seq).    People on the methods side tend to think in terms of ensembles and distributions.   They are interested in how the variation observed across all 30,000 genes can be modeled and used to estimate differential expression.   They are interested in concepts like shrinkage estimators, Bayesian priors, and hypothesis weighting.  The table of differentially expressed features is thought to have meaning mainly in the statistical analysis of pathway enrichment (another ensemble).

Conversely, biologists have a drastically different view.   Often they care about a single gene or a handful of genes and how those genes vary across conditions of interest in the system that they are studying.  This is a rational response to the complexity of biological systems; no one can keep the workings of hundreds of genes in mind.  In order to make progress, you must focus.  However, this narrow focus leads investigators to sometimes cherry pick results pertaining to their ‘pet’ genes.  This can invest those results with more meaning than is warranted from a single experiment.

The gene-level focus of biologists also leads to clashes with the ensemble focus of bioinformatics software.  For example, major DE analysis packages like DESeq2 do not have convenience functions for making volcano plots, even though I’ve found that those kinds of plots are the most useful and easiest to understand for biologists.   Sleuth does have a volcano plotting function, but it doesn’t allow for labeling genes.  In my experience, however, biologists want a high-res figure with common-name gene labels (not Ensembl transcript IDs) that they can circle and consider for wet lab validation.
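To make the wish concrete, here is a minimal sketch of a labeled volcano plot in Python (the same idea works in R with ggplot2/ggrepel).  It assumes a DE results table exported to CSV with hypothetical columns gene_name, log2FoldChange, and padj; nothing here is a built-in DESeq2 or sleuth function.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical DE results table (e.g., exported from DESeq2) with a gene_name column
res = pd.read_csv("de_results.csv").dropna(subset=["padj"])
res["neg_log10_padj"] = -np.log10(res["padj"])
sig = (res["padj"] < 0.05) & (res["log2FoldChange"].abs() > 1)

fig, ax = plt.subplots(figsize=(6, 5))
ax.scatter(res["log2FoldChange"], res["neg_log10_padj"],
           c=np.where(sig, "firebrick", "grey"), s=8)
ax.set_xlabel("log2 fold change")
ax.set_ylabel("-log10 adjusted p-value")

# Label the top hits with common gene names, not transcript IDs
for _, row in res[sig].nsmallest(10, "padj").iterrows():
    ax.annotate(row["gene_name"],
                (row["log2FoldChange"], row["neg_log10_padj"]), fontsize=8)

fig.savefig("volcano.png", dpi=300)  # high-res figure for wet-lab colleagues
```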

New software to address the divide?

I am hopeful about the recent release of the new “bcbioRNASeq” package from the Harvard Chan School bioinformatics core (developers of the ‘bcbio’ RNA-seq pipeline software).   It appears to be a step towards making it easier for people like me to walk on both sides of this cultural divide.    The package takes all of the outputs of the ‘bcbio’ pipeline and transforms them into an accessible S4 object that can be operated on quickly and simply within R.

Most importantly, the ‘bcbioRNASeq’ package allows for improved graphics and plotting that make communication with biologists easier.  For example, the new MA and volcano plots appear to be based on ‘ggplot2’ graphics and are quite pretty (and have text label options, though that isn’t shown here).

I still need to become familiar with the ‘bcbioRNASeq’ package, but it looks quite promising.   ‘pcaExplorer’ is another R/Bioconductor package that I feel does a great job of making accessible RNA-seq reports quickly and easily.  I suspect we will see continued improvements to make plots prettier and more informative, with an increasing emphasis on interactive, online plots and notebooks rather than static images.

Breakthrough advances in 2018 so far: flu, germs, and cancer

2018 medicine breakthrough review!

So far this year has seen some important research breakthroughs in several key areas of health and medicine.  I want to briefly describe some of what we’ve seen in just the first few months of 2018.

Flu

A pharmaceutical company in Japan has released phase 3 trial results showing that its drug, Xofluza, can effectively kill the influenza virus in infected humans in just 24 hours.  And it can do this with a single dose, compared to the 10-dose, five-day regimen of Tamiflu.  The drug works by inhibiting an endonuclease needed for replication of the virus.

Germs

It is common knowledge that antibiotics are over-prescribed and over-used.  This fact has led to the rise of MRSA and other resistant bacteria which threaten human health.  Although it is thought that bacteria could be a source of novel antibiotics since they are in constant chemical warfare with each other, most bacteria aren’t culture-friendly in the lab and so researchers haven’t been looking at them for leads.  Until now.

Malacidin drugs kill multidrug-resistant S. aureus in tests on rats.

By applying culture-independent sequencing to soil bacterial diversity, researchers were able to screen for gene clusters associated with calcium-binding motifs known for antibiotic activity.   The result was the discovery of a novel class of lipopeptides, called malacidins A and B, which showed potent activity against MRSA in a rat skin-infection model.
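At its core, this kind of sequence-based screen amounts to scanning predicted protein sequences for a conserved motif and flagging the matches for follow-up.  Here is a toy sketch of that idea in Python using Biopython; the motif pattern and file name are hypothetical placeholders, not the actual search used in the study.

```python
import re
from Bio import SeqIO  # Biopython

# Hypothetical Asp-X-Asp-Gly style calcium-binding motif; purely illustrative
MOTIF = re.compile(r"D.DG")

hits = []
for record in SeqIO.parse("predicted_proteins.faa", "fasta"):
    for match in MOTIF.finditer(str(record.seq)):
        hits.append((record.id, match.start(), match.group()))

# Each hit is a candidate for follow-up on the source contig / gene cluster
print(f"{len(hits)} motif matches found across the predicted proteins")
```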

The researchers estimate that 99% of bacterial natural-product antibiotic compounds remain unexplored at present.

Cancer

2017 and 2018 have seen some major advances in cancer treatment.   It seems that the field is moving away from a focus on small-molecule drugs towards harnessing the patient’s own immune system to attack cancer.  The CAR-T therapies for pediatric leukemia appear extremely promising, and these kinds of therapies are now in trials for a wide range of blood and solid tumors.

A great summary of the advances being made is available here from the Fred Hutchinson Cancer Research Center.   Here is how Dr. Gilliland, President of Fred Hutch, begins his review of the advances:

I’ve gone on record to say that by 2025, cancer researchers will have developed curative therapeutic approaches for most if not all cancers.

I took some flak for putting that stake in the ground. But we in the cancer research field are making incredible strides toward better and safer, potentially curative treatments for cancer, and I’m excited for what’s next. I believe that we must set a high bar, execute and implement — that there should be no excuses for not advancing the field at that pace.

This is a stunning statement on its own, and it is made even more so because it is usually the scientists in the day-to-day trenches of research who are the most pessimistic about the possibility of rapid advances.

Additionally, an important paper came out recently proposing a novel paradigm for understanding and modeling cancer incidence with age.  For a long time the dominant model has been the “two-hit” hypothesis, which predicts that clinically observable cancers arise when a cell acquires sufficient mutations in tumor-suppressor genes to become a tumor.

This paper challenges that notion and shows that a model of declining thymic function (the thymus produces T-cells) over time better describes the incidence of cancers with age.   The better fit of this model leads to the conclusion that cancers are continually arising in our bodies, but that a properly functioning immune system roots them out and prevents clinical disease from emerging.  This model also helps explain why novel cancer immunotherapies are so potent and why the focus has shifted to supporting and activating T-cells.

Declining T cell production leads to increasing disease incidence with age.
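To make the contrast concrete: the classic multiple-hit picture predicts incidence rising roughly as a power of age, while the immune-decline picture ties risk to falling T cell output, which declines roughly exponentially as the thymus involutes.  Below is a purely illustrative sketch of the two functional forms in Python; the exponent and the halving time are made-up parameters for plotting, not fitted values from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

age = np.linspace(20, 90, 200)

# Multiple-hit picture: incidence grows roughly as a power of age (k is illustrative)
k = 6
multistage = (age / 90.0) ** (k - 1)

# Immune-decline picture: risk inversely proportional to T cell output,
# with output assumed (illustratively) to halve every ~16 years
t_cell_output = 2.0 ** (-(age - 20) / 16.0)
immune_decline = 1.0 / t_cell_output
immune_decline /= immune_decline.max()  # normalize for side-by-side comparison

plt.plot(age, multistage, label="power law (multiple-hit model)")
plt.plot(age, immune_decline, label="1 / T cell output (immune decline)")
plt.xlabel("age (years)")
plt.ylabel("relative incidence (arbitrary units)")
plt.legend()
plt.savefig("incidence_models.png", dpi=150)
```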

Are deep neural nets “Software 2.0”?

Image from: https://cdn.edureka.co/blog/wp-content/uploads/2017/05/Deep-Neural-Network-What-is-Deep-Learning-Edureka.png

Recent blog posts by Andrej Karpathy at Medium.com and Pete Warden at PeteWarden.com have caused a paradigm shift in the way I think about neural nets.  Rather than viewing them as powerful machine learning tools, the authors suggest that we should think of neural nets, and in particular deep convolutional nets, as ‘self-writing programs.’   Hence the term “Software 2.0.”

It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data than to explicitly write the program. A large portion of programmers of tomorrow do not maintain complex software repositories, write intricate programs, or analyze their running times. They collect, clean, manipulate, label, analyze and visualize data that feeds neural networks.   — Andrej Karpathy, Medium.com

I found this to be a dramatic reversal in my thinking about these techniques, but it opens up a deeper understanding and is much more intuitive.  The fact is that combinations of artificial neurons can be used to model any logical operation.  Therefore you can conceptualize training a neural net as searching program space for an optimal program that behaves in the way you specify: you provide the inputs and desired outputs, and training searches for the optimal program.
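The “neurons can encode logic” point is easy to see with hand-chosen weights.  Here is a tiny NumPy illustration: a single threshold neuron computes AND or OR, and composing two layers gives XOR, which no single neuron can compute on its own.

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron with a hard threshold (step) activation."""
    return (np.dot(x, w) + b > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all possible binary inputs

AND  = neuron(X, w=[1, 1],   b=-1.5)  # fires only when both inputs are 1
OR   = neuron(X, w=[1, 1],   b=-0.5)  # fires when either input is 1
NAND = neuron(X, w=[-1, -1], b=1.5)   # negated AND

# XOR needs two layers: XOR(a, b) = AND(OR(a, b), NAND(a, b))
hidden = np.column_stack([OR, NAND])
XOR = neuron(hidden, w=[1, 1], b=-1.5)

print("AND:", AND)  # [0 0 0 1]
print("OR: ", OR)   # [0 1 1 1]
print("XOR:", XOR)  # [0 1 1 0]
```

Training simply replaces those hand-chosen weights with an automated search over weight space, which is what makes the “searching program space” framing feel apt.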

This stands in contrast to the “Software 1.0” paradigm where the programmer uses her skill and experience to conceptualize the right combination of specific instructions to produce the desired behavior.   While it seems certain that Software 1.0 and 2.0 will co-exist for a long time, this new way of understanding deep learning is crucial and exciting, in my opinion.