Breakthrough advances in 2018 so far: flu, germs, and cancer

2018 medicine breakthrough review!

The first few months of 2018 have already produced some important research advances in several key areas of health and medicine.  I want to briefly describe some of what we’ve seen so far this year.

Flu

A pharmaceutical company in Japan has released phase 3 trial results showing that its drug, Xofluza, can effectively stop viral shedding within about 24 hours in infected patients.  And it can do this with a single dose, compared with the 10-dose, five-day regimen of Tamiflu.  The drug works by inhibiting the cap-dependent endonuclease the virus needs for replication.

Germs

It is common knowledge that antibiotics are over-prescribed and over-used.  This has driven the rise of MRSA and other resistant bacteria that threaten human health.  Bacteria themselves are thought to be a promising source of novel antibiotics, since they are in constant chemical warfare with one another, but most bacteria aren’t culture-friendly in the lab, so researchers haven’t been mining them for leads.  Until now.

Malacidin antibiotics kill multi-drug-resistant S. aureus in tests on rats.

By applying culture-independent sequencing approaches to soil bacterial diversity, researchers were able to screen for gene clusters associated with calcium-binding motifs known for antibiotic activity.   The result was the discovery of a novel class of lipopeptides, malacidins A and B, which showed potent activity against MRSA in skin-infection models in rats.

The researchers estimate that 99% of bacterial natural-product antibiotic compounds remain unexplored at present.

Cancer

2017 and 2018 have seen some major advances in cancer treatment.   The field seems to be moving away from a focus on small-molecule drugs and towards harnessing the patient’s own immune system to attack cancer.  The CAR-T therapies for pediatric leukemia appear extremely promising, and therapies of this kind are now in trials for a wide range of blood and solid tumors.

A great summary of the advances being made is available here from the Fred Hutchinson Cancer Research Center.   Here is how Dr. Gilliland, President of Fred Hutch, begins his review of the advances:

I’ve gone on record to say that by 2025, cancer researchers will have developed curative therapeutic approaches for most if not all cancers.

I took some flak for putting that stake in the ground. But we in the cancer research field are making incredible strides toward better and safer, potentially curative treatments for cancer, and I’m excited for what’s next. I believe that we must set a high bar, execute and implement — that there should be no excuses for not advancing the field at that pace.

This is a stunning statement on its own, made even more so by the fact that it is usually the scientists in the day-to-day trenches of research who are most pessimistic about the possibility of rapid advances.

Additionally, an important paper came out recently proposing a novel paradigm for understanding and modeling cancer incidence with age.  For a long time the dominant model has been the “two-hit” (mutation accumulation) hypothesis, which predicts that clinically observable cancers arise once a cell acquires sufficient mutations in tumor-suppressor genes to become a tumor.

This paper challenges that notion and shows that a model of declining thymic function over time (the thymus produces T-cells) describes the incidence of cancers with age better.   Under this better-fitting model, cancers are continually arising in our bodies, but a properly functioning immune system roots them out and prevents clinical disease from emerging.  The model also helps explain why the novel cancer immunotherapies are so potent and why the focus has shifted to supporting and activating T-cells.

Declining T cell production leads to increasing disease incidence with age.
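
To make the contrast between the two pictures concrete, here is a purely illustrative python sketch.  This is my own toy comparison, not the paper’s model or fit: the number of “hits,” the 16-year halving time for thymic output, and the normalization are all assumptions chosen just to show the qualitative shapes of the two curves.

import numpy as np

age = np.linspace(20.0, 80.0, 61)

# Mutation-accumulation picture: incidence rising as a power of age
# (exponent = number of hits - 1); k = 6 is an arbitrary illustrative choice.
k = 6
incidence_mutation = (age / age[-1]) ** (k - 1)

# Immune-decline picture: thymic T-cell output falls roughly exponentially with
# age, so the chance a nascent tumor escapes immune control rises as 1 / T(age).
# The 16-year halving time is a hypothetical value used only for illustration.
t_half = 16.0
t_cell_output = np.exp(-np.log(2) * age / t_half)
incidence_immune = 1.0 / t_cell_output

# Normalize both curves to their value at age 80 so the shapes can be compared.
incidence_mutation /= incidence_mutation[-1]
incidence_immune /= incidence_immune[-1]

for a, m, i in zip(age[::10], incidence_mutation[::10], incidence_immune[::10]):
    print(f"age {a:3.0f}:  mutation model {m:.3f}   immune model {i:.3f}")

The point is simply that a mutation-only picture gives a power-law rise with age, while tying risk to exponentially declining T-cell output gives a curve that accelerates late in life; the paper’s actual model and parameter estimates are in the publication.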


Is CB-5083 a promising new weapon against multiple myeloma?

Why care about p97?

In my postdoc work, I participated in a large team effort to design a small-molecule inhibitor of the p97 AAA-ATPase.

A crystal structure of the p97 ATPase.  The D2 domain is shown in dark blue.

The funding for this project came from the National Cancer Institute (NCI) and was premised on the idea that inhibiting p97 in certain types of cancer cells that depend heavily on the endoplasmic-reticulum-associated degradation (ERAD) pathway would trigger the unfolded-protein response and apoptosis pathways within the rapidly growing tumor cell population.   This is because p97 is a critical regulator and component of ERAD; when it is inhibited, the cell experiences unbalanced protein homeostasis and unfolded-protein stress.

Drug design is an extremely challenging problem, and even with a large group of researchers it took us several years to find a compound that showed promising inhibition of p97.   Our results were published in ACS Med Chem Letters in 2016.   The compound we discovered, indole amide 3, has high solubility, permeability, and stability, and it binds an allosteric site on the D2 domain with sub-micromolar affinity.   Unfortunately, it just wasn’t potent enough to be active in vivo.

A different approach yields new promise

At around the same time we were developing our allosteric inhibitor series, another group was developing an ATP-competitive D2-domain inhibitor of p97, called CB-5083.  In contrast to our compound, this one binds directly to the ATP site of the D2 domain with nanomolar affinity.

CB-5083.

The compound also demonstrated potent and specific p97 inhibition in mouse tumor xenograft models.

An advance in myeloma cancer therapy

A more recent paper (Nov 2017) shows activity for CB-5083 against multiple myeloma (MM) cell lines and in vivo MM models.  From the abstract:

CB-5083 decreases viability in multiple myeloma cell lines and patient-derived multiple myeloma cells, including those with background proteasome inhibitor (PI) resistance. CB-5083 has a unique mechanism of action that combines well with PIs, which is likely owing to the p97-dependent retro-translocation of the transcription factor, Nrf1, which transcribes proteasome subunit genes following exposure to a PI. In vivo studies using clinically relevant multiple myeloma models demonstrate that single-agent CB-5083 inhibits tumor growth and combines well with multiple myeloma standard-of-care agents.

Standard-of-care agents like bortezomib are proteasome inhibitors (PIs).  Using a PI results in broad inhibition of the proteasome across many cell types, not just tumor cells, and thus a high likelihood of side effects.  p97 acts upstream of the proteasome, and targeting it is narrower in scope; it still hits MM cells hard because they rely so heavily on the protein-homeostasis activities of the ERAD pathway.

Hope for Phase 1 success

CB-5083 was also found to enhance the activity of bortezomib both in vitro and in vivo, and it was active in bortezomib-resistant models of MM.  This paves the way for a potential combination therapy, or for another line of therapy if resistance develops after earlier treatment with PIs.   Phase 1 clinical trials are now ongoing in patients who have exhausted other medications.  Hopefully CB-5083 makes it to market soon, if the trials prove it safe and efficacious, so that oncologists and patients have another weapon in the fight against MM.

Genomic landscape of metastatic cancer

Integrative genomics sheds new light on metastatic cancer

A new study from the University of Michigan Comprehensive Cancer Center has just been released that takes an in-depth look at the genomics of metastatic cancer, as opposed to primary tumors.   The work involved DNA- and RNA-Seq of solid metastatic tumors from 500 adult patients, along with sequencing of matched normal tissue to distinguish somatic from germline variants.

tl;dr:

A good overview of the study for the scientific layperson can be found in this press release.  It summarizes the key findings (many of which are striking and novel):

  • A significant increase in mutational burden of metastatic tumors vs. primary tumors.
  • A long-tailed distribution of mutational frequencies (i.e., few genes were mutated at a high rate, yet many genes were mutated).
  • About twelve percent of patients harbored germline variants that are suspected to predispose to cancer and metastasis, and 75% of those variants were in DNA repair pathways.
  • Across the cohort, 37% of patient tumors harbored gene fusions that either drove metastasis or suppressed the cell’s anti-tumor functions.
  • RNA-Seq showed that metastatic tumors are significantly de-differentiated and fall into two classes:  proliferative and EMT-like (epithelial-to-mesenchymal transition).

A brief look at the data

This study provides a high-level view of the mutational burden of metastatic cancer vis-à-vis primary tumors.  Figure 1C from the paper compares mutation rates across tumor types between the TCGA (The Cancer Genome Atlas) primary tumors and the MET500 metastatic cohort.

Mutational burden in metastatic cancer compared to primary tumors.


Here we can see that in most cases (colored bars), metastatic cancers had statistically significant increases in mutation rates.   The figure also shows that tumor types with low mutation rates in the primary setting “sped up” far more than those that already had high rates.

Supplemental Figure 1d (below) shows how often key tumor-suppressor genes and oncogenes are altered in metastatic vs. primary tumors.  TP53 is altered more frequently in metastatic thyroid, colon, lung, prostate, breast, and bladder cancers.   PTEN is mutated more often in prostate tumors.  GNAS and PIK3CA are mutated more often in thymoma, although this finding does not reach significance.  KRAS is altered more often in colon and esophageal cancers, but again, these findings do not reach significance after correction for multiple testing.

Comparison of genetic alteration frequencies in metastatic and primary tumors.


One other figure I’d like to highlight briefly is Figure 3C from the paper, shown below:

Molecular structure of novel, potentially activating gene fusions in the metastatic tumors.

I wanted to mention this figure to illustrate the terrifying complexity of cancer.   Knowing which oncogenes are mutated, in which positions, and what effects those mutations have on gene-expression networks is not enough to understand tumor evolution and metastasis.  There are also entirely new fusion genes being created that do new things, and these are unique to each tumor.   None of the structures above had ever been observed before, and yet they were all seen in a survey of just 500 cancers.   In fact, ~40% of the tumors in the study cohort harbored at least one fusion suspected to be pathogenic.

There is much more to this work, but I will leave it to interested readers to go read the entire study.   I think this work is tremendously important and novel, and that it represents the future of personalized medicine:  a patient undergoing treatment for cancer will have their tumor or tumors biopsied and sequenced repeatedly over time, to understand how the disease has evolved and is evolving, and to ascertain which weaknesses can be exploited for successful treatment.

Filtering variants for cancer mutational signature analysis

Recently, I’ve been working to help prepare a manuscript on Vestibular Schwannomas (VS), a type of benign tumor of the myelin-forming Schwann cells along the nerve of the inner ear.  I’ve been thinking a lot about strategies for filtering exome variant calls to feed into mutational signature analysis.

Mutational signatures are important because they describe the types of mutational processes operating on the genome of the tumor cells.  Many of these processes are known (see the COSMIC database); however, some are entirely novel.  The variants used to calculate such signatures are somatic in nature and have to be carefully curated from the raw variant calls you get from a pipeline like GATK.

Looking at the existing literature, I find that there is no common or “best practices” methodology for filtering variants in whole-exome data.  Some groups are very stringent, others less so.  The first step is usually just to subtract the normal variant calls from the tumor calls, but there are further filtering steps that should be undertaken.

If I had to summarize the commonalities among the literature approaches to somatic variant filtering, they would include the following (a minimal code sketch of these filters appears after the list):

1) removing variants that are present in dbSNP, 1000 Genomes, or other non-cancer exome data
2) taking only variants in coding regions (exons) or splice sites
3) requiring that a variant appear in more than X reads in the tumor and in fewer than Y reads in the normal (generally ~5 and ~2, respectively)
4) subtracting “normals” from “tumor” (either pooled normals, or paired)
5) requiring a minimum depth of coverage at the variant position (usually > 10X)
6) throwing away reads from low-mapping-quality (MQ) regions
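
To make those criteria concrete, here is a minimal python sketch of what such a filter might look like.  The record fields and thresholds are hypothetical illustrations rather than a published pipeline; in practice these values would be parsed from the tumor and matched-normal VCFs produced by GATK or a similar caller.

# Hypothetical somatic variant filter illustrating the criteria listed above.
# Each candidate call is a plain dict; a real pipeline would populate these
# fields from tumor/normal VCF records.

MIN_TUMOR_ALT_READS = 5    # criterion 3: alt-allele support required in the tumor
MAX_NORMAL_ALT_READS = 2   # criteria 3/4: alt reads tolerated in the matched normal
MIN_DEPTH = 10             # criterion 5: minimum coverage at the variant position
MIN_MAPQ = 30              # criterion 6: minimum mapping quality (assumed cutoff)

def passes_filters(call, known_germline_sites):
    """Return True if a candidate somatic call survives the filters above."""
    # 1) drop anything present in dbSNP / 1000 Genomes / other non-cancer data
    if (call["chrom"], call["pos"]) in known_germline_sites:
        return False
    # 2) keep only coding or splice-site variants
    if call["region"] not in {"exonic", "splicing"}:
        return False
    # 3/4) require alt support in the tumor and near-absence in the normal
    if call["tumor_alt_reads"] < MIN_TUMOR_ALT_READS:
        return False
    if call["normal_alt_reads"] > MAX_NORMAL_ALT_READS:
        return False
    # 5) require a minimum depth of coverage in both samples
    if min(call["tumor_depth"], call["normal_depth"]) < MIN_DEPTH:
        return False
    # 6) drop calls from low mapping-quality regions
    if call["mapping_quality"] < MIN_MAPQ:
        return False
    return True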

Some papers consider only non-synonymous variants, but for mutational signatures it makes sense to me to take all validated variants (especially in exome data, because you are starting with fewer raw variant calls than with whole-genome data).

As for the actual numbers of variants that are “fed” into the mutational signature analysis, most papers (surprisingly) do not report this directly.  If you dig around in the supplementary information, you can sometimes find it indirectly.

It looks like, in general, the total number of variants is somewhere around 10,000 for papers dealing with specific tumor types (as opposed to pan-cancer analyses of public databases).  Several papers end up with on the order of 1,000 variants per tumor (ranging from 1,000 up to 7,000), so with 10 tumors sequenced, that would be roughly 10,000 filtered, high-confidence SNVs.

If you’re working on exome mutational signature analysis and you have your own filtering criteria, I’d love for you to share them in the comments.

Hands-on with cancer mutational signatures, part 2

In this second part of the “Hands On” series, I want to show how to create the input for the MatLab mutational signature framework from the output of my python code that prepares the SNP data for analysis.

First, we need to create a MatLab .mat file as input to the program.   The code expects an input file that contains a set of mutational catalogues plus metadata about the cancer type and the mutational types and subtypes represented in the data.

Fig 1. The required data types within one .mat file to run the framework.

As you can see from Fig 1, you need to provide a 96 by X matrix, where X is the number of samples in your mutational catalogue.  You also need an X by 1 cell array specifying the sample names, a 96 by 1 cell array specifying the subtypes (ACA, ACC, ACG, etc.), and a 96 by 1 cell array specifying the types (C>A, C>A, C>A, etc.).  These must be in the same row order as the “originalGenomes” matrix or the results won’t make sense.
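
If it helps to see where the 96 rows come from, they are just the six pyrimidine-centered substitution classes crossed with the four possible 5' bases and the four possible 3' bases (6 × 4 × 4 = 96).  Here is a small python sketch that enumerates them; the ordering is arbitrary and may not match the order my code or the framework writes out, so treat it as illustration only.

# Enumerate the 96 trinucleotide mutation categories: 6 substitution classes
# (with the mutated base expressed as a pyrimidine) x 16 flanking contexts.
bases = "ACGT"
substitutions = ["C>A", "C>G", "C>T", "T>A", "T>C", "T>G"]

types, subtypes = [], []
for sub in substitutions:
    ref = sub[0]  # the mutated (pyrimidine) base
    for five_prime in bases:
        for three_prime in bases:
            types.append(sub)                                 # e.g. "C>A"
            subtypes.append(five_prime + ref + three_prime)   # e.g. "ACA"

print(len(types), len(subtypes))  # 96 96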

My code outputs .csv files for all of these needed inputs.  For example, when you run my python code on your SNP list, you will get “subtypes.csv”, “types.csv”, and “samples.csv” files, plus a “samples_by_counts.csv” matrix (i.e., originalGenomes), corresponding to the cell arrays and matrix in Fig 1.

Now, the challenge is to get those CSV files into MatLab.  You should have downloaded and installed MatLab on your PC.  Open MatLab and select “Import Data.”

Fig 2. Select the “Import Data” button.

Browse to one of the output CSV files and select it.  It will open in a new window like in Fig 3 below:

Fig 3. The data import window from MatLab.

Be sure to select the correct data type in the “imported data” section.  Also, select only row 1 for import (row 2 is empty).  Once you’re finished, click Import Selection.  This will create a 1×96 cell called “types”, which looks like this:

Fig 4. The new imported cell data “types.”

We’re almost done, but we have to change the cell from 1×96 to 96×1.  To do this, just double-click it and select “transpose” in the variable editor.   Now repeat this process for the other CSV input files, being sure to select “matrix” as the data type for the “samples_by_counts” file.   Pay special attention to making sure the dimensions and data types are correct.
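
Alternatively, if clicking through the import dialog for every file gets tedious, the same .mat file can in principle be assembled with a short python script using scipy.  Treat this as a sketch built on assumptions: the variable names (originalGenomes, sampleNames, subtypes, types, cancerType) are what I believe the framework expects based on Fig 1, and the CSV parsing assumes the simple layout of my output files, so check the result against the example .mat files shipped with the download before trusting it.

import numpy as np
from scipy.io import savemat

# Load the CSV outputs of the SNP-preparation step (file names as produced above).
original_genomes = np.loadtxt("samples_by_counts.csv", delimiter=",")  # 96 x N; transpose if needed
sample_names = np.loadtxt("samples.csv", delimiter=",", dtype=str, ndmin=1)
subtypes = np.loadtxt("subtypes.csv", delimiter=",", dtype=str, ndmin=1)
types = np.loadtxt("types.csv", delimiter=",", dtype=str, ndmin=1)

# numpy arrays of dtype=object are written out by savemat as MatLab cell arrays.
mat_contents = {
    "originalGenomes": original_genomes,                         # 96 x N double matrix
    "sampleNames": sample_names.astype(object).reshape(-1, 1),   # N x 1 cell array
    "subtypes": subtypes.astype(object).reshape(-1, 1),          # 96 x 1 cell array
    "types": types.astype(object).reshape(-1, 1),                # 96 x 1 cell array
    "cancerType": "Vestibular Schwannoma",                       # assumed metadata field
}

savemat("mutational_catalogue_input.mat", mat_contents)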

Once you have everything in place, you should be ready to run the mutational signature analysis framework from the paper.   To do this, open the “example2.m” MatLab script included with the download.  In the “Define parameters” section, change the file paths to point to the .mat file you just created:

Fig 5. Define your parameters for the signature analysis.


As you can see in Fig 5, I’ve specified 100 iterations per core, a range of possible signatures from 1 to 8, and the relative paths to the input and output files.  The authors say that ~1000 iterations are necessary for accuracy, but I’ve found little difference in the predictions between 50 and 500 iterations.   I would perform as many iterations as possible, given your constraints on time and computational power.

Note also that you may need to change the part of the code that sets up the parallel computing pool.  Since MatLab 2014, I believe, the “matlabpool” command has been deprecated.   Substituting the newer “parpool”/“gcp” commands seems to work fine for me (Mac OS X 10.10, 8-core Macbook Pro Retina) as follows:

if isempty(gcp('nocreate')) % check whether a parallel pool is already running

    parpool('local'); % open the default local pool if one is not already open

end

This post is getting lengthy, so I will stop here and post one more part later about how to compare the signatures you calculate with the COSMIC database using the cosine similarity metric.
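
As a quick preview of that comparison, the cosine similarity metric itself is simple.  Here is a minimal python sketch, assuming you have your extracted signature and a COSMIC reference signature as 96-element vectors in the same category order:

import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two mutational-signature vectors (96 categories each)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example usage (with hypothetical length-96 arrays): a score of 1.0 means the
# signature shapes are identical, while values near 0 mean they are unrelated.
# score = cosine_similarity(my_signature, cosmic_signature_1)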