Category Archives: computing

My favorite talks from GLBIO2017 in Chicago

GLBIO2017

I just got back from the Great Lakes Bioinformatics Conference 2017 (GLBIO2017) on the University of Illinois at Chicago (UIC) campus.  It was a great meeting, and I really enjoyed the quality of the research presented as well as the atmosphere of the campus and neighborhood.

I was very surprised by just how nice the Chicago “West Loop” neighborhood near Randolph Street and down towards Greektown really is.  I had some great meals, including a memorable Italian dinner at Formentos.

But the purpose of this post is to briefly describe a few of my favorite talks from the meeting.  So here goes, in no particular order:

Kevin White, Tempus Labs:

I was really impressed with Kevin White’s GLBIO2017 talk and demo of his company’s technology (despite the ongoing technical A/V issues!).  Tempus Labs is both a clinical sequencing company and an informatics company focused on cancer treatment; it seeks to pull together all of the disparate pieces of patient data that float around in EHR databases and are often not connected in meaningful ways.

The company sequences patient samples (whole exome and whole genome) and then also hoovers up reams of patient EHR data using Optical Character Recognition (OCR), Natural Language Processing (NLP), and human expert curation to turn the free-form flat text of medical records from different clinics and systems into a form of “tidy data” that can be accessed from an internal database.

Clinical and genomic data for each patient are then combined in a deep-learning system that looks at treatments and outcomes for other similar patients and presents the clinician with charts showing how patients in similar circumstances fared with varying treatments, given certain facts of genotype, tumor progression, etc…  The system is pitched as “decision support” rather than artificial “decision making”: a human doctor is still the primary decider of treatment for each patient, but the Tempus deep-learning system provides expert support and suggests probabilities of success at each critical care decision point.

The system also learns and identifies ongoing clinical trials, and will present relevant trials to the clinician so that patients can be informed of possibly beneficial trials that they can join.

Murat Eren,  merenlab.org

Murat Eren’s talk on tracking microbial colonization in fecal microbiome transplantation (i.e., “poop pills”) was excellent and very exciting.  Although the “n” was small (just 4 donors and 2 recipients) he showed some very interesting results from transferring fecal microbiota (FM) from healthy individuals to those with an inflammatory bowel disease.

Among the interesting results was the fact that he was able to assemble 97 metagenomes from the 4 donor samples.  Following the recipients at 4 and 8 weeks post-transplant showed that the microbial genomes could be classed into those that transferred and colonized permissively (both recipients), those that colonized one recipient or the other, and those that failed to colonize either.  Taxonomy alone did not explain why some microbes colonized easily while others failed.

He also showed that 8 weeks post-transplant the unhealthy recipients’ symptoms had improved, and that in a PCA of gut community composition against 151 healthy-gut samples from the Human Microbiome Project (HMP), the recipients had moved from being extreme outliers on day 0 into the “healthy” HMP cluster.

He also investigated differential gene-function enrichment between the permissive colonizers and the microbes that never colonized the recipients’ guts, and found that sporulation genes may be a negative factor in colonization success.  He proposed that the recent and notable failure of the Seres microbiome drug in clinical trials may be owing to the fact that the company killed the live cultures in favor of more stable spore-forming strains when formulating the drug; his work would suggest that these strains are less successful at colonizing new hosts.

Bo Zhang, 3D genome browser

With the ever-increasing volume and complexity of genomic and regulatory data, there is a need for accessible interfaces to it.  Bo Zhang’s group at Penn State has worked to make available a new type of genome browser that focuses on the 3D structure of the genome, pulling together disparate data types including chromatin interaction data, ChIP-Seq, RNA-Seq, etc…  You can browse a complete view of the regulatory landscape and 3D architecture of any region of the genome, and check the expression of any queried gene across hundreds of tissue/cell types measured by the ENCODE consortium.  On the virtual 4C page, they provide multiple methods to link distal cis-regulatory elements with their potential target genes, including virtual 4C, ChIA-PET, and cross-cell-type correlation of proximal and distal DHSs.

The 3D Genome Browser flow chart.

 

All in all, GLBIO2017 was a very enjoyable and informative meeting where I met a lot of great colleagues and learned much.  I am looking forward to next year!

Search speed comparison: naive exact vs. boyer-moore vs. k-mer index

Recently, I’ve been working my way through Ben Langmead’s excellent introduction to “Algorithms for DNA Sequencing” on Coursera.  The class is a fascinating and well-taught intro to concepts in DNA short-read alignment and assembly.

As part of the course, we implement or modify Python code for several simple matching algorithms, including the “naive exact” matching (NEM) method, the Boyer-Moore (BM) method, and a k-mer index approach.

I was curious about speed, so I made a figure showing the computational time each approach takes.  P and T refer to the lengths of the pattern (the short read to be aligned) and the text (the genome to align to), respectively.

Figure 1. Comparing the speed of the NEM, BM, and K-mer search methods on long and short patterns (P) and texts (T). The y-axis is on a log-scale.

Note that the y-axis is a log scale in units of microseconds.  Right away, it is obvious that k-mer index methods are orders of magnitude faster than ‘online’ methods like NEM and BM.

Also of interest is the fact that as the pattern gets shorter, the advantage of BM’s preprocessing of the pattern gets smaller.  You can see that going from a pattern length of 30 down to 11 negates any advantage of BM searching.
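If you are curious what such a comparison looks like in code, here is a rough Python sketch of my own (not the course code, and with Boyer-Moore omitted for brevity): a naive exact matcher, a simple k-mer index, and a timeit harness on a random toy “genome”:

# Rough sketch of the timing comparison: naive exact matching vs. a k-mer index.
import random
import timeit
from collections import defaultdict

def naive_exact(p, t):
    """Return all offsets where pattern p occurs exactly in text t."""
    occurrences = []
    for i in range(len(t) - len(p) + 1):
        if t[i:i + len(p)] == p:
            occurrences.append(i)
    return occurrences

def build_kmer_index(t, k):
    """Map each k-mer in t to the list of offsets where it occurs."""
    index = defaultdict(list)
    for i in range(len(t) - k + 1):
        index[t[i:i + k]].append(i)
    return index

def query_kmer_index(p, t, index, k):
    """Find p by looking up its first k-mer, then verifying each hit."""
    return [i for i in index[p[:k]] if t[i:i + len(p)] == p]

random.seed(0)
t = "".join(random.choice("ACGT") for _ in range(100000))  # toy "genome" (T)
p = t[5000:5030]                                            # a 30-mer pattern (P)

k = 8
index = build_kmer_index(t, k)  # built once up front; queries are then cheap

print("naive exact:", timeit.timeit(lambda: naive_exact(p, t), number=10))
print("k-mer index:", timeit.timeit(lambda: query_kmer_index(p, t, index, k), number=10))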

 

A Unix one-liner to scrape GI numbers from a SAM file

I recently had a situation where I needed to scrape all of the GI numbers out of a SAM alignment file.  My first instinct was to turn to Python, but I forced myself to find a command-line tool or set of tools to do the task quickly as a one-liner.  First, here is the format of the first two lines of the file:

…..

HWI-D00635:61:C6RH0ANXX:4:1101:3770:8441 0 gi|599154892|gb|EYE94125.1| 1066 255 41M * 0 0 SNPDEMDGNILPWMVHLKRMALEVLKHLWSSKLAAFFTLSE * AS:i:88 NM:i:0 ZL:i:2534 ZR:i:217 ZE:f:3.9e-15 ZI:i:100 ZF:i:-2 ZS:i:125 MD:Z:41

HWI-D00635:61:C6RH0ANXX:4:1101:3770:8441 0 gi|115387347|ref|XP_001211179.1| 1065 255 41M * 0 0 SNPDEMDGNILPWMVHLKRMALEVLKHLWSSKLAAFFTLSE * AS:i:77 NM:i:8 ZL:i:1670 ZR:i:188 ZE:f:8.9e-12 ZI:i:80 ZF:i:-2 ZS:i:125 MD:Z:1Y3V3V11F11SSY3A1

….

You can see that what I want is the information between the pipes in the field “gi|#########|” .   Here is how I solved this with a bash script:


#!/bin/bash
# For every SAM file in the current directory, pull the GI number out of the
# reference-name field (e.g. gi|599154892|gb|EYE94125.1|) and write one per line.
for file in *.sam
do
    echo ${file}
    cat ${file} | egrep -o "\|\S*\|(\S*)\|" | sed 's/|/,/g' | cut -f 2 -d ',' > ${file}.out
done

To unpack this briefly, the “cat” command outputs each line of the file to stdout, which is piped to “egrep”.   egrep looks for the regular expression “\|\S*\|(\S*)\|”.   This expression matches a pipe, followed by any number of non-whitespace characters, another pipe, more non-whitespace characters, and a final pipe.  The pipes are escaped with a backslash (“\”) because “|” is a special character in regular expressions.

The next step is to pipe to “sed”, which takes the incoming stream and replaces the pipes with commas.  This output is sent to “cut”, which uses the commas as delimiters and takes the second field, which is the GI number.

There are probably shorter ways to do this (cutting directly on the pipes, for example), but even this attempt at the command line saved me a lot of time over coding it in Python.
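For comparison, here is roughly what the Python version I was trying to avoid might have looked like.  This is just a sketch, and it assumes the reference name is the third whitespace-delimited column and the GI number is its second pipe-delimited token, as in the lines shown above:

# Sketch of the Python alternative (field layout assumed from the SAM lines above).
import glob

for path in glob.glob("*.sam"):
    with open(path) as sam, open(path + ".out", "w") as out:
        for line in sam:
            if line.startswith("@"):          # skip any header lines
                continue
            fields = line.split()
            if len(fields) < 3:               # skip blank or malformed lines
                continue
            rname = fields[2]                 # e.g. gi|599154892|gb|EYE94125.1|
            if rname.startswith("gi|"):
                out.write(rname.split("|")[1] + "\n")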

Hands-on with cancer mutational signatures, part 2

In this second part of the “Hands On” series, I want to show how to create the input for the MatLab mutational signature framework from the output of the Python code I wrote to prepare the SNP data for analysis.

First, we need to create a MatLab .mat file for input to the program.   The code expects an input file that contains a set of mutational catalogues plus metadata about the cancer type and the mutational types and subtypes represented in the data.

Fig 1. The required data types within one .mat file to run the framework.

As you can see from Fig 1, you need to provide a 96 by X matrix, where X is the number of samples in your mutational catalogue.  You also need an X by 1 cell array specifying sample names, a 96 by 1 cell array specifying the subtypes (ACA, ACC, ACG, etc…), and a 96 by 1 cell array specifying the types (C>A, C>A, C>A, etc…).  These must be in the same order as the rows of the “originalGenomes” matrix or the results won’t make sense.

My code outputs .csv files for all of these needed inputs.  For example, when you run my python code on your SNP list, you will get a “subtypes.csv”, “types.csv”, “samples.csv”, and “samples_by_counts.csv” matrix (i.e., originalGenomes) corresponding to the above cell arrays in Fig 1.

Now, the challenge is to get those CSV files into MatLab.  You should have downloaded and installed MatLab on your PC.  Open MatLab and select “Import Data.”

Fig 2. Select the “Import Data” button.

Browse to one of the output CSV files and select it.  It will open in a new window like in Fig 3 below:

Fig 3. The data import window from MatLab.

Be sure to select the correct data type in the “imported data” section.  Also, select only row 1 for import (row 2 is empty).  Once you’re finished, click Import Selection.  It will create a 1×96 cell called “types.”  It looks like this:

Fig 4. The new imported cell data “types.”

We’re almost done, but we have to switch the cell to be 96×1 rather than 1×96.  To do this, just double-click it and select “transpose” in the variable editor.   Now you should repeat this process for the other CSV input files, being sure to select “matrix” as the data type for the “samples_by_counts” file.   Pay special attention to make sure the dimensions and data types are correct.
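If you would rather script this step than click through the import GUI, a small Python sketch using numpy and scipy.io.savemat can assemble the .mat file directly from the CSV outputs.  Note that the variable names below (“originalGenomes”, “sampleNames”, “subtypes”, “types”, “cancerType”) are my reading of Fig 1, so check them against what example2.m actually expects, and adjust the helper if your CSVs are written one value per line rather than one row:

# A sketch (not part of the framework): build the .mat input from the CSV outputs.
import csv
import numpy as np
import scipy.io as sio

def as_cell_column(path):
    """Read a one-row CSV into an N x 1 MATLAB cell array (object dtype)."""
    with open(path) as fh:
        row = next(csv.reader(fh))
    return np.array(row, dtype=object).reshape(-1, 1)

# 96 x X matrix of mutation counts (rows = subtypes, columns = samples)
original_genomes = np.loadtxt("samples_by_counts.csv", delimiter=",")

mat_contents = {
    "originalGenomes": original_genomes,
    "sampleNames": as_cell_column("samples.csv"),
    "subtypes": as_cell_column("subtypes.csv"),   # ACA, ACC, ACG, ...
    "types": as_cell_column("types.csv"),         # C>A, C>A, C>A, ...
    "cancerType": "my_cancer_type",               # placeholder label
}

sio.savemat("my_catalogue.mat", mat_contents)     # point example2.m at this file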

Once you have everything in place, you should be ready to run the mutational signature framework from the paper.   To do this, open the “example2.m” MatLab script included with the download.  In the “Define parameters” section, change the file paths to point at the .mat file you just created:

Fig 5. Define your parameters for the signature analysis.

 

Here in Fig 5 you can see that I’ve specified 100 iterations per core, a range of 1 to 8 possible signatures, and the relative paths to the input and output files.  The authors say that ~1000 iterations are necessary for accuracy, but I’ve found little difference in the predictions between 50 and 500 iterations.   I would still perform as many iterations as time and computational power allow.

Note also that you may need to change the part of the code that sets up the parallel computing pool.  Since MatLab 2014, I believe, the “matlabpool” command has been deprecated.   Substituting the “parpool” machinery works fine for me (Mac OS X 10.10, 8-core MacBook Pro Retina) as follows:

if isempty(gcp('nocreate')) % no parallel pool is running yet

    parpool('local'); % opens the default local pool

end

This post is getting lengthy, so I will stop here and post one more part later about how to compare the signatures you calculate with the COSMIC database using the cosine similarity metric.

Mutational signatures in cancer with DNA-Seq

A recent collaboration with a clinician here at UI hospital and clinics introduced me to the idea of mutational signatures in cancer.  Characterizing mutational signatures is made possible by the falling cost and increasing accuracy of whole-genome sequencing.  Tumors are sequenced across the entire genome, and the catalog of somatic mutations (i.e., SNPs) is used to compute the mutational signatures of a tumor’s genome.

The idea is that the somatic mutations found in a tumor are the result of a variety of defective DNA-repair or DNA-replication machinery combined with the action of known or unknown mutagens and environmental exposures.  These processes operate over time and leave a “footprint” in the tumor DNA that can be examined.  The sum of all of the mutational processes operating within a tumor cell is a distinct mutational “signature” that differs by tumor type.

For example, in lung cancer, the bulk of somatic mutations are C>A transversions resulting from chronic exposure to tobacco smoke.  In melanoma, the predominant mutation type is C>T and CC>TT at dipyrimidines, a mutation type associated with UV-light exposure.  And in colorectal cancer, defective DNA mismatch repair contributes the majority of the mutations.

Mutational signature of a cancer by the operation of several mutational processes over time. From http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3990474/figure/fig0005/. Used under CC License BY 3.0.

A recent paper in Nature formalized this notion of mutational signatures in tumors and provided a mathematical framework (written in MatLab) for assessing how many and which signatures are operating within an uncharacterized tumor type (generally between 2 and 6 processes).

In the paper, the authors analyzed almost 5 million somatic cancer SNPs and identified 21 unique signatures of mutational processes through a mathematical process of deconvolution, followed by experimental validation.  A curated catalog of the most current signatures based on available sequence data can be found at the COSMIC database.

In part 2 of this post, I’ll go into more detail on the mutational signatures and link to some python code I’ve written to help get flat-file lists of SNPs into the correct form for easy input into the MatLab framework.

 

 

 

Exploratory analysis of human splice-altering variants

Single splice-altering variants can alter mRNA structure and cause disease

The splicing of introns and joining of exons to form mRNA depend on complex cellular machinery and on conserved sequences within introns.  Single-nucleotide variants in splicing consensus regions, or “scSNVs” (defined as −3 to +8 at the 5’ splice site and −12 to +2 at the 3’ splice site), have the potential to alter the normal pattern of mRNA splicing in deleterious ways.  Even variants that are exonic and synonymous (i.e., they do not alter the amino acid incorporated into the polypeptide) can potentially affect splicing.  Altered splicing can have important downstream effects in human diseases such as cancer.

Using machine-learning to predict splice-altering variants

In the paper “In silico prediction of splice-altering single nucleotide variants in the human genome,”  the researchers took on the problem of predicting which single-nucleotide variants (SNVs) have the potential to be splice-altering by computing “ensemble scores” for potential variants, combining the results of several popular splicing prediction software tools into one probability score.

They did this by training “random forest” (rf) and “adaptive boosting” (adaboost) classifiers from machine learning, whose ensemble predictions are demonstrated to outperform predictions from any individual tool, improving both the sensitivity and specificity of the predictions.

As part of their supplementary material, the authors pre-computed rf and adaboost scores for every SNV in a library of nearly 16 million such sites collated from the human RefSeq and Ensembl databases.   The scores are probabilities (0 to 1) that a particular SNV is splice-altering.

Exploratory analysis of the database

I performed an exploratory data analysis of chromosome 1 (chr1) SNVs from the database made available with the paper.

First, I just looked at where the SNVs on chrom 1 were located as classified by Ensembl region:

Fig 1. Number of variants in the scSNV database on chromosome 1  by Ensembl region. Not surprisingly, ‘intronic’, ‘exonic’, and ‘splicing’ are the most common regions for potential splice-altering SNPs.

As can be seen from Fig 1, most of the SNVs are located in introns, exons, and splicing consensus sites according to their Ensembl records.

Next, I created histograms of the chr1 SNVs by their Ensembl classification, looking at rf scores only (keep in mind that the y-axis scale for the plots in Figs 2 and 3 differs dramatically between regions).  The x-axis is the probability of being splice-altering according to the pre-computed rf score.

Fig 2. Random-forest (rf) score by ensembl region for the ~15 M scSNVs in the database.

I noticed that within ‘exonic’ regions on chr1 the rf scores take on a broad range of values from 0.0 to 1.0, while in other regions like ‘UTR3’, ‘UTR5’, ‘downstream’, etc… the distributions are narrowly skewed towards zero.  For the ‘intronic’ region, the majority of sites have a low probability of being splice-altering, while at the ‘splicing’ consensus sites the vast majority are predicted to be splice-altering.  This makes intuitive sense.
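For anyone who wants to reproduce this kind of figure, here is a minimal pandas/matplotlib sketch of the approach.  It is not my exact code, and the file name and the “Ensembl_region” and “rf_score” column names are assumptions, so check them against the header of your copy of the database:

# Sketch: per-region histograms of rf scores (file and column names assumed).
import pandas as pd
import matplotlib.pyplot as plt

scsnv = pd.read_csv("scSNV_chr1.txt", sep="\t")          # chr1 slice of the database

regions = scsnv["Ensembl_region"].dropna().unique()
fig, axes = plt.subplots(nrows=len(regions), figsize=(6, 2 * len(regions)), squeeze=False)

for ax, region in zip(axes.flat, regions):
    scores = pd.to_numeric(
        scsnv.loc[scsnv["Ensembl_region"] == region, "rf_score"], errors="coerce"
    ).dropna()
    ax.hist(scores, bins=50)
    ax.set_title(region)
    ax.set_xlabel("rf score (probability of being splice-altering)")

fig.tight_layout()
fig.savefig("chr1_rfscore_by_ensemblregion.png")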

I performed the same analysis for the adaboost scores, shown in Fig 3 (below).  You can see that the adaboost scores take on a more binary distribution than the rf scores, with any individual SNV likely to be classified as ~1 or ~0 by the adaptive-boosting method.  Just like the rf scores, SNVs in ‘exonic’ regions are about as likely to be splice-altering as not, while those in ‘splicing’ regions are highly likely to be splice-altering.  An SNV in an ‘intronic’ region is ~3X more likely to have no effect on splicing than to be splice-altering.

Fig 3. Ada score by Ensembl region for the scSNV database.

 

Finally, I looked at the relationship between the two scoring methods for the SNVs that fall within the Ensembl-characterized ‘splicing’ regions on chrom 1.  That scatter plot is shown below in Fig 4.

I suppose I was expecting a tight linear correlation between the two approaches; however, the data show that the rf and adaboost methods differ substantially in their assessment of the SNVs in these regions.

It is obvious from the plot below that there are many SNVs that the rf method considers to have low probability of being splice-altering that are found to have very high (>0.9) probability by the adaboost method.

Fig 4. Scatter plot of rf score vs. ada score for SNVs within ‘splicing’ regions on chrom 1.

This result suggests that if one is going to classify variants as “splice-altering” using this database, it would be best to consider both predictions (or some combination of them) rather than rely on either score alone, at least if the goal is not to miss any potentially important sites.  Conversely, if the goal is to consider only sites with a very high likelihood of being splice-altering, a threshold could be set such that both scores need to be above 0.8, for example.
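As a concrete example of that last point, requiring both scores to clear a threshold is a one-liner in pandas (same assumed file and column names as in the sketch above):

# Sketch: combine the two scores rather than relying on either alone.
import pandas as pd

scsnv = pd.read_csv("scSNV_chr1.txt", sep="\t")
rf = pd.to_numeric(scsnv["rf_score"], errors="coerce")
ada = pd.to_numeric(scsnv["ada_score"], errors="coerce")

both_high = scsnv[(rf > 0.8) & (ada > 0.8)]      # high confidence by both methods
either_high = scsnv[(rf > 0.8) | (ada > 0.8)]    # flagged by either method

print(len(both_high), len(either_high))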

Concatenate several lanes of Illumina HiSeq reads quickly

If you have raw Illumina HiSeq or MiSeq reads run across several lanes, you may need to concatenate them at the command line before trimming and merging them. Raw data from Illumina sequencers generally follow a standard naming convention. The files might look like this:

16_TAAGGCGA-TATCCTCT_L004_R1_001.fastq
16_TAAGGCGA-TATCCTCT_L005_R1_001.fastq
16_TAAGGCGA-TATCCTCT_L006_R1_001.fastq
16_TAAGGCGA-TATCCTCT_L007_R1_001.fastq
16_TAAGGCGA-TATCCTCT_L008_R1_001.fastq

Here, ’16’ is the sample ID, followed by the barcode sequence; ‘L00X’ is the lane number; and ‘R1’ indicates forward reads.

An easy way to script this quickly is as follows:

cat `find . -maxdepth 1 | egrep '16.*R1'` > 16_TAAGGCGA_R1.fastq &

There’s a lot going on here, so let’s unpack this briefly.

The backquotes (`) mean that the output of the enclosed command is presented as command-line arguments to the enclosing command.

So in this case the ‘find’ command is listing all file names in the current directory at a depth of 1 (no recursion into lower directories).

Next, this is piped (with |) to egrep, which searches the list of filenames for a regular expression describing a string containing ‘16’, followed by any number of any characters, then an ‘R1’. Since egrep prints every line that contains a match, this will find the files above.

These filenames are then passed to ‘cat’ as command-line arguments. The concatenated output is redirected with the greater-than sign (‘>’) to the new fastq file. The ‘&’ tells the shell to run this in the background.

I hope this is useful. I spent about an hour tracking this all down, so I wouldn’t have to process dozens of samples by hand.

Spot the dancing gorilla to code better python

OK, OK, I know the title of this post falls into the gray area between informative and “click-bait.” 

However, now that you’re here, watching the following talks by Python Core Developer and coding guru Raymond Hettinger will be both immediately useful and highly entertaining! 

PyCon 2015 — Beyond PEP8

Can you spot the dancing gorilla in your code?

PyCon 2013 — Class Development toolkit

From Mom’s basement to a loft in SOMA, Python classes solve your startup woes

 

How not to use IPython.parallel on a laptop

In this post I want to focus on an aspect of using the IPython.parallel implementation that may be confusing to new users.

In the IPython.parallel documentation, one of the first things you do to show that you have started the parallel Python engines is a call to Python’s built-in “map” function with a lambda that takes x to the 10th power over a range of x.
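For context, the snippets below assume you have already started some engines (for example with “ipcluster start -n 4”) and created a load-balanced view.  A minimal sketch of that boilerplate, as I understand it, looks like this:

# Minimal IPython.parallel setup sketch (my reconstruction, not code from the docs).
from IPython.parallel import Client

rc = Client()                      # connect to the controller and its engines
lview = rc.load_balanced_view()    # the 'lview' used in the map calls below
print(rc.ids)                      # engine IDs, e.g. [0, 1, 2, 3]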

In serial (non-parallel) form that is as follows:

serial_result = map(lambda x:x**10, range(100))

Then, you do the same in parallel with the python engines you’ve started:

parallel_result = lview.map(lambda x:x**10, range(100))

Then, you assert that the results are the same:

assert serial_result == parallel_result

 

This works fine, but there is a problem. You would probably never actually use an IPython.parallel client for work like this. Given that the documentation is aimed at introducing new users, it is a bit confusing to present this simple example without the caveat that this is not a typical use case.

Here is why you’d never actually code this calculation in parallel:

In [8]: %timeit map(lambda x:x**10, range(3000))
100 loops, best of 3: 9.91 ms per loop

In [9]: %timeit lview.map(lambda x:x**10, range(3000))
1 loops, best of 3: 22.8 s per loop

 

Notice that the parallel version of this calculation, over a range of just 3000, took 22.8 seconds to complete! That is roughly 2,300 times slower than just using one core and the built-in map function.

Apparently, this surprising result is because there is a huge amount of overhead associated with distributing the 3000 small, very fast jobs in the way I’ve written statement [9] above.   Every time the job is distributed to an engine, the function and data have to be serialized and deserialized (“pickled”), if my understanding is correct.

In response to my StackOverflow question on this issue, Univerio helpfully suggested the following more clever use of parallel resources (he is using 6 cores in this example):

In [7]: %timeit map(lambda x:x**10, range(3000))
100 loops, best of 3: 3.17 ms per loop

In [8]: %timeit lview.map(lambda i:[x**10 for x in range(i * 500)], range(6))  # range(6) owing to 6 cores available for work
100 loops, best of 3: 11.4 ms per loop

In [9]: %timeit lview.map(lambda i:[x**10 for x in range(i * 1500)], range(2))
100 loops, best of 3: 5.76 ms per loop

Note that what Univerio is doing in line [8] is to distribute equal shares of the work across 6 cores. Now the time to complete the task is within the same order of magnitude as the single-threaded version. If you use just two tasks in example [9], the time is cut in half again owing to less overhead.

The take-home message is that if you’re going to expend the overhead necessary to setup and start multiple IPython.parallel engines and distribute jobs to them, the jobs need to be more resource-consuming than just a few ms each.  And you should try to make as few function calls as possible.  Each call should do as much work as possible.

High performance computing versus high throughput

Xserve G5 supercomputer. Image credit: Christopher Bowns, Flickr

 

Two approaches to scientific computing

The terms “high performance computing” (HPC) and “high throughput computing” (HTC) might sound interchangeable to those not familiar with scientific computing, but they denote two very different approaches to computing.  I’m going to describe the difference below (with the caveat that I have only a layman’s understanding of this field).

High throughput computing is for many smaller tasks

HTC is a computing approach that aims to make a large number of computers available to quickly accomplish tasks that are easily broken up into smaller, independent components.  For example, if you have to process 100 video clips and each one takes ~1 hr, you would need ~100 hrs of computing time on your laptop.

However, if you had 100 laptops, you could theoretically do the task in 1 hr assuming that you could instantly command each one to begin the processing task (in reality, of course, you’d have to run around setting up the task on each computer which could take longer than the compute time).  The point is this: each video processing task is independent of the others.

It is these types of tasks that HTC aims to address.  By providing many hundreds or thousands of networked CPUs in a cluster, along with a software application that automatically tracks and distributes hundreds of tasks (called a DRM, or distributed-resource manager), an HTC user can submit a job such as the video-processing example above and have it automatically farmed out to 100 compute nodes (in the HTC world this is called a “pleasantly parallel” problem).  Once each node completes, the data are copied back into the user’s home folder, and it appears to the user that they have just used one extremely fast computer, when in fact they have used 100 computers working simultaneously.

High performance computing is for difficult computational problems

Now, however, consider the case of a computational task where each subunit is not independent of all the others.  One that I am intimately familiar with is Molecular Dynamics (MD) simulation of protein structure and dynamics.  In an MD simulation, an algorithm simulates the atomic motions of a protein molecule immersed in a box of water molecules over a very short timescale (somewhere on the order of a microsecond).  Even with the short timescale, this is a very compute-intensive task.  But because each atom in the protein interacts with many other atoms in the system, the task cannot be neatly broken down into independent components the way video processing can.   You can’t simply give each atom to a separate compute node.  In effect, an MD simulation is a single, extremely resource-intensive computation.

Enter high performance computing.  In HPC (also called supercomputing), the aim is to build hardware and software that are focused on peak computing capability (i.e., speed) and extremely fast interconnectedness, rather than on the number of simultaneous tasks that can be accomplished.   The “high performance” part of HPC comes about from the technological focus on networking the computational nodes together with extremely fast connections so that communicating data and messages back and forth does not become a significant bottleneck to completing a large-scale computation.

On the software side of HPC, code libraries like MPI have been developed that allow simulations to be “parallelized” (i.e., broken down into smaller pieces).  These smaller pieces (produced by “domain decomposition” in MD simulation) are then farmed out to the compute nodes of an HPC supercomputer, and they exchange data in real time so that each part of the simulation “knows” about the results from every other part.  In this way, the positions and velocities of the atoms in one part of a protein can be influenced by the positions and velocities of all the other atoms, even when they are being simulated on different CPU nodes.
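To make the message-passing idea a bit more concrete, here is a toy sketch using the mpi4py bindings (run with something like “mpiexec -n 4 python halo_demo.py”).  It has nothing to do with any real MD code; each rank simply owns a slice of values and swaps a boundary value with its neighbor, a stand-in for the halo exchange a real domain decomposition performs:

# Toy MPI sketch with mpi4py: each rank exchanges an edge value with a neighbor.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# each rank "owns" a small slice of the system
local_values = [rank * 10 + i for i in range(5)]

# send my right-edge value to the next rank; receive my left neighbor's edge
right = (rank + 1) % size
left = (rank - 1) % size
neighbor_edge = comm.sendrecv(local_values[-1], dest=right, source=left)

print("rank %d: my edge = %d, left neighbor's edge = %d"
      % (rank, local_values[-1], neighbor_edge))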