I’m thrilled to report that I’ve been promoted to the position of Director of our bioinformatics group here at the University of Iowa. We are within the Iowa Institute of Human Genetics (IIHG) and we support clinical activities in the institute, but also a wide array of research collaborations across the University.
I have a lot of goals and ideas for the group and look forward to working to implement those going forward. I may not be able to write posts here as often, but I’ll try to keep up with it. We also have a new Twitter account: @iowabioinfo. Please follow us there.
Excited to announce that I’ve been working with some fantastic computational biologists (that I met at ISMB2018) on a “10 simple rules” style paper for creating and promoting effective bioinformatics collaborations with wet-lab biologists. We will leverage our many years of combined bioinformatics core experience to create these “10 simple rules.”
We will touch on:
–experiment planning and design.
–data management plans, data QC, and record keeping.
–avoiding batch effects and contamination.
–managing expectations and developing clear communications.
–handling low-quality data when things do go wrong.
Hope to have this out by the end of 2018…watch this space!
I just got back from Great Lakes Bio 2017 (GLBIO2017) at the University of Illinois-Chicago (UIC) campus. It was a great meeting and I really enjoyed the quality of the research presented as well as the atmosphere of the campus and neighborhood.
I was very surprised by just how nice the Chicago “West Loop” neighborhood near Randolph Street and down towards Greektown really is. I had some great meals, including a memorable Italian dinner at Formentos.
But the purpose of this post is to briefly describe a few of my favorite talks from the meeting. So here goes, in no particular order:
I was really impressed with Kevin White’s GLBIO2017 talk and demo of his company’s technology (despite the ongoing technical A/V issues!). Tempus Labs is both a clinical sequencing company and an informatics company focused on cancer treatment, and it seeks to pull together all of the disparate pieces of patient data that float around in EHR databases and are oftentimes not connected in meaningful ways.
The company sequences patient samples (whole exome and whole genome) and then also hoovers up reams of patient EHR data using Optical Character Recognition (OCR), Natural Language Processing (NLP), and human expert curation to turn the free-form flat text of medical records from different clinics and systems into a form of “tidy data” that can be accessed from an internal database.
Then, clinical and genomic data are combined for each patient in a deep-learning system that looks at treatments and outcomes for other similar patients and presents the clinician with charts that show how patients in similar circumstances fared with varying treatments, given certain facts of genotype and tumor progression, etc… The system is pitched as “decision support” rather than artificial “decision making.” That is, a human doctor is still the primary decider of treatment for each patient, but the Tempus deep learning system will provide expert support and suggest probabilities for success at each critical care decision point.
The system also learns and identifies ongoing clinical trials, and will present relevant trials to the clinician so that patients can be informed of possibly beneficial trials that they can join.
Murat Eren’s talk on tracking microbial colonization in fecal microbiome transplantation (i.e., “poop pills”) was excellent and very exciting. Although the “n” was small (just 4 donors and 2 recipients) he showed some very interesting results from transferring fecal microbiota (FM) from healthy individuals to those with an inflammatory bowel disease.
One interesting result is that he was able to assemble 97 metagenomes from the 4 donor samples. Following the recipients at 4 and 8 weeks post FM transplant showed that the microbial genomes could be classed into those that transferred and colonized permissively (both recipients), those that colonized one recipient or the other, and those that failed to colonize either. Taxa alone did not explain why some microbes colonized easily, while others failed to colonize.
He also showed that 8 weeks post FM transplant, the unhealthy recipients had improved symptoms. Moreover, in a PCA of gut community composition against 151 human microbiome project (HMP) samples, the recipients had moved from being extreme outliers on day 0 into the “healthy” HMP cluster.
He also investigated differential gene function enrichment between the permissive colonizers and the microbes that never colonized the recipients’ guts, and found that sporulation genes may be a negative factor driving the failure (or success) of transplantation. He proposed that the recent and notable failure of the Seres microbiome drug in clinical trials may stem from the company’s decision to kill the live cultures in favor of more stable spore-forming strains when formulating the drug. His work would suggest that these strains are less successful at colonizing new hosts.
With the ever-increasing volume and complexity of genomic and regulatory data, there is a need for accessible interfaces to it. Bo Zhang’s group at Penn State has worked to make a new type of genome browser available that focuses on the 3D structure of the genome, pulling together disparate datatypes including chromatin interaction data, ChIP-Seq, RNA-Seq, etc… You can browse a complete view of the regulatory landscape and 3D architecture of any region of the genome, and you can check the expression of any queried gene across hundreds of tissue/cell types measured by the ENCODE consortium. On the virtual 4C page, they provide multiple methods to link distal cis-regulatory elements with their potential target genes, including virtual 4C, ChIA-PET and cross-cell-type correlation of proximal and distal DHSs.
All in all, GLBIO2017 was a very enjoyable and informative meeting where I met a lot of great colleagues and learned much. I am looking forward to next year!
I wanted to demonstrate further how powerful and straightforward the pandas library is for data analysis. A good example comes from the book “Bioinformatics Programming Using Python,” by Mitchell Model. While this is an excellent reference book on Python programming, it was written before pandas was in widespread use as a library.
In the “Extended Examples” on p. 158 of Chapter 4, the author demonstrates some code to read in a text file containing the names of enzymes, their restriction sites, and the patterns that they match. The code takes the text file, cleans it up, and makes a dictionary that is searchable by key. This is done using only core Python tools, and it looks like this (note: I am using Python 2.7, hence the need to import “print_function” from “__future__”):
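The book’s exact listing isn’t reproduced here, but the approach can be sketched like this (function names, and the few illustrative data lines in the REBASE-style layout, are my own, not the book’s; note that the “site” column can be blank):

```python
from __future__ import print_function

# A few illustrative lines in the REBASE-style layout: enzyme name,
# optional prototype site, and recognition pattern. AarI's "site" field
# is blank, so that line splits into only two fields.
SAMPLE = """\
AanI      PsiI    TTA^TAA
AarI              CACCTGCNNNN^
AasI      DrdI    GACNNNN^NNGTC
"""

def parse_rebase(lines):
    """Build a dict mapping enzyme name -> recognition pattern."""
    table = {}
    for line in lines:
        fields = line.split()   # split on any run of whitespace
        if not fields:
            continue            # skip blank lines
        name = fields[0]
        pattern = fields[-1]    # the pattern is always the last field,
                                # even when the "site" column is missing
        table[name] = pattern
    return table

def test():
    table = parse_rebase(SAMPLE.splitlines())
    for name in sorted(table):
        print(name, table[name])

test()
```

The real code in the book also skips the file’s header lines and handles more edge cases; the point is the amount of manual bookkeeping that core Python requires here.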
The last few lines of output from calling test() are as follows:
Hold onto your seats because you can do all of that and more with just 5 lines of code using pandas (if you don’t count the imports):
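Here is a sketch of those five lines (the column names are my own labels, and I feed in a small inline sample rather than the real REBASE file, so the skiprows=8 step described below is omitted because the sample has no header):

```python
import io

import numpy as np
import pandas as pd

# Illustrative data in the REBASE-style layout. AarI has no "site" field,
# so only two fields parse and its pattern initially lands in the "site"
# column, leaving "pattern" empty (NaN).
DATA = io.StringIO(
    "AanI      PsiI    TTA^TAA\n"
    "AarI              CACCTGCNNNN^\n"
    "AasI      DrdI    GACNNNN^NNGTC\n"
)

# Read with a regex separator ("any number of whitespaces") via the
# python engine; the real file would also need skiprows and a header row.
rebase = pd.read_table(DATA, sep=r"\s+", engine="python",
                       names=["enzyme", "site", "pattern"])
mask = rebase["pattern"].isnull()                       # rows that parsed only two fields
rebase.loc[mask, "pattern"] = rebase.loc[mask, "site"]  # move the pattern into place
rebase.loc[mask, "site"] = np.nan                       # leave the missing site as NaN
```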
The read_table function can take regex separators (in this case “any number of white spaces”) when using the “python” engine option. We skip the first 8 rows because they have no information. The header is set as the second row after the skipped rows.
I then use a boolean mask to find the places where isnull() is true down the “pattern” column. This is because some rows lack a “site” entry, so pandas found only two data fields when separating on whitespace and thus left the third column empty, not knowing there was missing data. Wherever the pattern column is null, I assign the missing values into the pattern column from the site column. I then replace those site column values with NaN.
The first few lines of the ‘rebase’ dataframe object look like this:
Technically, what I just did in pandas is not quite the same thing as the core Python version above. It is in many ways far better. First, all of the blank spaces in the second column are now “NaN” instead of blanks. This makes data analysis easier. Second, the object “rebase” is a dataframe that allows access to all of the dataframe methods. It is also indexed by row and has named columns for easier interpretation. The dataframe also automatically “pretty prints” for easy reading, whereas the table created using core Python has to be formatted with additional function definitions to print to stdout or to file in a readable way.