Tackling challenging targets with Chemotype Evolution

Carmot Therapeutics, a small company located in San Francisco’s Mission Bay, has developed an innovative drug discovery technology called Chemotype Evolution (CE) that relies on fragment-based discovery but differs from traditional FBDD and HTS approaches in important ways.

The first important innovation is that CE relies on a “bait” molecule as a starting point for screening.  The bait can be a known ligand, cofactor, or inhibitor.  The bait is then derivatized with a linker moiety that allows it to be chemically bonded to every fragment in a proprietary library.  This process generates a screening library containing thousands of bait-fragment hybrids.

These hybrids are then screened against the target for binding using either biophysical or biochemical screening techniques in a high-throughput plate format.

The most powerful aspect of CE is the ability to iterate over chemical space, allowing access to an exponential number of possible fragment-bait hybrids.  The method can be iterated with new “baits” derived from the best fragment hits of the previous round.  Thus, instead of being limited to the 7,000 fragments in your library, after 3 iterations you can access 7,000^3 possible combinations (343 billion possible compounds), selecting only the most target-relevant chemotypes at each stage.
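
To make that combinatorial growth concrete, here is a quick back-of-the-envelope sketch in Python. It simply raises the library size to the number of rounds; the 7,000-fragment library size is the illustrative figure used above, not Carmot’s actual screening deck.

```python
# Back-of-the-envelope: chemical space accessible by iterating a
# fragment library in a Chemotype Evolution-style workflow.
LIBRARY_SIZE = 7_000  # fragments per round (illustrative figure from the text)

for rounds in range(1, 4):
    accessible = LIBRARY_SIZE ** rounds   # combinations reachable in principle
    synthesized = LIBRARY_SIZE * rounds   # hybrids actually made and screened
    print(f"Round {rounds}: {synthesized:,} hybrids screened, "
          f"{accessible:,} combinations accessible")
```

After round 3 this prints 343,000,000,000 accessible combinations (the 343 billion figure quoted above), while only about 21,000 hybrids, assuming roughly 7,000 per round, ever need to be synthesized and screened.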

Figure: Schematic of the Chemotype Evolution process through 3 iterations. Note that at any point after each iteration, the hit molecules can be taken into hit-to-lead optimization.

The CE approach is similar in concept to the “tethering” approach pioneered at Sunesis, but differs in that no cysteine residues need to be engineered into the protein.  The bait molecule plays the role of the engineered cysteine, providing a “handle” that binds to the target and selects for complementary fragment binders.

Carmot Therapeutics just embarked upon its first major industry collaboration with the January 2014 announcement of a partnership with Amgen to use CE technology against two challenging targets.  Hit identification and lead development will be carried out jointly by the two companies, while clinical trials will proceed at Amgen.  I think Carmot is definitely a company to watch, given its innovative and potentially paradigm-shifting discovery technology and increasing interest from big pharma.


The peril of big (flu) data

There is an interesting new post at “In the Pipeline” that summarizes the performance of Google’s “big data” project to track flu trends from search terms.  In short, the predictive performance appears to be pretty bad so far, at least compared to what you might have expected given the hype around “big data.”  The author raises some key points, including the importance of high-quality data, even in very large datasets.  I particularly like this analogy:

“The quality of the data matters very, very, much, and quantity is no substitute. You can make a very large and complex structure out of toothpicks and scraps of wood, because those units are well-defined and solid. You cannot do the same with a pile of cotton balls and dryer lint, not even if you have an entire warehouse full of the stuff.”  –In the Pipeline, March 24, 2014

Data filtering and modeling approaches will likely continue to improve, however, and I think this project is worth watching in the future.


Improve your docked poses with receptor flexibility

I have noticed that rigid docking methods, even when run with high-precision force fields, don’t always capture the correct poses for your true positives.  Sometimes a hit will be docked somewhere other than into the site that you specified because the algorithm could not fit the molecule into the rigid receptor.  This will cause true positives to be buried at the bottom of your ranked list.

You may want to try introducing receptor flexibility to improve the poses of your true positives.  There are two main ways to do this:  scale down the van der Waals radii of the receptor atoms to mimic flexibility (i.e., make the receptor atoms “squishy”) or use induced-fit docking (IFD) methods.  I have found that while lowering the VdW scaling factor can rescue false negatives (poorly docked true binders), at least in one case it did not improve the overall ranking of all of the true positives.  So it is not a panacea.
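
To illustrate what “squishy” receptor atoms mean in practice, here is a minimal sketch of a 12-6 Lennard-Jones clash term with a scale factor applied to the receptor radius. This is a generic illustration of the idea, not the scoring function of any particular docking package, and the radii, distance, and epsilon values are arbitrary.

```python
def lj_energy(distance, r_lig, r_rec, vdw_scale=1.0, epsilon=0.1):
    """Simplified 12-6 Lennard-Jones term between a ligand and a receptor atom.

    vdw_scale < 1.0 shrinks the effective receptor radius, softening steric
    clashes as a crude stand-in for receptor flexibility ("squishy" atoms).
    """
    sigma = r_lig + vdw_scale * r_rec      # treat the (scaled) radius sum as sigma
    ratio = sigma / distance
    return 4.0 * epsilon * (ratio ** 12 - ratio ** 6)

# Two ~1.7 A atoms 3.2 A apart: a mild clash at full radius becomes a
# favorable contact once the receptor radius is scaled down to 0.8.
for scale in (1.0, 0.8):
    print(f"VdW scale {scale}: energy = {lj_energy(3.2, 1.7, 1.7, vdw_scale=scale):+.3f}")
```

Softening the potential in this way lowers clash penalties for every molecule, decoys included, which is consistent with the observation above that it can rescue individual false negatives without necessarily improving the overall ranking.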

Induced fit methods work by mutating away several side chains in the binding pocket, docking a compound, mutating the side chains back, and energy minimizing the structure.  Then the compound is re-docked to the minimized structure using a high-precision algorithm.  There are two main applications for IFD: (1) improving the pose of a true positive that cannot be docked correctly by rigid docking and (2) rescuing false negatives.
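
The workflow described above can be summarized as a loop like the sketch below. This is schematic pseudocode: every function it calls (mutate_to_alanine, soft_dock, restore_side_chains, energy_minimize, precision_dock) is a hypothetical placeholder for the corresponding step in a real IFD package, not an actual API.

```python
def induced_fit_dock(receptor, ligand, pocket_residues):
    """Schematic induced-fit docking protocol (placeholder functions throughout)."""
    # 1. Temporarily trim bulky side chains so the ligand can be placed.
    trimmed = mutate_to_alanine(receptor, pocket_residues)

    # 2. Dock the compound into the more permissive, trimmed pocket.
    initial_poses = soft_dock(trimmed, ligand)

    refined = []
    for pose in initial_poses:
        # 3. Restore the original side chains and energy-minimize the complex
        #    so the pocket adapts around the docked ligand.
        restored = restore_side_chains(trimmed, pocket_residues)
        relaxed = energy_minimize(restored, pose)

        # 4. Re-dock into the relaxed receptor with a high-precision algorithm.
        refined.append(precision_dock(relaxed, ligand))

    # Rank the refined poses by docking score (best, i.e. lowest, score first).
    return sorted(refined, key=lambda p: p.score)
```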

My experience has been that IFD improves the docking scores of true positives and false positives by about the same amount, so the value of running the method on an entire library remains unclear.  However, there is much value in running IFD on a true hit where you are not sure the rigid pose is optimal.  Often, the improvement in the shape complementarity and number of interactions will be dramatic.

Also, you can use the alternative receptor conformations generated by running IFD on a true positive to rescreen your library with faster rigid docking methods.  If you are screening prospectively, this approach could help you identify other chemotypes that may bind well but are missed in a first-pass rigid docking screen.
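
A minimal sketch of that ensemble rescreening idea is shown below: dock the library against each IFD-generated receptor conformation with a fast rigid method and keep each ligand’s best score. The rigid_dock_score callable is a hypothetical placeholder for whatever fast docking engine you use.

```python
def ensemble_rescreen(library, receptor_conformations, rigid_dock_score):
    """Rank ligands by their best rigid-docking score over several receptor conformations.

    rigid_dock_score(receptor, ligand) is a placeholder for any fast rigid
    docking call; lower scores are assumed to be more favorable.
    """
    best_scores = {}
    for ligand in library:
        # Keep the most favorable score across all conformations, so chemotypes
        # that only fit an induced-fit pocket are not lost in the first pass.
        best_scores[ligand] = min(rigid_dock_score(rec, ligand)
                                  for rec in receptor_conformations)
    # Best (lowest) score first.
    return sorted(best_scores.items(), key=lambda item: item[1])
```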

Why isn’t pharma making blockbuster antibiotics?

It seems intuitive that there would be a large market for new, highly-effective antibiotics.  Doctors are warning publicly about the waning effectiveness of today’s antibiotics owing to over-prescription and increased drug resistance.

The linked article even mentions that one course of action could be to provide government incentives for the industry to develop new antibiotics.  But if the market already creates profit potential, why would government incentives be necessary in the first place?

I had never heard a suitable explanation for this situation until recently, when the following theory was advanced in a conversation:  if new wonder drugs are developed, they will be “held back” by doctors seeking to preserve them as last-line-of-defense antibiotics, and will therefore not be heavily prescribed, dramatically limiting profitability.

Does the above explanation make sense?  Is there more to the story?  Share your thoughts in the comments below.


Why you should think exponentially to grasp the future of medicine

People often assume that the world tomorrow will be pretty much like the world today.  We all have an in-built bias towards linear thinking when we ponder the future.  Although a linear bias was helpful for thousands of years of our evolution, technology today is changing at an exponential pace, and in order to better anticipate future market opportunities and technology’s impact on society, it is crucial to think in terms of exponential trends.  This is a point that renowned futurist Ray Kurzweil has made in his many books and speeches over the last several decades.

One example of an exponential trend in biology (among many) is the cost per genome sequence (graph below).  As recently as 2001, the cost to sequence a genome was an astronomical $100M.  Between 2001 and 2007, the cost decreased exponentially (a straight line on a log plot), to the point where a genome in 2007 cost only $10M to sequence.  Around 2007, a paradigm shift in technology massively accelerated this exponential process, and the cost decreased even faster than before, hitting just $10K in 2012.

Figure: Cost per genome sequenced over time (2001 to 2012).
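
As a rough check on how steep that curve is, the average annual rate of decline can be computed directly from the approximate costs quoted above (round numbers, not exact figures):

```python
def annual_decline(cost_start, cost_end, years):
    """Average annual percentage drop in cost over the given span."""
    return (1 - (cost_end / cost_start) ** (1 / years)) * 100

print(f"2001-2007: ~{annual_decline(100e6, 10e6, 6):.0f}% cheaper per year")  # ~32%
print(f"2007-2012: ~{annual_decline(10e6, 10e3, 5):.0f}% cheaper per year")   # ~75%
```

This matches the acceleration described above: the cost fell by roughly one order of magnitude in the six years before 2007 and by three orders of magnitude in the five years after.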

As economists are fond of saying, when the price falls, more is demanded.  As a result of this massively reduced sequencing price, many more partial and complete genomes are being sequenced than ever before.  The dramatic, exponential gains in price/performance of sequencing technology have unleashed a tidal wave of sequence data.