The past two decades have witnessed a bleak and implacable trend in the pharmaceutical industry. Investment in pharmaceutical R&D has been rising, while the number of new drugs winning approval has actually declined over the same period. In other words, the industry is spending ever more money only to see the returns on its investments grow ever smaller. By recent estimates, the average cost of bringing a new drug to market is now about $1.2 billion. At an AAAS science and technology forum in 2007, William Haseltine, the founder of Human Genome Sciences, said, “What I see is an industry in very deep trouble,” supporting his statement with industry statistics compiled that year, which showed that the annual number of new drug approvals was down substantially despite a 10-fold increase in R&D spending over the same period.
To put this in perspective, we have reached the point at which the average pharmaceutical company needs each new drug to be a billion-dollar molecule just to recover the costs of its development! So how on earth did it come to this?
As somebody much smarter (and wittier) than me once noted, the truth is never pure and rarely simple, and there are probably no easy answers to this question. During this period of declining returns on investment, there have, however, been some other trends in the industry that I believe are worthy of consideration.
Interestingly, this period has seen an effective doubling of the amount of scientific data gathered each year. Thanks to incredible advances in laboratory instrumentation and computing, both the scope and the resolution of the data that can be captured in the laboratory have increased to the point at which scientists working on data-intensive projects must routinely deal with terabytes of data. And while having more and better data is always valuable, a huge problem facing scientists right now is that the technology for gathering and storing data has outstripped our ability to assimilate it and to turn it into real knowledge. In some sense, we “know” more and more about the biological systems we study, yet far from helping us to develop new and better therapies, the divergence between our investment in collecting all of this data and the productivity that results from it continues to grow larger with each passing year.
All great truths begin as blasphemies
George Bernard Shaw
In tandem with this technology-driven surge in our ability to collect more and better data, there has been an accompanying movement in the life sciences towards the kind of phenomenological, data-driven approaches that these new technologies have enabled. It seems that every area of biology now has its own ‘omics, even though genomics, the original ‘omic field, has yet to live up to its much-vaunted promise. Hot on the heels of the Human Genome Project, some luminaries in the field were predicting a cure for cancer within a couple of years, and yet here we are a decade or more later, still waiting for the “magic bullet” therapies that our “knowledge” of the human genome was supposed to unleash.
Now don’t get me wrong, the data gathered in the course of the Human Genome Project is invaluable, but it is still data, not knowledge. In retrospect, we can now see that what was perhaps not fully understood at the time was how incomplete our real knowledge of the human genome was (and still is). Turning data into real knowledge requires an intellectual framework to organize and make sense of it. To use an extreme analogy, even if we could map the connections between every neuron in a human brain, this vast data set would not by itself be sufficient to explain much about human consciousness. Similarly, the Human Genome Project has yielded vast volumes of sequence data, yet we still do not have the means to translate most of this information into the kind of useful insight that would allow us to make better drugs. In spite of the hyperbole surrounding the project, even at the time there were those who cautioned against unrealistic expectations.
Do what you’ve always done and
you’ll get what you’ve always gotten
It is my conviction that one of the major reasons for the stagnation in real progress in the pharmaceutical industry over the last couple of decades has been an overly heavy reliance on the kind of data-driven approaches characterized by the ‘omics fields and discovery strategies such as high-throughput screening, to the detriment of more knowledge-driven approaches. This is not to say that these data-driven approaches have not had their successes, nor that the choice to pursue them is entirely unjustified. They are a response to the daunting challenge of tackling the vast complexity of biological systems, and the product of considerable optimism about the capabilities of the new technologies that make these approaches possible. Rather than trying to really understand the system, these approaches aim to sidestep biological complexity by the outwardly more pragmatic approach of, well … trying stuff to see what works.
The declining success rates for new drugs offer little comfort to those who feel that the industry is still on a good course. Only 15% of biologics obtain approval, compared to a mere 7% of traditional small-molecule drugs. These figures also fail to take into account the incredible costs associated with “successful” drugs that actually reach the market and start generating revenue, only to turn into money pits themselves as a result of safety issues that time and patient numbers start to make evident. The list of drugs like Vioxx, Avandia and Accutane that have become household names for all the wrong reasons continues to grow, with lawsuit damages and settlements compounding the already burgeoning costs of drug development.
Daunting though it might be to contemplate tackling the complexity of biological systems head-on, it is my belief that a greater emphasis on knowledge-driven approaches would be of enormous benefit to the pharmaceutical industry. A far more rigorous and thorough characterization of a new drug is possible when it is accompanied by a mechanistic understanding of the underlying biology, and of the drug’s impact on the activity of the biological networks from which the “biology” of the studied system is an emergent property.
Computational modeling and simulation approaches, which are much more in the mainstream of research in other fields such as physics and engineering, offer the potential to provide the kind of intellectual frameworks mentioned earlier, with which biological data can be organized and explained. Until quite recently, biological modeling had been severely limited in scope and resolution by the difficulties of describing complex biological systems in the language of traditional mathematics. New approaches to biological modeling, however, enable life scientists to create models of the kind of scope and resolution that make it possible to perform meaningful simulations of relatively macroscopic biological processes.
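To make the idea of mechanistic, simulation-based modeling a little more concrete, here is a minimal sketch in Python of the simplest possible example: a reversible ligand–receptor binding reaction (L + R ⇌ LR) integrated with a forward-Euler scheme. The rate constants and concentrations are invented purely for illustration, and real biological models would of course involve far larger networks and more sophisticated numerical methods.

```python
# A toy mechanistic model: reversible ligand-receptor binding, L + R <-> LR.
# All parameter values are illustrative placeholders, not real kinetic data.

def simulate_binding(l0=1.0, r0=0.5, kon=2.0, koff=0.1, dt=0.001, steps=10000):
    """Integrate the binding kinetics with forward Euler; return (L, R, LR)."""
    l, r, lr = l0, r0, 0.0
    for _ in range(steps):
        flux = kon * l * r - koff * lr  # net forward reaction flux
        l -= flux * dt
        r -= flux * dt
        lr += flux * dt
    return l, r, lr

if __name__ == "__main__":
    l, r, lr = simulate_binding()
    # The system relaxes towards equilibrium, where kon*L*R ~ koff*LR,
    # while total ligand (L + LR) and total receptor (R + LR) are conserved.
    print(f"L={l:.3f}  R={r:.3f}  LR={lr:.3f}")
```

Even a toy model like this illustrates the point of the knowledge-driven approach: the simulation encodes an explicit mechanistic hypothesis whose predictions can be checked against experimental data, rather than treating the data as an end in itself.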
There are impediments to the adoption of these new approaches. The great majority of senior executives in the pharmaceutical industry have had little or no exposure to the kind of systems-based thinking that underpins them, and often preside over departments and divisions that have invested heavily in data-driven approaches such as high-throughput screening. Nor is systems biology currently part of the main workflow for drug development in most pharmaceutical companies. There have been numerous attempts (and re-attempts) to integrate systems biology approaches into the drug development process, but any time the economics of the industry has required a little belt-tightening, most companies have tended to view systems biology departments from a kind of “last in, first out” perspective when looking for expenditure items that can be circled with the red pen.
Change is difficult, and the heavy levels of investment in the current approaches to drug discovery and development make it even harder for the industry to change course. The life science sector’s greatest resource, however, is its people, whose incredible talent and education are reason enough to be optimistic that the industry will, in time, find both the will to change its current course and the right people to spearhead its advance in a new direction.
The author, Gordon Webster, has spent his career working at the intersection of biology and computation and specializes in computational approaches to life science research and development.
© The Digital Biologist | All Rights Reserved