A very big challenge for biopharma

Loose coupling and biopharma:
[Via business|bytes|genes|molecules]

A few days ago, via the kind of link-following that is typical of a good search-and-browse session on the interwebs, I chanced upon a discussion about a presentation given by Justin Gehtland at RailsConf. The talk was entitled Small Things, Loosely Joined, Written Fast, and that title has been stuck in my head ever since. Funnily enough, what was in my head was not software and web architectures, because today I consider that particular approach almost essential to building good applications and scalable infrastructures, and most people in the community seem to understand that (not sure about scientific programmers, though). What I started thinking about was whether that philosophy could be extended to the biopharma industry.

Without making direct analogies, but without suspending too much disbelief, one can imagine a world where drug development is done not in today’s model, but via a system of loosely coupled components that come together to combine cutting-edge research and products (drugs) in a model that scales better and does a more efficient job of building and sustaining those products. One of the tenets of the loose coupling approach to scalable software and hardware is minimizing the risk of failure that often plagues more tightly coupled systems. In many ways the current blockbuster model is one where risk is not minimized, and a single failure along the path can result in the loss of millions of dollars. I have said in the past that by placing multiple smart bets and using distributed collaborations and novel mechanisms (like a knowledge and technology exchange), we can reboot the biopharma industry, reducing costs and developing better drugs more efficiently. I don’t want to trivialize the challenge, the numerous ways in which the process can go wrong, or the vagaries of biology, but resiliency is a key design goal of high-scale systems, and it is one we need to build into the drug development process: a system that chooses new paths when the original ones are blocked.
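To make the software side of the analogy concrete, here is a minimal Python sketch of loose coupling with fallback paths. Everything in it (the component names, the PathBlocked exception, the fake screening functions) is hypothetical and invented for illustration; the point is only the design goal of a system that reroutes when one path fails rather than losing the whole effort.

```python
"""A minimal sketch of loose coupling with fallback paths.
All names here are hypothetical, purely for illustration."""

from typing import Callable, Sequence


class PathBlocked(Exception):
    """Raised when one approach cannot deliver a result."""


def run_with_fallback(paths: Sequence[Callable[[str], str]], target: str) -> str:
    """Try each loosely coupled component in turn. A failure in one
    component does not sink the whole system; we just pick a new path."""
    for attempt in paths:
        try:
            return attempt(target)
        except PathBlocked:
            continue  # this path is blocked; try the next one
    raise PathBlocked(f"all approaches to {target!r} failed")


# Two interchangeable "small things" that honour the same interface.
def in_house_screen(target: str) -> str:
    raise PathBlocked("internal assay failed")  # simulate a dead end


def partner_lab_screen(target: str) -> str:
    return f"lead compound for {target} via partner lab"


if __name__ == "__main__":
    # The in-house path fails, so the system routes around it.
    print(run_with_fallback([in_house_screen, partner_lab_screen], "kinase X"))
```

Because each component only has to honour a small shared interface, swapping in a new collaborator or approach costs nothing to the rest of the system, which is the property a tightly coupled blockbuster pipeline lacks.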

How could we build such a network model? I know folks like Stephen Friend have their ideas. Mine are ill-formed, but data commons, distributed collaborations, and IP exchanges are key components, especially in an age where developing a drug is going to be a complex mix of disciplines, complex data sets, and continuous pharmacovigilance. I can’t help but point to Matt Wood’s Into the Wonderful, which points to some of those concepts, albeit from a computational perspective.


Designing great tools for researchers to use will be critical for successful drug development. But there also has to be a cultural change in the researchers themselves and in the organizations they inhabit. Three things stand out.

One is that the tools have to work the way scientists need them to, not the way that is most convenient for developers. This is actually pretty easy now, and many tools are really starting to reflect the worldviews of researchers in biotech, who, more often than expected, are somewhat technophobic.

This leads to the second area: researchers often need active facilitation in order to take up these sorts of tools. They need someone they trust to help convince them why they should change their workflows. Most will not try something new unless they can see clear benefits.

Finally, there is better training for collaborative projects. Most of our higher-education efforts for training researchers make them less collaborative. They are taught to get publications for themselves in order to gain tenure. Plus, with the competition seen in science, letting others know about your work before publication can often be harmful. A large lab with many people can quickly catch up to a smaller lab and its work.

As in the business world, the first to accomplish something can be overtaken by a larger organization. So many researchers are trained to keep things close to the vest until they have extracted as much reputation as possible from the work.

But many of the difficult problems today cannot be solved by even a large lab. They can require a huge effort by multiple collaborators. Thus, there is a movement towards figuring out how to manage such collaborations and assign credit.

Nature just published a paper by the Polymath Project, an open science approach to solving an important problem in mathematics. They addressed the problem of authorship and reputation:

The process raises questions about authorship: it is difficult to set a hard-and-fast bar for authorship without causing contention or discouraging participation. What credit should be given to contributors with just a single insightful contribution, or to a contributor who is prolific but not insightful? As a provisional solution, the project is signing papers with a group pseudonym, ‘DHJ Polymath’, and a link to the full working record. One advantage of Polymath-style collaborations is that because all contributions are out in the open, it is transparent what any given person contributed. If it is necessary to assess the achievements of a Polymath contributor, then this may be done primarily through letters of recommendation, as is done already in particle physics, where papers can have hundreds of authors.

We need to design useful metrics for those who contribute to such large projects. Researchers need to know they will get credit for their work. As we do this, we also need to train them for better collaborative work, because that is probably what most of them will be doing.
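As a toy illustration of what such a metric might look like, here is a Python sketch that tallies per-contributor credit from an open working record of the kind the Polymath quote describes, where every contribution is public. The log format, contribution kinds, and weighting scheme are all invented for illustration; this is a sketch of the idea, not a proposed standard.

```python
"""A toy contribution metric over a hypothetical open working record.
The log format and weights are assumptions, invented for illustration."""

from collections import defaultdict

# Hypothetical open working record: (contributor, kind, endorsements).
log = [
    ("alice", "proof_step", 5),
    ("bob", "proof_step", 1),
    ("bob", "comment", 0),
    ("carol", "key_insight", 12),
]

# Invented weights: a single insightful contribution can outscore many
# routine ones, echoing the prolific-versus-insightful question above.
weights = {"comment": 1, "proof_step": 3, "key_insight": 10}

def credit(entries):
    """Sum weighted credit per contributor, scaled by how many
    collaborators endorsed each contribution."""
    scores = defaultdict(float)
    for who, kind, endorsements in entries:
        scores[who] += weights[kind] * (1 + endorsements)
    return dict(scores)

# Rank contributors by accumulated credit.
print(sorted(credit(log).items(), key=lambda kv: -kv[1]))
```

Any real metric would need far more care (gaming, normalization across fields, the recommendation-letter route the Polymath authors suggest), but the transparency of an open record is what makes even a simple tally possible.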
