We are all scientists
Data visualization is cool. It’s also becoming ever more useful, as the vibrant online community of data visualizers (programmers, designers, artists, and statisticians — sometimes all in one person) grows and the tools to execute their visions improve.
Clark’s latest work shows much promise. He’s built four engines that visualize that giant pile of data known as Twitter. All four basically search words used in tweets, then look for relationships to other words or to other Tweeters. They function in almost real time.
“Twitter is an obvious data source for lots of text information,” says Clark. “It’s actually proven to be a great playground for testing out data visualization ideas.” Clark readily admits not all the visualizations are the product of his design genius. It’s his programming skills that allow him to build engines that drive the visualizations. “I spend a fair amount of time looking at what’s out there. I’ll take what someone did visually and use a different data source. Twitter Spectrum was based on things people search for on Google. Chris Harrison did interesting work that looks really great and I thought, I can do something like that that’s based on live data. So I brought it to Twitter.”
His tools are definitely at an early stage, but even now it’s easy to imagine where they could be taken.
Take TwitterVenn. You enter three search terms and the app returns a Venn diagram showing the frequency of use of each term and the frequency of overlap of the terms in a single tweet. As a bonus, it shows a small word map of the most common terms related to each search term; tweets per day for each term by itself and each combination of terms; and a recent tweet. I entered “apple, google, microsoft.” Here’s what I got:
Right away I see Apple tweets are dominating, not surprisingly. But notice the high frequency of unexpected words like “win,” “free,” and “capacitive” used with the term “apple.” That suggests marketing (spam?) of Apple products via Twitter, e.g. “Win a free iPad…”.
I was shocked at the relative infrequency of “google” tweets. In fact there were on average more tweets that included both “microsoft” and “google” than ones that just mentioned “google.”
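The counting behind a TwitterVenn-style diagram is simple to sketch. The snippet below is an illustration only: the sample tweets and the `venn_counts` helper are made up for this post (the real TwitterVenn queries the live Twitter search API), but it shows how the single, pairwise, and triple overlap counts fall out of one pass over the data.

```python
# Sketch of the counting behind a TwitterVenn-style diagram.
# The tweets and the helper below are illustrative, not TwitterVenn's code.
from itertools import combinations

def venn_counts(tweets, terms):
    """Count tweets containing each term, each pair, and the full triple."""
    counts = {}
    for r in range(1, len(terms) + 1):
        for combo in combinations(terms, r):
            # A tweet counts toward a region if it contains every term in it.
            counts[combo] = sum(
                all(t in tweet.lower() for t in combo) for tweet in tweets
            )
    return counts

tweets = [
    "Win a free iPad from Apple!",
    "google announces a new search feature",
    "microsoft and google square off in court",
    "apple, google and microsoft all report earnings",
]
print(venn_counts(tweets, ("apple", "google", "microsoft")))
```

Each dictionary entry maps straight onto one region of the Venn diagram: the singles are the circle sizes, the pairs are the two-way overlaps, and the triple is the center.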
Social media sites provide a way not only to map human networks but also to get a good idea of what the conversations are about. Here we can see not only how many tweets are discussing apple, microsoft, and google, but also the combinations of each.
Now, the really interesting question is how to get at the data, how to examine it in order to discover amazing things. This post examines ways to visually present the data.
Visuals will be among the key revolutionary approaches that allow us to take complex data and put it into terms we can understand. These are some nice beginning points.
Photo outside the Panton Arms pub in Cambridge, UK, licensed to the public under Creative Commons Attribution-ShareAlike by jwyg (Jonathan Gray).
Today marked the public announcement of a set of principles on how to treat data, from a legal context, in the sciences. Called the Panton Principles, they were negotiated over the summer between myself, Rufus Pollock, Cameron Neylon, and Peter Murray-Rust. If you’re too busy to read them directly, here’s the gist: publicly funded science data should be in the public domain, full stop.
Leading scientists say that the recent controversies surrounding climate research have damaged the image of science as a whole.
President of the US National Academy of Sciences, Ralph Cicerone, said scandals including the “climategate” e-mail row had eroded public trust in scientists.
He said that this crisis of public confidence should be a wake-up call for researchers, and that the world had now “entered an era in which people expected more transparency”.
“People expect us to do things more in the public light and we just have to get used to that,” he said. “Just as science itself improves and self-corrects, I think our processes have to improve and self-correct.”
It is important for federally funded research to be in the public domain. But universities hope to license the results of this research, and corporations are less likely to commercialize a product if they cannot lock up the IP. Both of these considerations must be accounted for if we want to translate basic research into therapies or products for people.
So, as the Principles seem to indicate, most of this open data release should happen AFTER publication, which would give the proper organizations time to make sure any IP issues are dealt with.
But what about unpublished data? What about old lab notebooks? The problem supposedly seen now has nothing to do with data that was published. It has to do with emails between scientists. Is this relevant data that should be made public for any government funded research?
Who determines which data are relevant or not?
And what about a researcher’s time? More time in front of the public and more time filling out FOI requests means less time doing research in the first place.
The scientific world is headed this way, but how will researchers adjust? There will have to be much better training in effectively communicating science to a much wider audience than most scientists are now comfortable with.
[Crossposted at A Man with a PhD]
4-dimensional live cell imaging has gone from being a rare technique used only by cutting-edge laboratories to a mainstream method in use everywhere. While more and more labs are becoming comfortable with the equipment and protocols needed to collect imaging data, performing detailed analyses is often problematic. The application of computational image processing is still far from routine. Researchers need to determine which measurements are necessary and sufficient to characterize a system and they need to find the appropriate tools to extract these data. In Computational Image Analysis of Cellular Dynamics: A Case Study Based on Particle Tracking, Gaudenz Danuser and Khuloud Jaqaman introduce the basic concepts that make the application of computational image processing to live cell imaging data successful. As one of the featured articles in December’s issue of Cold Spring Harbor Protocols, it is freely accessible for subscribers and non-subscribers alike.
The article is adapted from the new edition of Live Cell Imaging: A Laboratory Manual, now available from CSHL Press.
In my first year as a biochemistry graduate student, one of the classes dealt simply with the analytical technologies we would be using: NMR, UV spectroscopy, circular dichroism, fluorescence, and X-ray crystallography. These would help us understand the properties of isolated biological molecules.
This paper gives a great view of some of the new analytical approaches that examine entire living cells, not just isolated molecules. Now it looks like students will also have to get some firm understanding of image analysis. There will be some really interesting results from these sorts of technologies. The conclusions provide insights into the promise and the problems:
Computational image analysis is a complex yet increasingly central component of live cell imaging experiments. Much has to be done to make these techniques useful for cell biological investigation. First, algorithms must be transparent, not necessarily at the level of the code but in terms of their sensitivity to changing image quality and the effect that control parameters have on the output. Second, the design of imaging experiments must be tightly coupled to the design of the analysis software. All too often, images are taken without careful consideration of the subsequent analysis and are forwarded to the computer scientist to retrieve information from the images. To avoid these problems, communication must be initiated early on, and experiments must be designed with the appreciation that data acquisition and analysis are equivalent components. Third, software development and application require careful controls, as is customary for molecular cell biology experiments. This article provides a brief introduction to the ideas useful for implementing such controls. Hopefully, the cell biological literature will include a more extensive discussion of the measures taken to substantiate the validity of results from image analysis. On the other hand, manual image analysis should no longer be an option. As discussed in this article, manual analyses fall short in consistency and completeness, two essential criteria underlying the validity of a scientific model derived from image data.
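The authors’ third point, that software requires careful controls, can be illustrated with a toy experiment: validate a tracker on synthetic data with known ground truth before trusting it on real images. The sketch below is my own illustration, not the authors’ method; the “tracker” is a deliberately naive nearest-neighbor frame-to-frame linker, and all parameter values are invented.

```python
# Toy control experiment: run a tracking algorithm on synthetic particles
# with known identities, then measure how often the links are correct.
# This is an illustrative sketch, not the algorithm from the article.
import math
import random

def simulate(n_particles=20, n_frames=10, step=0.5, seed=42):
    """Random-walk particles in a 100x100 field; ground-truth order is the list index."""
    rng = random.Random(seed)
    pos = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(n_particles)]
    frames = [list(pos)]
    for _ in range(n_frames - 1):
        pos = [(x + rng.gauss(0, step), y + rng.gauss(0, step)) for x, y in pos]
        frames.append(list(pos))
    return frames

def link(frames):
    """Naive linking: match each particle in frame t to its nearest neighbor in t+1."""
    links = []
    for a, b in zip(frames, frames[1:]):
        links.append([min(range(len(b)), key=lambda j: math.dist(a[i], b[j]))
                      for i in range(len(a))])
    return links

# Control: with small steps and well-separated particles, the linker should
# recover nearly all of the true (identity) assignments.
frames = simulate()
links = link(frames)
accuracy = sum(l[i] == i for l in links for i in range(len(l))) / (
    len(links) * len(links[0]))
print(f"linking accuracy on synthetic data: {accuracy:.2%}")
```

The point of such a control is exactly what the conclusion argues: by sweeping the step size or particle density in the simulation, you learn where the algorithm breaks down before applying it to data where the truth is unknown.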
While the results can be amazing, there needs to be close collaboration between the different researchers involved, because very few people will have all the expertise necessary for success. This tight coupling of researchers with vastly different backgrounds and focuses (i.e., cell biology and bioinformatics) is a relatively new aspect of modern biological research.
Some labs may be slow to adopt this coupling, but those that can accomplish this type of collaboration will rapidly overtake those who take a slower course. As I mentioned below, large collaborations may become a big part of the published record as we move forward.
Success is intimidating. When we compete against someone who’s supposed to be better than us, we start to get nervous, and then we start to worry, and then we start to make stupid mistakes. That, at least, is the lesson of a new working paper by Jennifer Brown, a professor at the Kellogg school.
Brown demonstrated this psychological flaw by analyzing data from every player in every PGA tournament from 1999 to 2006. The reason she chose golf is that Tiger Woods is an undisputed superstar, the most intimidating competitor in modern sports. (In 2007, Golf Digest noted that Woods finished with 19.62 points in the World Golf Ranking, more than twice as many as his closest rival. This meant that “he had enough points to be both No. 1 and No. 2.”) Brown also notes that “golf is an excellent setting in which to examine tournament theory and superstars in rank-order events, since effort relates relatively directly to scores and performance measures are not confounded by team dynamics.” In other words, every golfer golfs alone.
Despite the individualistic nature of the sport, the presence of Woods in the tournament had a powerful effect. Interestingly, Brown found that playing against Woods resulted in significantly decreased performance. When the superstar entered a tournament, every other golfer took, on average, 0.8 more strokes. This effect was even more pronounced when Woods was playing well. Based on this data, Brown calculated that the superstar effect boosted Woods’ PGA earnings by nearly five million dollars.
One of the things I have seen in great athletes I have known is, for want of a better term, a lack of self-awareness. They just do; they don’t think about it too much.
For example, they did not worry as much about striking out as I did. I had a talented bat, which allowed me to get the bat on almost anything. But I was not disciplined enough. If I had two strikes I would go after anything, anywhere, because I did not want to strike out. I’d rather ground out by hitting a bad pitch than allow a called third strike. I was more worried about the humiliation of that one event than the larger strategic aspects.
I hated losing and would replay all the parts where if only I had done something different, then the result would have been a win. This was not something I really saw with the really great players. They just moved on, seemingly riding the vagaries of the sport with a wonderful adeptness I envied.
So it is nice to see that at the highest levels, when they really are competing with physical peers, the numbers indicate that they feel the same way. They think too much.
Now, another part of this is that once in a group of peers, such as the PGA, most people eventually find a relative plateau of effort and worry. That is, the pressure of the Tour selects for golfers that can at least deal with the pressure of the Tour itself. And many golfers, week to week, do not have to compete directly with Woods. They are in the middle, competing with the other golfers that they are used to seeing in the middle also. Familiarity means fewer worries, so they are not thinking too much and hurting their chances.
They find their own level and can be successful there.
It is when they have an extraordinary week, where they now move up into the elite group where overthinking can cause a problem. And, in some ways, being able to move away from the overthinking might allow them to stay in that elite group.
This sort of worry happens in many facets of life: the worry about our position, about whether we are really good enough. It happens almost any time we enter a truly novel situation.
I saw this first hand when I entered CalTech. The entire freshman year was pass/fail. Every class. Not only did this allow people to experiment and try a lot of different classes, but it also provided a modicum of time to find your level without having to compete directly with others for GPA.
It removed a lot of pressure and worry. Most students had never had to think about studying in High School. They just did it. Like great athletes.
Now they were competing with other peers in ways that were completely novel and worrisome. By removing the pressure of grades, CalTech sought to ameliorate these worries. Not all the way but it was one less thing. We were less likely to choke and more likely to calm down as the novelty wore off.
So, in that first year I found a balance. I saw that there were guys that never seemed to do any homework, yet got better scores than me (Yes, they still had grades on tests, essays and such. It just did not matter for the GPA). I found that no matter how hard I studied, I just was not going to pass them. And that was fine. I saw where I fell by doing the work I was capable of.
I recognized that I was not going to be one of the elites at CalTech. And I could be okay with that. Giving us that first year to find our place in the crowd was one of the most significant things CalTech did.
And then, being a smart guy, I figured out ways to take classes that played to my strengths and used the knowledge I gained to raise my GPA every year, so that I was able to graduate with honors.
But having that break the first year permitted me to gather myself; being dumped directly into competition with others might have broken me. Like a golfer who finds a particular course that plays to their strengths.
It is a lesson I have held my whole life. So many organizations are designed to break people, taking only those who survive and making them the leaders, champions, etc. But that is so wasteful because there are so many others who, if given a break, a chance to find their own level, could perform quite well.
One of my advisors once said he purposefully created an environment of competition among those who worked in his lab: ‘The cream will rise to the top.’ Well, the cream will always rise, but the process makes it curdled, and you waste so much that could have been so useful. Too many people dropped out of graduate programs, people who could have been very good scientists, simply because the system worked by breaking its members.
It was designed to cast out those who ‘choked’. CalTech’s approach was to support everyone until they could figure out where they needed to be. Just like the middling golfers. They might not win very often but they provide some really exciting golf. Because they really are very, very good when compared to the rest of us.
We need better processes in scientific education so that more than only the elite make it through. Just as not every lawyer needs to plead in front of a jury, not every graduate student needs to get a job in academia. There are so many places where a well-trained scientist is needed.
[Crossposted at A Man with a PhD]
Discussion forums built around academic journal articles haven’t seen much usage from readers. Lessons learned from the behavior of sports fans may provide some insight into the reasons why.
The scientific discussions that many researchers have found the most productive are often those held sitting around a table in an informal setting, like a pub. These discussions are often wide-ranging and very open. They often produce really innovative ideas, which get sketched on cocktail napkins.
Some of the best ideas in scientific history can be found on such paper napkins. Simply allowing comments on a paper does not in any way replicate this sort of social interaction. But there are already online approaches that do. We call them blogs.
Check out the scientific discussions at RealClimate, ResearchBlogging or even Pharyngula. Often the scientific discussions replicate what is seen in real life, with lots of open discussion about relevant scientific information.
If journals want to create participatory regions on their sites, they might do well to mimic these sorts of approaches. David Crotty at Cold Spring Harbor has such a site. Although it has not reached the popularity of RealClimate, it is a nice beginning.
I would think that research associations, with an already large audience of members, would have an easier time creating such a blog, one that starts by discussing specific papers but is open to a wide ranging, semi-directed conversation.
A story last week about the Obama administration committing more than $3 billion to smart grid initiatives caught my eye. It wasn’t really an unusual story. It seems like every day features a slew of stories where leaders commit billions to new geographies, technologies, or acquisitions to demonstrate how serious they are about innovation and growth.
Here’s the thing — these kinds of commitments paradoxically can make it harder for organizations to achieve their aim. In other words, the very act of making a serious financial commitment to solve a problem can make it harder to solve the problem.
Why can large commitments hamstring innovation?
First, they lead people to chase the known rather than the unknown. After all, if you are going to spend a large chunk of change, you had better be sure it is going after a large market. Otherwise it is next to impossible to justify the investment. But most growth comes from creating what doesn’t exist, not getting a piece of what already does. And relying on projections for tomorrow’s growth markets is no better, because they are notoriously flawed.
Big commitments also lead people to frame problems in technological terms. Innovators spend resources on path-breaking technologies that hold the tantalizing promise of transformation. But as my colleagues Mark Johnson and Josh Suskewicz have shown, the true path to transformation almost always comes from developing a distinct business model.
Finally, large investments lead innovators to shut off “emergent signals.” When you spend a lot, you lock in fixed assets that make it hard to dramatically shift strategy. What, for example, could Motorola do after it invested billions to launch dozens of satellites to support its Iridium service only to learn there just wasn’t a market for it? Painfully little. Early commitments predetermined the venture’s path, and when it turned out the first strategy was wrong — as it almost always is — the big commitment acted as an anchor that inhibited iteration.
One problem of too much money is that bad ideas get funding also. In fact, there are often many more incremental plans than revolutionary ones. They soak up a lot of time and money.
Plus they create a “We have to spend this money” mentality rather than “Where are we going to get the money to spend?”
Innovations often result in things that save money, but they are often riskier to start with. So how do we recognize them and get them the money they need, but not too much?
Encouraging people to work on ‘back burner’ projects in order to demonstrate the usefulness of the approach is one way. Careful vetting can help determine whether it can be moved to the front burner or not.
Part of any innovator’s dilemma is balancing the innovative spirit with sufficient funding to nurture that spirit, without overwhelming the innovator with the burden of too much cash.
by Nima Badiey
[Via The Scholarly Kitchen]
The NIH spends $12.2 million funding a social network for scientists. Is this any more likely to succeed than all the other recent failures?
In order to find an approach that works, researchers often have to fail a lot. That is a good thing. The faster we fail, the faster we find what works. So I am glad the NIH is funding this. While it may have little to be excited about right now, it may get us to a tool that will be useful.
As David mentions, the people quoted in the article seem to have an unusual idea of how researchers find collaborators.
A careful review of the literature to find a collaborator who has a history of publishing quality results in a field is “haphazard”, whereas placing a want-ad, or collaborating with one’s online chat buddies, is systematic? Yikes.
The NIH site, as described, also fails to recognize that researchers will only do this if it helps their workflow or provides them a tool that they have no other way to use. Facebook is really a place for people to make online connections with others, people one would have no other way to actually find.
But we can already find many of the people we would need to connect to. What will a scientific Facebook have that would make it worthwhile?
Most social networking tools initially provide something of great usefulness to the individual. Bookmarking services, like CiteULike, allow you to access and sync your references from any computer. Once someone begins using it for this purpose, the added uses from social networking (such as finding other sites through the bookmarks of others) become apparent.
For researchers to use such an online resource, it has to provide them new tools. Approaches, like the ones being used by Mendeley or Connotea, make managing references and papers easier. Dealing with papers and references can be a little tricky, making a good reference manager very useful.
Now, I use a specific application to accomplish this, which allows me to also insert references into papers, as well as keep track of new papers that are published. Having something similar online, allowing me access from any computer, might be useful, especially if it allowed access from anywhere, such as my iPhone while at a conference.
If enough people were using such an online application, then added Web 2.0 approaches could be used to enhance the tools. Perhaps this would supercharge the careful reviews that David mentions, allowing us to find things or people that we could not find otherwise.
There are still a lot of caveats in there, because I am not really convinced yet that having all my references online really helps me. So the Web 2.0 aspects do not really matter much.
People may have altruistic urges, the need to help the group. But researchers do not take up these tools because they want to help the scientific community. They take them up because they help the researcher get work done.
Nothing mentioned about the NIH site indicates that it has anything that I currently lack.
Show me how an online social networking tool will get my work done faster/better, in ways that I can not accomplish now. Those will be the sites that succeed.
[UPDATE: Here is post with more detail on the possibilities.]
A few days ago, via the following of links that is typical of a good search-and-browse session on the interwebs, I chanced upon a discussion about a presentation given by Justin Gehtland at RailsConf. The talk was entitled Small Things, Loosely Joined, Written Fast, and that title has been stuck in my head ever since. Funnily enough, what was in my head was not software and web architectures, because today I consider that particular approach almost essential to building good applications and scalable infrastructures, and most people in the community seem to understand that (not sure about scientific programmers though). What I started thinking about was whether that particular philosophy could be extended to the biopharma industry.
Without making direct analogies, but without suspending too much disbelief, one can imagine a world where drug development is done not in today’s model, but via a system of loosely coupled components that come together to combine cutting-edge research and products (drugs) in a model that scales better and does a more efficient job of building and sustaining those products. One of the tenets of the loose-coupling approach to scalable software and hardware is minimizing the risk of failure that often plagues more tightly coupled systems. In many ways the current blockbuster model is one where risk is not minimized, and a single failure along the path can result in the loss of millions of dollars. I have said in the past that by placing multiple smart bets, using distributed collaborations and novel mechanisms (like a knowledge and technology exchange), we can reboot the biopharma industry, reducing costs and developing better drugs more efficiently. I don’t want to trivialize the challenge, the numerous ways in which the process can go wrong, or the vagaries of biology, but resiliency is a key design goal of high-scale systems, and it is one we need to build into the drug development process: a system that chooses new paths when the original ones are blocked.
How could we build such a network model? I know folks like Stephen Friend have their ideas. Mine are ill formed, but data commons, distributed collaborations, and IP exchanges are key components, especially in an age where developing a drug is going to be a complex mix of disciplines, complex data sets, and continuous pharmacovigilance. I can’t help but point to Matt Wood’s Into the Wonderful, which touches on some of those concepts, albeit from a computational perspective.
Designing great and awesome tools for researchers to use will be critical for successful drug development. But there also has to be a cultural change in the researchers themselves and the organizations they inhabit.
One is that the tools have to work the way scientists need them to, not the way that is easiest for developers. This is actually pretty easy now, and many tools are really starting to reflect the world views of researchers in biotech, who, more often than expected, are somewhat technophobic.
This leads to the second area: researchers often need active facilitation in order to take up these sorts of tools. They need someone they trust to help convince them why they should change their workflows. Most will not just try something new unless they can see clear benefits.
Finally, there needs to be better training for collaborative projects. Most of our higher-education efforts for training researchers make them less collaborative. They are taught to get publications for themselves in order to gain tenure. Plus, with the competition seen in science, letting others know about your work before publication can often be harmful. Large labs with many people can quickly catch up to a smaller lab and its work.
Like in the business world, being first to accomplish something can be overtaken by a larger organization. So many researchers are trained to keep things close to the vest until they have drained as much reputation as possible from the work.
But many of the difficult problems today cannot be solved by even a large lab. They can require a huge effort by multiple collaborators. Thus, there is a movement toward figuring out how to deal with this and assign credit.
Nature just published a paper by the Polymath Project, an open-science approach to the solution of an important math problem. They addressed the problem of authorship and reputation:
The process raises questions about authorship: it is difficult to set a hard-and-fast bar for authorship without causing contention or discouraging participation. What credit should be given to contributors with just a single insightful contribution, or to a contributor who is prolific but not insightful? As a provisional solution, the project is signing papers with a group pseudonym, ‘DHJ Polymath’, and a link to the full working record. One advantage of Polymath-style collaborations is that because all contributions are out in the open, it is transparent what any given person contributed. If it is necessary to assess the achievements of a Polymath contributor, then this may be done primarily through letters of recommendation, as is done already in particle physics, where papers can have hundreds of authors.
We need to come up with better ways to design useful metrics for those that contribute to such large projects. Researchers need to know they will get credit for their work. As we do this, we need to also help train them for better collaborative work, because that is probably what most of them will be doing.
[Via Xconomy ]
Yesterday, we provided a rundown of the six hallmarks of a successful biotech company, according to Christopher Henney, the biotech pioneer who co-founded three of Seattle’s top biotechs—Immunex, Icos, and Dendreon. He made his remarks to an audience of about 100 investing professionals at the CFA Society meeting on Oct. 8 in Seattle.
Today, we follow up with the five red flags Henney advised investors to watch for when they evaluate biotech investments. Here’s what he singled out as warning signs:
—Top management without a scientific background. It’s not impossible for a biotech to succeed with a non-scientist at the helm, Henney said, but a smart investor must ask this non-scientific manager where the science comes from at the company. “The good answer would be, ‘It comes from my team of wonderful scientists who I recruited.’” A bad answer would be something like, “It comes from my scientific advisory board, which has two Nobel Laureates.” Henney added, “If you need to make an appointment to meet the guy who’s bringing you your science, then you don’t have much of a business.”
Henney wanted to make sure he wasn’t making a broadside attack against all non-scientific managers. One of his favorite biotech CEOs isn’t a scientist, but he adds, “You wouldn’t know it from talking to him.”
—No worries. An investor should ask what the management loses sleep over. “If they say, ‘I sleep like a baby,’ that’s a big red flag,” Henney said. All companies have their problems, and top management had better know them inside out.
—Hard-to-understand science. Ask the management to explain the science of their product in detail. “If they say something like the science is hard to explain, they can’t really explain it to you, that’s a big red flag.”
—Geographic remoteness. This provides some insight into Henney’s thinking on why two of the companies for whom he serves as chairman—Oncothyreon and AVI Biopharma—recently moved their headquarters from Edmonton, Canada, and Portland, OR, respectively, to the Seattle area. “You need a quorum of players,” Henney said. “You need access to talent, you need to be able to recruit people.” Seattle has more talent than the other places, and an ability to recruit more people, he said.
—Too many VCs. The board should be loaded with people that have experience running companies. “You shouldn’t have a board full of venture capitalists,” Henney said.
—Family members in key roles. “These aren’t family businesses. If you see a board dominated by siblings, or a couple of siblings in key management roles, I’d run, not walk.”
I have always had a soft spot in my heart for Chris Henney. Those of us at Immunex in the early days all have our hilarious Chris stories, often involving his confusion with our very weird phone system.
But when I first started at Immunex, it took a month to close on our house. Because of that, I ended up staying in a fairly cheap hotel out in Issaquah (perhaps 15 miles from downtown Seattle). This was an unexpected expense of moving and caused a bit of hardship to cover before my first paychecks arrived.
So I talked with Chris and he got Immunex to pay for some of the cost as part of my moving expenses. It told me a lot about the culture at the company when a newly hired scientist could walk up to one of the founders and ask for money. And get it.
Most of his red flags deal with an inability to communicate the scientific reasons for the company’s existence. No science in management’s background, no worries, and hard-to-explain science all indicate a fatal lack of understanding.
If you do not understand the fundamental science in a deep way, you cannot tell the story. And an inability to tell the story in a way that resonates will make it impossible to shake any money loose.
This is not about hype, which is a snake-oil salesman’s approach to selling anything. In fact, hype indicates a total misunderstanding of the science, because most biotechs are founded more on hope and strong egos than on anything really solid. That is, the company must often begin developing the science for commercialization before complete knowledge of the system exists.
This means that the path to a commercial product will be littered with false starts and the company’s management had better understand the science enough to surmount these roadblocks.
I’m now going to tell a story. Some of the details may be a little off, but it illustrates why having a deep understanding of the science can be so important.
For example, Immunex was started, in large part, on the hope that a molecule called interleukin-2 would be very important for medical protocols. See, IL-2 was also known as T-cell growth factor and had been shown, in some of Henney and Gillis’s own work, to be absolutely required for the growth of T-cells in culture.
Now, T-cells are incredibly important in fighting off a whole slew of diseases, including cancer. So being able to manipulate T-cell levels seemed like a very good thing to be able to do. So, let’s clone IL-2 and produce it in large amounts. Then we can sell it as a therapeutic for a wide range of illnesses.
Turns out IL-2 is not as useful as originally hoped. Not to say it does not have important uses, even commercial ones. But at the time, it seemed like a critical molecule, one that would be core to our repertoire of tools.
Yet when mice are bred with no functional IL-2, they actually appear relatively normal. That is, the complete lack of IL-2 is not fatal to the mice. There are some immunological irregularities that have led to intriguing observations, but IL-2 is not absolutely required for a viable mouse. The mouse and its immune system find some other way to deal with T-cells.
Luckily for Immunex, our founders and the scientists they recruited had a deep understanding of the science, and this allowed us to move quite quickly into other aspects of the immune system as we worked on IL-2. The same technologies that could clone IL-2 could be used to clone a wide range of other immunoregulatory proteins.
By the time it could be shown that IL-2, while an important molecule, would not be the huge commercial product originally envisioned, we had a handful of other proteins cloned which presented even greater possible riches than IL-2. This deep understanding of the science eventually led to Enbrel.
Thus, a critical reason to have a deep understanding of the science is that no research venture, especially a commercial one, goes according to plan. But if you understand the science, you can often be adaptable enough to find a successful solution.
Those who do not fundamentally understand the science will simply be stuck when the inevitable roadblock appears. And then everyone, including the investors, is stuck with them.
Science: Retrovirus Detected In Patients With Chronic Fatigue Syndrome-But Does It Cause the Disease?
As many as two-thirds of patients with chronic fatigue syndrome carry an infectious retrovirus in their blood cells, according to new research published in Science. But the study’s authors say it’s not clear whether the virus is the main cause or a co-conspirator in the disorder.
First, the aspect of this story that interests me, since it illustrates how making connections can result in innovative science: this work discusses the correlation of a specific virus with CFS. The virus was first isolated in humans just a few years ago as a possible cause of a particularly virulent form of prostate cancer. How did these researchers make the connection between a virus from prostate cancer and CFS?
It turns out the virus-positive prostate cancers demonstrate an alteration in an anti-viral protein, RNase L. The CFS researchers happened to know that a similar defect was seen in CFS patients, so they decided to see whether the virus was present in their patients as well. They had no reason to expect this to be the case, but it was one of those connections that make scientists go ‘Hmmm.’
The data sure are exciting. The virus is xenotropic murine leukemia virus-related virus (XMRV), a retrovirus that can incorporate itself into the cellular DNA of infected people. Two-thirds of the people in the CFS cohort had detectable virus, while less than 4% of the control group did. In addition, an even higher percentage of the CFS cohort had antibodies to the virus, demonstrating that they had been infected. The researchers also showed that virus in the plasma of infected people remained infectious.
There is still a lot of work to be done to demonstrate that this virus is the cause of the disease. But we have made some real progress simply because of a seemingly random fact presented in a piece of research that ostensibly had no connection at all to CFS.
Some of the best work comes from making a connection to a bit of data that may appear to be inconsequential. Good social networks permit these bits of data to get to people who can actually do something with them.