The synthetic organization, part 1

pencil by Paul Worthington

I had dinner last night with my friend Mark Minie, who has a tremendous range of experience in immunology, high tech and biotech. While these are always wide-ranging discussions, last night’s had some special resonance (e.g. the WBBA, indirect costs at the UW, the paradigm-shifting activities of Craig Venter, the boundary-moving work of WALL-E).

The confluence of these topics, along with a host of others, led me to reflect on the changes that will take place in organizations that are supposed to support and germinate creativity.

Industrial and academic approaches towards teaching, learning and understanding have been analytical for most of the last century – breaking complex processes down into simpler units and then working to understand them. These approaches have been very successful, producing many of the scientific advances we now enjoy.

But we are increasingly entering a time when synthesis becomes paramount (its Greek roots, syn- “together” and thesis “a placing,” describe just what it is: putting pieces together). Modern organizational approaches have not been as successful here, mainly because synthesis requires a social aspect that is often not supported in academia, or in much of industry, at least the biotech industry.

Analysis can be accomplished by a single lecturer talking to a large group, breaking down complex knowledge into bite-size bits for the students. Synthesis is almost the reverse: the bite-size portions are built back up and used by the group to create knowledge. Few academic organizations can accomplish this easily.

Venter, throughout most of his career, has taken synthetic approaches to solving scientific questions. Others were isolating single DNA fragments from entire genomes one at a time (some with genes and some without), laboriously determining where on the chromosome each fragment resided, then working out its exact sequence, and then trying to determine just what that sequence did, if anything.

Venter cloned large numbers of gene fragments directly, rapidly sequenced them, then used computer programs to put all the pieces together and went from there.

He built up his knowledge from many different, small pieces in order to understand the whole. His shotgun approach revolutionized the manner in which the human genome was characterized and still has ramifications today.
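
The computational heart of the shotgun approach is finding overlaps between reads and merging them. Here is a toy greedy assembler, just to make the idea concrete (the read set, the minimum-overlap cutoff and the function names are all my own inventions; real assemblers are vastly more sophisticated):

```python
# Toy greedy shotgun assembly: repeatedly merge the pair of reads
# with the longest suffix/prefix overlap. Illustrative only.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    for size in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:size]):
            return size
    return 0

def assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j and overlap(a, b) > best_len:
                    best_len, best_i, best_j = overlap(a, b), i, j
        if best_len == 0:            # no overlaps left; give up and concatenate
            return "".join(reads)
        merged = reads[best_i] + reads[best_j][best_len:]
        reads = [r for k, r in enumerate(reads) if k not in (best_i, best_j)]
        reads.append(merged)
    return reads[0]

# Four overlapping "reads" from a made-up 19-base sequence:
print(assemble(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]))
# -> ATTAGACCTGCCGGAATAC
```

Real genomes add repeats, sequencing errors and sheer scale, which is why the computer programs mattered so much.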

But his approach, putting the pieces together to determine the whole, was met with a lot of controversy. Synthesis was simply not an accepted model for doing biological research. His work, and that of many others who understood the importance of this approach, was disruptive and paradigm-shifting.

Most organizations are great at analysis but few seem to do synthesis well. Yet many of the problems we face today (e.g. climate change, energy use, cancer therapies) are synthetic in nature (pun intended). How does one create an environment that fosters synthesis?

Pixar is an example. Harvard Business Online has a great article detailing just how different Pixar is from every other movie studio. It has been able to maintain its innovative culture and keep its creative talent from leaving, all while delivering nine hit movies in a row.

Read about Pixar and I will have more later…


Remembering is not enough

teacher by foundphotoslj
Why is genetics so difficult for students to learn?:
[Via Gobbledygook]

This Sunday morning at the International Congress of Genetics, Tony Griffiths gave an interesting presentation with the above title. He identified 12 possible reasons why students have problems learning genetics. His main argument: students should learn concepts and principles and apply them creatively in novel situations (the research mode). Instead, too many details are often crammed into seminars and textbooks. In other words, students often stay at the lowest level of Bloom’s taxonomy, the remembering of knowledge. The highest level, the creation of new knowledge, is seldom reached, although these skills are of course critical for a successful researcher.

Andrew Moore from EMBO talked about the teaching of genetics in the classroom. He was concerned that a survey found that molecular evolution (or molecular phylogeny) was taught in no more than 30% of European classrooms. He gave some examples of how principles of genetics can be integrated into high school teaching.

Wolfgang Nellen explained his successful Science Bridge project of teaching genetics in the classroom, using biology students as teachers. Interestingly, they have not only taught high school students, but also journalists and – priests (German language link here). Politicians were the only group of people that weren’t interested in his offer of a basic science course.

Teaching is a very specific mode of transferring information, one that has its own paths. It is an attempt to diffuse a lot of information throughout an ad hoc community.

But it is often decoupled from any social networking, usually having just an authority figure disperse data, with little in the way of conversations. There is little analysis and even less synthesis, just Remembering what is required for the next test.

Bloom’s taxonomy is a nice measure of an individual’s progress through learning but it is orthogonal to the learning a community undergoes. Most instruction today is geared towards making the individual attain the highest part of the pyramid.

How does this model change in a world where social networking skills may be more important? What happens to Remembering when Google exists? When information can be so easily retrieved, grading for Remembering seems foolish.

The methods we use to teach at most centers of higher education are, at heart, based on models first developed over a century ago. It may be that they will have to be greatly altered before the real potential of online social networks can be realized.


Adobe helps

adobe by annais
To Serif or Not To Serif? Regarding Online Readability:
[Via The Acrobat.com Blog]

There are myriad different opinions on what the best conditions are for reading text on a screen. Debates rage about whether or not to use serif fonts and how long a line of text should be. A surprisingly sensitive issue, and possibly without a clear resolution.

Here we’ve tried to delineate a few of the more widely accepted tips on how to optimize readability. Although they can be forsaken in the name of personal style, they’re generally considered the most conducive to easy reading. Here are a few key points plucked from various takes on the subject:

Regardless of medium, high contrast between type color and page color always contributes to optimal reading conditions. Not surprisingly, readers show a strong preference for black text on a white background (though it’s not strictly necessary; if you simply loathe the combination of white and black, any reasonably contrasting color duo will do). When in doubt, check your color scheme on Snook’s Color Contrast Check.
[More]
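
As an aside, the contrast advice can be made quantitative. A minimal sketch of the contrast-ratio arithmetic from the W3C’s accessibility guidelines, which is the kind of calculation a checker like Snook’s performs (the example colors are just placeholders):

```python
# WCAG-style contrast ratio between two sRGB colors (0-255 per channel).

def relative_luminance(rgb):
    """Perceived brightness of an sRGB color, per the WCAG formula."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    brighter, darker = sorted((relative_luminance(fg),
                               relative_luminance(bg)), reverse=True)
    return (brighter + 0.05) / (darker + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))        # 21.0, the maximum
print(contrast_ratio((119, 119, 119), (255, 255, 255)))  # ~4.5, a common floor for body text
```

Black on white scores the maximum 21:1, which is one reason readers prefer it.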

The Web is a different medium than paper or slides. While some things, such as contrast, remain the same, presenting text on the Web has different needs.

Its lower resolution, which allows pages to download faster, makes text harder to read. So size and font choice are important.

Remember that the user often has the ability to override the font choices that are made on the page, either by increasing the size or changing the font. So the choice of font is not as important as its presentation and of course its content.

Line spacing is another important aspect to understand. People read online from farther distances than they read a book. Poor text choices, in size, color or contrast, can make it very difficult for the content of the page to be assimilated.


Medicine 2.0

x ray by D’Arcy Norman
Why Health or Medicine 2.0? [ScienceRoll]:
[Via The DNA Network]

While medicine is usually at the forefront of new technology for diagnosis and treatment, the patient-doctor interface has not followed. Perhaps that will change soon.

Some interesting statistics have recently been published. According to Pharma 2.0:

99% of physicians are online for personal or professional purposes
85% of offices have broadband
83% consider the Internet essential to their practice

So doctors are online.

At The Deloitte Center, you will find even more details about the web usage of health consumers. Yes, there will be many more patients who seek health-related information on the web and who want to communicate with their doctors via e-mail or Skype.

And patients are ready.

We have tools to work with:

And we have concepts.

So it will happen, because patients and doctors need to have contact. The question is how long it will take.


Using the net to fight disease

mosquito by tanakawho
Crawling the Internet to track infectious disease outbreaks:
[Via EurekAlert! – Infectious and Emerging Diseases]

(Public Library of Science) Could Internet discussion forums, listservs and online news outlets be an informative source of information on disease outbreaks? A team of researchers from Children’s Hospital Boston and Harvard Medical School thinks so, and it has launched a real-time, automated data-gathering system called HealthMap to gather, organize and disseminate this online intelligence. They describe their project in this week’s PLoS Medicine.
[More]

The site itself is an interesting mashup of online news sites with Google Maps. It does demonstrate how a clever mind can use the internet to gain useful information. While there may be some bias in the types of diseases being reported, this sort of bottom-up approach could have some real uses.

Healthmap.org: It Will Make You More Paranoid Than the Weather Channel [Mike the Mad Biologist]:
[Via ScienceBlogs : Combined Feed]

One of the things I find fascinating about the Weather Channel is that after watching it for a while, you actually start to worry about that cold front moving through some other part of the country. You become quite paranoid about things that won’t affect you. Well, I’ve got an even better way to drive yourself nuts about scary things that won’t affect you: HealthMap.org.
[More]

This is done by monitoring published reports from around the world, so it is not really a substitute for hard epidemiology (which can take some time), but it is a nice adjunct. Following the Salmonella outbreak, for instance, really permits the rapid gathering of a lot of information.

New tool anyone can use to track disease outbreaks:
[Via Effect Measure]

While CDC and FDA struggle to figure out where the Salmonella saintpaul in a large multistate outbreak is coming from, they are not being forthcoming about where it has gone. We know the case total but not much about who is getting sick, where and when. There is no good scientific or privacy reason not to release more information. It’s just the usual tendency to keep control. But some of the information is “out there” anyway, in news reports and other sources of information.

People interested in disease outbreaks discovered years ago that this information could be harvested and disseminated to the public health community, and in 2003 this informal system provided the first evidence of the SARS outbreak in Guangdong, China, weeks before there was any official confirmation. Many of us subscribe to and use the no-cost ProMed Mail service, a pioneering effort by volunteer experts to collect information on disease outbreaks in people and animals worldwide, using official and unofficial sources.

The ProMed concept has now been taken several steps further by a team of disease surveillance experts at Boston’s Children’s Hospital. They use automated internet data-mining, with some additional curating by human experts, to provide a web-based breaking-news disease reporting system organized by disease agent, time and geographic location, all displayed on a map of the world. The system is free and without registration or subscription barriers. It started in prototype in 2006 and now gets about 20,000 unique visits a month, mainly from the public health community (for comparison, this blog gets 30,000 to 40,000 visits a month). The system, called HealthMap, is pretty impressive. It was just highlighted on the Wired Blog so its traffic is going to increase.
[More]

It will be interesting to see how organizations such as the CDC or FDA respond to this sort of analysis. As a news aggregator, though, this is pretty useful.

Infectious disease surveillance 2.0: Crawling the Net to detect outbreaks:
[Via LISNews – Librarian And Information Science News]

“July 8, 2008 (Computerworld) While recent outbreaks of salmonella in the U.S. have made headlines, an automated real-time system that scours the Web for information about disease outbreaks spied early reports in New Mexico about suspicious gastrointestinal illnesses days before the U.S. Centers for Diseases Control and Prevention (CDC) issued an official report on the problem.
The system, called HealthMap, is a free data-mining tool that extracts, categorizes, filters and links 20,000 Web-based data sources such as news sites, blogs, e-mail lists and chat rooms to monitor emerging public health issues. HealthMap, which is profiled in the July issue of the journal Public Library of Science Medicine and is open to anyone, was developed in late 2006 by John Brownstein and Clark Freifeld. Both men work in the informatics program at Children’s Hospital Boston.”
Read the full article in Computerworld at:
Infectious disease surveillance 2.0: Crawling the Net to detect outbreaks


It is kind of fun to check out what is happening around the world. There are nice pop-ups that can provide further information. I will have to play with this some more.
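
The basic idea is easy to approximate. Here is a toy sketch of the kind of scan HealthMap automates, pulling an RSS news feed and flagging headlines that mention a disease (the feed URL and the keyword list are stand-ins of mine, not anything HealthMap actually uses, and the real system adds categorization, de-duplication and human curation):

```python
# Toy outbreak scanner: flag news headlines that mention a disease.
import urllib.request
import xml.etree.ElementTree as ET

DISEASES = ["salmonella", "cholera", "dengue", "influenza", "sars"]  # stand-in list

def scan_feed(url):
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):                 # RSS 2.0 <item> elements
        title = (item.findtext("title") or "").strip()
        hits = [d for d in DISEASES if d in title.lower()]
        if hits:
            print(", ".join(hits) + ": " + title)

scan_feed("https://example.org/health-news.rss")   # hypothetical feed URL
```

Mapping each hit, as HealthMap does, would then just mean geocoding the place names in the story and dropping pins on Google Maps.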


Paper discussions

conversations by b_d_solis
Reputation Matters:
[Via The Scholarly Kitchen]

A new (and flawed) study reveals that reputation matters. In fact, it’s core to scientific expression.
[More]

While the study may not be definitive, the ability to have a conversation on it helps tremendously. Research usually does not progress in a straight, ascending line. It switches back and forth, sometimes having to retrace its steps in order to find the right path.

Being able to discuss the results of a paper in public, what it did right and what it did wrong, is not something that has usually occurred. Now it can. I expect there to be more and more such discussions as time goes on.


Oxford and OA

Oxford by Dimitry B
Oxford’s Open Book on Open Access:
[Via The Scholarly Kitchen]

Claire Bird provides a refreshingly agnostic and evidence-based approach to open access experiments with Oxford University Press.

University presses are also seeing pressure from Open Access approaches. There will be a period of turmoil as business models readjust. Oxford University Press nicely articulates some of the hurdles to overcome.


Browsing clouds, not papers

Commentary: Summarizing papers as word clouds:
[Via Buried Treasure]

The web provides entirely new avenues for disseminating information and for visualizing it. It can be very time consuming to browse through the literature, even though the most creative research often comes from the intervention of serendipity (the Wikipedia article lists many examples).

Lars discusses some interesting numbers and comes up with an intriguing solution.

For use in presentations on literature mining, I did a back-of-the-envelope calculation of how much time I would be able to spend on each new biomedical paper that is published. Assuming that all papers were indexed in PubMed (which they are not) and that I could read papers 24 hours per day all year around (which I cannot), the result is that I could allocate approximately 50 seconds per paper. This nicely illustrates the point that no one can keep up with the complete biomedical literature.

When I discovered Wordle, which can turn any text into a beautiful word cloud, I thus wondered if this visualization method would be useful for summarizing a complete paper as a single figure. To test this, I extracted the complete text of three papers that I coauthored in the NAR database issue 2008. Submitting these to Wordle resulted in the three figures below (click for larger versions):


These sorts of rich figures could be very useful in a scientific setting, where being able to rapidly filter a large number of articles is important.

However, he does notice that this approach may not work for all articles, unless there are changes made, either in how the articles are written or in the software that creates the visuals.

…I think a large part of the problem is the splitting of multiwords; for example, “cell cycle” becomes two separate terms “cell” and “cycle”. Another problem is that words from different sections of the paper are mixed, which blurs the messages. These two issues could be solved by 1) detecting multiwords and considering them as single tokens, and 2) sorting the terms according to where in the paper they are mainly used.

And it would be easy to adapt the visuals to scientific needs and then be able to track if they are actually useful in practice.
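
The first fix is straightforward to prototype: treat frequent bigrams as single tokens before counting. A minimal sketch (the threshold of three occurrences is an arbitrary choice of mine, not anything from Wordle or Lars’ post):

```python
# Count words for a cloud, merging frequent bigrams like "cell cycle"
# into single tokens so they are not split apart.
import re
from collections import Counter

def cloud_counts(text, min_bigram=3):
    words = re.findall(r"[a-z]+", text.lower())
    bigram_freq = Counter(zip(words, words[1:]))
    counts = Counter()
    i = 0
    while i < len(words):
        pair = tuple(words[i:i + 2])
        if len(pair) == 2 and bigram_freq[pair] >= min_bigram:
            counts[" ".join(pair)] += 1
            i += 2                       # consume both halves of the multiword
        else:
            counts[words[i]] += 1
            i += 1
    return counts

text = ("the cell cycle regulates growth and cell cycle genes "
        "in turn control the cell cycle")
print(cloud_counts(text).most_common(3))
# -> [('cell cycle', 3), ('the', 2), ...]
```

Sorting terms by the section they come from, his second suggestion, would only require counting each section separately.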


Just a taste

atomium by txd
What Social Media Does Best:
[Via chrisbrogan.com]
Before Chris starts his list he has this to say:

If you’re still looking for the best ways to explain to senior management or your team or your coworkers or your spouse what it is that social media does, why it’s different than the old way people used to use computers and the web, why people are giving two hoots about it, here are some thoughts to start out the conversation. I look at this mostly from a business perspective, but I suspect you’ll find these apply to nonprofits and other organizations as well. Further, as I’m fond of saying, social media isn’t relegated to the marketing and PR teams. It’s a bunch of tools that can be used throughout businesses, in different forms. Think on this.

I’m not going to list all of Chris’ points but here are a few to whet your appetite.

Blogs allow chronological organization of thoughts, status, ideas. This means more permanence than emails.

The organizational aspects of blogs are one of their most overlooked features.

Social networks encourage collaboration, can replace intranets and corporate directories, and can promote non-email conversation channels.

Email is not optimized for the sorts of information transfer it is used for. It also makes it impossible to really know just who should see the information. Social networks open this up and make it highly likely that the right information gets to the right people.

Social networks can amass like-minded people around shared interests with little external force, no organizational center, and a group sense of what is important and what comes next.

Ad hoc group creation is one of the best aspects of social networks. Rapid dispersal of information amongst a small, focused group can occur independently of the need for everyone to occupy the same space at the same time, as happens in meetings.

Blogs and wikis encourage conversations, sharing, creation.

Facilitating conversations increases information flow, speeding up the creativity cycle.

Social networks are full of prospecting and lead generation information for sales and marketing.

This applies to a much wider group than just sales and marketing because at some level, everyone at an innovative organization needs to look for leads.

Blogs allow you to speak your mind, and let the rest of the world know your thought processes and mindsets.

The personal nature of many social media tools helps enhance the ability of a group to innovate rapidly, without the feeling of a restricting hierarchy that can diminish creativity.

Tagging and sharing and all the other activities common on the social Web mean that information gets passed around much faster.

Web 2.0 approaches make it much easier to find information, even though there is more of it.

Innovation works much faster in a social software environment, open source or otherwise.

The diffusion of innovation throughout an organization is really dependent on the social network of that group: how well connected it is, how people communicate, etc. Social media allows innovation to spread much more rapidly, increasing the rate of diffusion and allowing the creativity cycle to crank much faster.

People feel heard.

This is a big one. Studies have shown that people become most upset when they feel their viewpoint has not been heard and they do not understand the rationale for a decision. Having a chance to be a part of the discussion can make a big difference, even if they do not agree with the final decision.


Now we have article 2.0

ruby on rails by luisvilla*
I will participate in the Elsevier Article 2.0 Contest:
[Via Gobbledygook]

We have been talking a lot about Web 2.0 approaches for scientific papers. Now Elsevier announced an Article 2.0 Contest:

Demonstrate your best ideas for how scientific research articles should be presented on the web and compete to win great prizes!

The contest runs from September 1st until December 31st. Elsevier will provide 7,500 full text articles in XML format (through a REST API). The contestant who creates the best article presentation (creativity, value-add, ease of use and quality) will win prizes.

This is a very interesting contest, and I plan to participate. I do know enough about programming web pages that I can create something useful in four months. My development platform of choice is Ruby on Rails and Rails has great REST support. I will use the next two months before the contest starts to think about the features I want to implement.

I’m sure that other people are also considering participating in this contest or would like to make suggestions for features. Please contact me by commenting or via email or FriendFeed. A great opportunity to not only talk about Science 2.0, but actually do something about it.

While there are not any real rules up yet, this is intriguing. Reformatting a science paper for the Internet. All the information should be there to demonstrate how this new medium can change the way we read articles and disperse information.

We have already seen a little of this in the way journals published by Highwire Press can also contain links to more recent papers that cite the one being read. Take, for example, this paper by a friend of mine: ULBPs, human ligands of the NKG2D receptor, stimulate tumor immunity with enhancement by IL-15. Scroll to the bottom and there are not only links in the references, which look backwards from the paper, but also citations that look forward, to relevant papers published after this one.
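
A sketch of how simple the core of such a view could be, assuming a toy article markup (hypothetical, since Elsevier has not published the actual XML schema yet; a real entry would render full sections and wire each reference to both backward and forward citations):

```python
# Toy "article 2.0" rendering pass: article XML in, simple HTML out.
import xml.etree.ElementTree as ET

ARTICLE = """<article>
  <title>A placeholder title</title>
  <abstract>A placeholder abstract.</abstract>
  <ref id="r1">First cited paper (placeholder)</ref>
  <ref id="r2">Second cited paper (placeholder)</ref>
</article>"""

def render(xml_text):
    art = ET.fromstring(xml_text)
    html = ["<h1>" + art.findtext("title") + "</h1>",
            "<p>" + art.findtext("abstract") + "</p>",
            "<ol>"]
    for ref in art.iter("ref"):
        # This is where forward links ("cited by...") could be added,
        # alongside the usual backward-looking reference links.
        html.append("<li id='" + ref.get("id") + "'>" + ref.text + "</li>")
    html.append("</ol>")
    return "\n".join(html)

print(render(ARTICLE))
```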

So Elsevier has an interesting idea. Just a couple of hang-ups, as brought out in the comments to Martin’s post. Who owns the application afterwards? What sorts of rights do the creators have? This could be a case where Elsevier only has to pay $2500 but gets the equivalent of hundreds if not thousands of hours of development work done by a large group of people.

This works well for Open Source approaches, since the community ‘owns’ the final result. But in this case, it may very well be Elsevier that owns everything, making the $2500 a very small price to pay indeed.

This could, in fact, spur an Open Source approach to redefining how papers are presented on the Internet. PLoS presents its papers in downloadable XML format, where the same sort of process Elsevier is attempting could be done by a community for the entire community’s enrichment.

And since all of the PLoS papers are Open Access, instead of the limited number that Elsevier decides to choose, we could get a real view of how this medium could boost the transfer of information in scientific papers.

I wonder what an Open Source approach would look like and how it might differ from a commercial approach?

*I also wonder what the title of the book in the photo actually translates to in Japanese.
