Category Archives: General

Part one from HarvardBusiness.org – 3 of 7 lessons

leader by Hamed Saber
Obama’s Seven Lessons for Radical Innovators:
[Via HarvardBusiness.org]
Obama’s campaign organization was different in many ways from any that came before, mainly because of the very innovative way it was put together. It was actually quite entrepreneurial in its scope, and it serves as an interesting model of how new online tools, coupled with decentralized lines of communication, can leverage the social connections of employees and volunteers.

The idea of grassroots, bottom-up approaches has been used before. The GOP, in fact, was the first to really use direct mail in the early 90s to keep its followers informed. But these organizations still retained a hierarchical, top-down approach, with decisions having to move up and down the chain of command. Decision-making was not decentralized and pushed out to the edges as it was in Obama’s organization.

You can read about some of this as it trickles out into the media, but this will be a case study for future organizations that want to innovate, to find answers to complex questions. As Exley states:

The “New Organizers” have succeeded in building what many netroots-oriented campaigners have been dreaming about for a decade. Other recent attempts have failed because they were either so “top-down” and/or poorly-managed that they choked volunteer leadership and enthusiasm; or because they were so dogmatically fixated on pure peer-to-peer or “bottom-up” organizing that they rejected basic management, accountability and planning. The architects and builders of the Obama field campaign, on the other hand, have undogmatically mixed timeless traditions and discipline of good organizing with new technologies of decentralization and self-organization.

It is a model for a type of organization that we will see more of in the coming years. A grocery store might not have much use for this model, but it might work for a bio/pharmaceutical company. Innovation often comes when control is pushed to the edges.

Let’s look at what Haque discusses and the seven lessons. I know many of these are true because I have worked for an organization that had many of these traits. I know firsthand how innovative self-organizing companies can be, even when strapped for cash.

Barack Obama is one of the most radical management innovators in the world today. Obama’s team built something truly world-changing: a new kind of political organization for the 21st century. It differs from yesterday’s political organizations as much as Google and Threadless differ from yesterday’s corporations: all are a tiny handful of truly new, 21st century institutions in the world today.

Obama’s presidential bid succeeded, in other words, as our research at the Lab has discussed for the past several years, through the power of new DNA: new rules for new kinds of institutions.

Well, this may be overstating some things, but it must be said that Obama and his advisors put together an organization of several thousand employees and a budget of half a billion dollars that succeeded in ways that no Democrat has in 30 or 40 years. It has many of the hallmarks of an entrepreneurial business, not a political organization.

So let’s discuss the new DNA Obama brought to the table, by outlining seven rules for tomorrow’s radical innovators.

1. Have a self-organization design. What was really different about Obama’s organization? We’re used to thinking about organizations in 20th century terms: do we design them to be tall, or flat?

But tall and flat are concepts built for an industrial era. They force us to think – spatially and literally – in two dimensions: tall organizations command unresponsively, and flat organizations respond uncontrollably.

Obama’s organization blew past these orthodoxies: it was able to combine the virtues of both tall and flat organizations. How? By tapping the game-changing power of self-organization. Obama’s organization was less tall or flat than spherical – a tightly controlled core, surrounded by self-organizing cells of volunteers, donors, contributors, and other participants at the fuzzy edges. The result? Obama’s organization was able to reverse tremendous asymmetries in finance, marketing, and distribution – while McCain’s organization was left trapped by a stifling command-and-control paradigm.

Obama’s organization did not match any of the typical business hierarchies (i.e., silos of command) because it was designed around the shape of human social networks. As Exley writes, its motto was “Respect. Empower. Include.”

It used leaders and managers at each point who understood the needs of the organization without constant monitoring by higher-ups. The type of leadership Obama displays makes this possible for his followers (I’ll write about this later). Self-organization of this order can only occur with the right style of leadership.

2. Seek elasticity of resilience. Obama’s 21st century organization was built for a 21st century goal – not to maximize outputs, or minimize inputs, but to, as Gary Hamel has discussed, remain resilient to turbulence. What happened when McCain attacked Obama with negative ads in September? Such attacks would have depleted the coffers of a 20th century organization, who would have been forced to retaliate quickly and decisively in kind. Yet, Obama’s organization responded furiously in exactly the opposite way: with record-breaking fundraising. That’s resilience: reflexively bouncing back to an existential threat by growing, augmenting, or strengthening resources.

Responding quickly to change and crisis will be a constant requirement for many organizations in the coming years. Currently, a large number of organizations are brittle, with lines of leadership drawn too tightly in unproductive ways. We are watching many of them collapse each day.

A top-down organization often cannot respond quickly to threats: the time it takes information to travel along its length, from the bottom to the top and back again, precludes rapid response. Decision-making is concentrated in a few people who have limited time to deal with each decision, even if the right information makes its way up the chain of command to them.

In addition, in many brittle corporations, the methods used to control employees’ behavior are restrictive, with attention to process being more important than finding a creative way to succeed. Process is often rewarded while creativity is not. We see this in too many organizations (such as schools) where the exact opposite should be the usual course.

3. Minimize strategy. Obama’s campaign dispensed almost entirely with strategy in its most naïve sense: strategy as gamesmanship or positioning. They didn’t waste resources trying to dominate the news cycle, game the system, strong-arm the party, or out-triangulate competitors’ positions. Rather, Obama’s campaign took a scalpel to strategy – because they realized that strategy, too often, kills a deeply-lived sense of purpose, destroys credibility, and corrupts meaning.

This is a very subtle point. Obviously Obama and his advisors had a strategy, but it was not tied to many of the standard tactics we might have been used to. The very manner in which they were organized permitted them to carry out tactics that other groups had previously ignored because the cost to implement them was too great.

For example, by making it so easy for individuals to donate money, Obama was able to generate significant amounts of money without having to spend as much of his own time fundraising. Most politicians spend half their day raising money for their campaign. Freed from this constraint, Obama was able to spend more time focused on the campaign, on strategy and tactics rather than on fundraising. This was an enormous advantage in the primaries.

His online approaches also helped him identify and stay connected with individuals who were instrumental in the caucus states, something most politicians ignored because of the cost and time required. Obama was able to mobilize his small donors and others to push him over the top in these states.

4. Maximize purpose. Change the game? That’s 20th century thinking at its finest – and narrowest. The 21st century is about changing the world. What does “yes we can” really mean? Obama’s goal wasn’t simply to win an election, garner votes, or run a great campaign. It was larger and more urgent: to change the world.

Bigness of purpose is what separates 20th century and 21st century organizations: yesterday, we built huge corporations to do tiny, incremental things – tomorrow, we must build small organizations that can do tremendously massive things.

And to do that, you must strive to change the world radically for the better – and always believe that yes, you can. You must maximize, stretch, and utterly explode your sense of purpose.

Not every organization needs to follow this model. These sorts of transformational organizations, with their decentralized approaches, work best in areas where simple carrot-and-stick approaches are not needed. If people are going to change the world, they will be motivated without needing other sorts of reinforcement.

Small companies and entrepreneurial organizations may be best suited for this approach. The feeling of working on something big, of creating something that never existed before to fight problems that face the whole world, can inspire tremendous innovations. Obama was not the first to use this approach. He was just able to use new tools in an innovative fashion to create something novel, just as many successful entrepreneurs do.

I’ll discuss the last 3 lessons later.


Marketing for research

atomium by txd
Attention, science and money:
[Via business|bytes|genes|molecules]

Interesting observation by Kevin Kelly. He says

Wherever attention flows, money will follow

To some extent, that’s somewhat obvious. Peter Drucker, whom I admire a lot, said the following

Marketing and innovation produce results; all the rest are costs

Part of the problem with many corporations that commercialize science and technology is that they focus only on the marketing and not the innovation. I remember being told by a higher-up that marketing made money – “For every dollar we spend on marketing, we get $3 back.” But he told me that research cost money, money that was never directly recouped.

There are good metrics for marketing, not so much for innovation. Yet without the latter the former has nothing to do.

Attention can be driven by many mechanisms, marketing being the most effective one. The key is gaining sufficient mindshare, which is often accompanied by a flow of capital. In science, the money follows topics of research that have mindshare. Similarly, people fund companies in areas that generate mindshare, for whatever reason.

The question I often ask myself, both from my time as a marketer and as someone interested in science communication, is how we can bring more mindshare to some of our efforts and to science in general. What does money flow mean? Is it just research funding? Is it investment in such concepts as “bursty work”? Take something else Kelly writes

New things that don’t work or serve no purpose are quickly weeded out of the system. But the fact that something does work or is helpful is no longer sufficient for success.

Part of the problem is that many researchers feel the data should speak for itself. They fail to realize that gaining mindshare or convincing people requires social interaction. It is a very rare thing that sells itself with no further work.

We all realize that nothing in science is this way. That is, when we deal with each other, we realize that further experimentation is required to convince us of a new innovation. Few things just emerge fully formed from Zeus’ head. We know the process for marketing to our peers – publications, conferences and seminars.

But the idea of doing something similar to get innovations out to non-scientists is not on a researcher’s radar screen. We don’t have enough time for that. Perhaps just recognizing that there is a process people go through to adopt an innovation, and attempting to facilitate some of those steps, would go a long way.

I have written about the lack of marketing in science (stealing shamelessly from Larry Page). It’s critical that we do a better job of highlighting the power of our activities and learn some marketing tricks along the way. No, I am not talking about the in-your-face stuff that gives marketing a bad name, but about the kinds of activities that maintain attention and get people to notice. The good news is that many of us already do that, perhaps without even realizing it. It’s still niche awareness, but I have a feeling that we are close to crossing the hump and bringing some of our activities into the mainstream.

KK link via Michael Nielsen

Marketing is really just convincing people to make a change in their life, to adopt an innovation. It may have a bad odor in science (because ads make people want things they do not really need), but marketing is what everyone does who truly wants to compete for mindshare.

We just need to do it in a way that supports research while helping others through the process of adopting innovations.


Missing the point?

pendulum by sylvar

It has been about a month since Science published Electronic Publication and the Narrowing of Science and Scholarship by James Evans. I’ve waited some time to comment because the results were somewhat nonintuitive, leading to some deeper thinking.

The results seem to indicate that greater access to online journals results in fewer citations. The reasons for this are causing some discussion. Part of what I will maintain is that papers from 15 years ago were loaded with references for two reasons that are no longer relevant today: to demonstrate how hard the author had worked to find relevant information and to help the reader in their own searches for information.

Finding information today is too easy for there to be as great a need to include a multitude of similar references.

Many people feel the opposite, that the ease of finding references via such sites as PubMed would result in more papers being cited, not fewer. Bench Marks has this to say:

Evans brings up a few possibilities to explain his data. First, that the better search capabilities online have led to a streamlining of the research process, that authors of papers are better able to eliminate unrelated material, that searching online rather than browsing print “facilitates avoidance of older and less relevant literature.” The online environment better enables consensus, “If online researchers can more easily find prevailing opinion, they are more likely to follow it, leading to more citations referencing fewer articles.” The danger here, as Evans points out, is that if consensus is so easily reached and so heavily reinforced, “Findings and ideas that do not become consensus quickly will be forgotten quickly.” And that’s worrisome–we need the outliers, the iconoclasts, those willing to challenge dogma. There’s also a great wealth in the past literature that may end up being ignored, forcing researchers to repeat experiments already done, to reinvent the wheel out of ignorance of papers more than a few years old. I know from experience on the book publishing side of things that getting people to read the classic literature of a field is difficult at best. The keenest scientific minds that I know are all well-versed in the histories of their fields, going back well into the 19th century in some fields. But for most of us, it’s hard to find the time to dig that deeply, and reading a review of a review of a review is easier and more efficient in the moment. But it’s less efficient in the big picture, as not knowing what’s already been proposed and examined can mean years of redundant work.

But this is true of journals stored in library stacks, before online editions. It was such a pain to use Index Medicus or a review article (reading a review article has always been the fastest way to get up to speed; it has nothing to do with being online or not) and find the articles that were really needed. So people would include every damn one they found that was relevant. The time spent finding the reference had to have some payoff.

Also, one would just reuse citations for procedures, adding on to those already used in previous papers. The time spent tracking down those references would be paid back by continuing usage, particularly in the Introduction and Materials & Methods sections. Many times, researchers would have 4 or 5 different articles all saying similar things or using the same technique, just to provide evidence of how hard they had worked to find them (“I had to find these damned articles on PCR-generated mutagenesis and I am going to make sure I get maximum usage out of them.”)

There are other possible answers for the data that do not mean that Science and Scholarship are narrowing, at least not in a negative sense. A comment at LISNews leads to one possible reason – an artifact of how the publishing world has changed. The comment takes us to a commentary on Evans’ article. While this is behind the subscription wall, there is this relevant paragraph:

One possible explanation for the disparate results in older citations is that Evans’s findings reflect shorter publishing times. “Say I wrote a paper in 2007” that didn’t come out for a year, says Luis Amaral, a physicist working on complex systems at Northwestern University in Evanston, Illinois, whose findings clash with Evans’s. “This paper with a date of 2008 is citing papers from 2005, 2006.” But if the journal publishes the paper the same year it was submitted, 2007, its citations will appear more recent.

[As an aside, when did it become Evans’s rather than Evans’? I’d have gotten points off from my English teacher for that. Yet a premier journal like Science now shows that I can use it that way.]

The commentary also mentions work that appears to lead to different conclusions:

Oddly, “our studies show the opposite,” says Carol Tenopir, an information scientist at the University of Tennessee, Knoxville. She and her statistician colleague Donald King of the University of North Carolina, Chapel Hill, have surveyed thousands of scientists over the years for their scholarly reading habits. They found that scientists are reading older articles and reading more broadly–at least one article a year from 23 different journals, compared with 13 journals in the late 1970s. In legal research, too, “people are going further back,” says Dana Neacşu, head of public services at Columbia University’s Law School Library in New York City, who has studied the question.

So scientists are reading more widely and more deeply. They just do not add that reading to their reference lists. Why? Part of it might be human nature. Since it is so much easier to find relevant papers, having a long list no longer demonstrates how hard one worked to find them. Citing 8 articles at a time no longer means much at all.

That is, stating “PCR has been used to create mutations in a gene sequence[23-32]” no longer demonstrates the hard work put into gathering those references. It is so easy to find a reference that adding more than a few looks like overkill. That does not mean that the scientists are not reading all those other ones. They still appear to be, and are even reading more; they just may be including only the relevant ones in their citations.

Two others put the data into a different perspective. Bill Hooker at Open Reading Frame did more than most of us. He actually went exploring in the paper itself and added his own commentary. Let’s look at his response to examining older articles:

The first is that citing more and older references is somehow better — that bit about “anchor[ing] findings deeply into past and present scholarship”. I don’t buy it. Anyone who wants to read deeply into the past of a field can follow the citation trail back from more recent references, and there’s no point cluttering up every paper with every single reference back to Aristotle. As you go further back there are more errors, mistaken models, lack of information, technical difficulties overcome in later work, and so on — and that’s how it’s supposed to work. I’m not saying that it’s not worth reading way back in the archives, or that you don’t sometimes find overlooked ideas or observations there, but I am saying that it’s not something you want to spend most of your time doing.

It is much harder work to determine how relevant a random 10-year-old paper is than one published last month. In the vast majority of cases, particularly in a rapidly advancing field (say, neuroscience), papers that old will be chock full of errors based on inadequate knowledge. This diminishes their usefulness as references. In general, new papers will be better to use. I would be curious to see someone examine reference patterns in papers published 15 years ago to determine how many of the multitude of citations are actually relevant or even correct.

Finally, the main reason to include a lot of references is to help your readers find the needed information without having to do the painful work of digging it out themselves.

When I started in research, a good review article was extremely valuable. I could use it to dig out the articles I needed. I loved papers with lots of references, since they made my life easier. This benefit is no longer as necessary, because other approaches are now available for finding relevant papers much more rapidly than just a few years ago.

Bill discusses this, demonstrating that since it is so much easier to find relevant articles today, this need to help the reader in THEIR searches is greatly diminished.

OK, suppose you do show that — it’s only a bad thing if you assume that the authors who are citing fewer and more recent articles are somehow ignorant of the earlier work. They’re not: as I said, later work builds on earlier. Evans makes no attempt to demonstrate that there is a break in the citation trail — that these authors who are citing fewer and more recent articles are in any way missing something relevant. Rather, I’d say they’re simply citing what they need to get their point across, and leaving readers who want to cast a wider net to do that for themselves (which, of course, they can do much more rapidly and thoroughly now that they can do it online).

Finally, he really examines the data to see if they actually show what many other reports have encapsulated. What he finds is that the online access is not really equal. Much of it is still commercial and requires payment. He has this to say when examining the difference between commercial online content and Open Access (my emphasis):

What this suggests to me is that the driving force in Evans’ suggested “narrow[ing of] the range of findings and ideas built upon” is not online access per se but in fact commercial access, with its attendant question of who can afford to read what. Evans’ own data indicate that if the online access in question is free of charge, the apparent narrowing effect is significantly reduced or even reversed. Moreover, the commercially available corpus is and has always been much larger than the freely available body of knowledge (for instance, DOAJ currently lists around 3500 journals, approximately 10-15% of the total number of scholarly journals). This indicates that if all of the online access that went into Evans’ model had been free all along, the anti-narrowing effect of Open Access would be considerably amplified.

[See, he uses the possessive of Evans the way I was taught. I wish they would tell me when grammar rules change so I could keep up.]

It will take a lot more work to see if there really is a significant difference in the patterns between Open Access publications and commercial ones. But this give and take that Bill utilizes is exactly how Science progresses. Some data is presented, with a hypothesis. Others critique the hypothesis and do further experiments to determine which is correct. The conclusions from Evans’ paper are still too tentative, in my opinion, and Bill’s criticisms provide ample fodder for further examinations.

Finally, Deepak Singh at BBGM provides an interesting perspective. He gets into one of the main points that I think is rapidly changing much of how we do research. Finding information is so easy today that one can rapidly gather links. This means that even interested amateurs can find information they need, something that was almost impossible before the Web.

The authors fail to realize that for the majority of us, the non-specialists, the web is a treasure trove of knowledge that most either did not have access to before, or had to do too much work to get. Any knowledge that they have is better than what they would have had in the absence of all this information at our fingertips. Could the tools they have to become more efficient and deal with this information glut be improved? Of course, and so will our habits evolve as we learn to deal with information overload.

He further discusses the effects on himself and other researchers:

So what about those who make information their life. Creating it, parsing it, trying to glean additional information to it. As one of those, and having met and known many others, all I can say is that to say that the internet and all this information has made us shallower in our searching is completely off the mark. It’s easy enough to go from A –> B, but the fun part is going from A –> B –> C –> D or even A –> B –> C –> H, which is the fun part of online discovery. I would argue that in looking for citations we can now find citations of increased relevance, rather than rehashing ones that others do, and that’s only part of the story. We have the ability to discover links through our online networks. It’s up to the user to bring some diversity into those networks, and I would wager most of us do that.

So, even if there is something ‘bad’ about scientists having a shallower set of citations in their publications, this is outweighed by the huge positive of easy access for non-scientists. They can now find information that used to be so hard to find that only experts ever read it. The citation list may be shorter, but the diversity of the readers could be substantially enlarged.

Finally, Philip Davis at The Scholarly Kitchen may provide the best perspective. He also demonstrates how the Web can obliterate previous routes of disseminating information. After all the to-do about not going far enough back into the past for references, Philip provides not only a link (let’s call it a citation) to a 1965 paper by Derek Price but also a quote:

I am tempted to conclude that a very large fraction of the alleged 35,000 journals now current must be reckoned as merely a distant background noise, and as far from central or strategic in any of the knitted strips from which the cloth of science is woven.

So even forty years ago it was recognized that most publications were just background noise. But what Philip does next is very subtle, since he does not mention it. Follow his link to Price’s paper (which is available on the Web, entitled Networks of Scientific Papers). You can see the references Price had in his paper – a total of 11. But you can also see which papers have used Price’s paper as a reference, and quite a few recent ones have cited this forty-year-old paper. Seems like some people maintain quite a bit of depth in their citations!

And now, thanks to Philip, I will read an interesting paper I would never have read before. So perhaps there will be new avenues for finding relevant papers that do not rely on following a reference list back in time. The Web provides new routes that short-circuit this, routes that are not seen if people only follow databases of article references.

In conclusion, the apparent shallowness may only be an artifact of publishing changes; it may reflect a change in the needs of the authors and their readers; it may not correctly factor in differences in online publishing methods; it could be irrelevant; and/or it could be flat-out wrong. But it is certainly an important work because it will drive further investigations to tease out just what is going on.

It already has, judging by the online conversations about it. And to think that these conversations would not have been accessible to many just 5 years ago. The openness displayed here is another of the tremendous advances of online publication.


Cuil is middling

Cuil Misses Me:
[Via chrisbrogan.com]

cuil search engine
I just tried out Cuil, which is supposed to be an amazing and better search engine, and what not (that’s what they told Mike Arrington). But it didn’t work for me.

I searched on “Chris Brogan” and found all kinds of relevant info, including random pictures not related to the text results beside the search, and none of them my main URL.

I searched on “chrisbrogan.com” and it couldn’t find my URL.

I searched on “chrisbrogan” and it found a bunch of social networks where I’ve used that username.

Call me egotistical, but if you can’t find yourself in a search engine after a decade of littering the web with your presence, I’m thinking it’s not much of a search engine.

I had a similar experience. Not only were the pages that came up when I used my name out of date but many were also not accessible anymore. Why does it provide links to pages that are no longer on the Web? The first relevant link was not very easy to find.

In addition, when I tried several terms that I knew went to very good Wikipedia pages, the top link was to the discussion page on Wikipedia, not the relevant page itself. Search is all about finding the relevant, useful page fastest. Cuil, in this outing, fails at that for me.

Cuil still has a long way to go to overcome Google. But since its launch was so big, I am going to take it really slow, with a lot of convincing from others, before I try it again.

Others have discussed how it was able to get hyped so much on day one or how it just does not do what they need a search engine to do.

It might have been better to have a smaller launch and work out the bugs first.

Good Work


About five years ago, I read a very interesting book called Good Work. It is very dense, with a lot of information, but it is very clear in its premise: we are happiest when the work we do aligns with our personal ethics.

In this book, the authors examined two groups of people who chose a vocation because they wanted to help others and to change the world: geneticists and journalists.

Many people entered each field for noble reasons. But only the scientists were happy with their choice, while many of the reporters were not. The book indicated that this was because the needs of the industry they chose did not match their personal viewpoints.

The geneticists were pretty happy with their jobs. Journalists were not.

The book indicated that newspapers exist to sell advertising. Advertising is what pays the bills. But this is often at odds with the reasons many journalists enter that profession.

Many reporters who really wanted to provide vital information were often held back while those who helped sell newspapers were rewarded.

Well, it looks like Web 2.0 approaches are providing an outlet for journalists to earn a living while staying close to their own personal ethics – by going directly to the community. Spot.us is devoted to providing journalists an alternate way to get paid.

And here is a recent post describing some of the things a journalist would have to do to get community funding for reporting.

Ten Tips For Journalists to Fundraise Money:
[Via Spot.Us – Community Funded Reporting]

I’ll admit it, sometimes when nobody is looking I’ll watch a late night infomercial. These people are fascinating to watch. They are master salespeople.

I realize the idea of a journalist fundraising money for their work is new. Normally it’s a duty we’d hand off to the advertising/marketing people and stick to creating content. But “the times they are-a changing” and so is the job description. There is a reason why freelance journalists have to write a “pitch.” They are selling their services. Normally we sell to high-end repeat customers (editors) because they have a freelance budget. But Spot.Us believes that journalists should pitch the public and that if members of the public band together they too can have a freelance budget.

Rather than treat journalists fundraising as taboo, we should have a healthy discussion about the right and wrong approach. I don’t claim to know the answers, so your comments are valued.

This is something that will be worked out as they go along. I don’t think it is going to replace the mainstream media. But it is a novel model and will rely heavily on Web 2.0 approaches for success.

What are the best practices? Are they different for text and video? How can journalists best explain the value of their services? I don’t claim to know the answers to these questions (so your comments are highly valued), but I do think these questions need to be tackled. If journalists are going to become more independent, they need to learn how to re-master the art of the pitch.

The list below is my own and I think will evolve over time.

10 Things to Keep in Mind To Get Community Funded Reporting
[More]

The list is actually a nice one for anyone working to earn a living by using Web 2.0 conversations. Creating a pitch. Finding the community.

The one thing the web cannot duplicate is a human being. Going directly to their audience is a way for journalists to earn a living that more closely aligns with their ethics, just as recording artists and filmmakers are doing.


Remembering is not enough

teacher by foundphotoslj
Why is genetics so difficult for students to learn?:
[Via Gobbledygook]

This Sunday morning at the International Congress of Genetics, Tony Griffiths gave an interesting presentation with the above title. He identified 12 possible reasons why students have problems learning genetics. His main argument: students should learn concepts and principles and apply them creatively in novel situations (the research mode). Instead, too many details are often crammed into seminars and textbooks. In other words, students often stay at the lowest level of Bloom’s taxonomy, the remembering of knowledge. The highest level, the creation of new knowledge, is seldom reached, although these skills are of course critical for a successful researcher.

Andrew Moore from EMBO talked about the teaching of genetics in the classroom. He was concerned that a survey found that molecular evolution (or molecular phylogeny) was taught in not more than 30% of European classrooms. He gave some examples of how principles of genetics can be integrated into high school teaching.

Wolfgang Nellen explained his successful Science Bridge project of teaching genetics in the classroom, using biology students as teachers. Interestingly, they have not only taught high school students, but also journalists and – priests (German language link here). Politicians were the only group of people that weren’t interested in his offer of a basic science course.

Teaching is a very specific mode of transferring information, one that has its own paths. It is an attempt to diffuse a lot of information throughout an ad hoc community.

But it is often decoupled from any social networking, usually having just an authority figure disperse data, with little in the way of conversations. There is little analysis and even less synthesis, just Remembering what is required for the next test.

Bloom’s taxonomy is a nice measure of an individual’s progress through learning but it is orthogonal to the learning a community undergoes. Most instruction today is geared towards making the individual attain the highest part of the pyramid.

How does this model change in a world where social networking skills may be more important? What happens to Remembering when Google exists? When information can be so easily retrieved, grading for Remembering seems foolish.

The methods we use to teach at most centers of higher education are, at heart, based on models first developed over a century ago. It may be that they will have to be greatly altered before some of the real potential of online social networks can be realized.


Adobe helps

adobe by annais
To Serif or Not To Serif? Regarding Online Readability:
[Via The Acrobat.com Blog]

There are myriad different opinions on what the best conditions are for reading text on a screen. Debates rage about whether or not to use serif fonts and how long a line of text should be. A surprisingly sensitive issue, and possibly without a clear resolution.

Here we’ve tried to delineate a few of the more widely accepted tips on how to optimize readability. Although they can be forsaken in the name of personal style, they’re generally considered the most conducive to easy reading. Here are a few key points plucked from various takes on the subject:

Regardless of medium, high contrast between type color and page color always contributes to optimal reading conditions. Not surprisingly, readers show a strong preference for black text on a white background (though it’s not strictly necessary; if you simply loathe the combination of white and black, any reasonably contrasting color duo will do). When in doubt, check your color scheme on Snook’s Color Contrast Check.
[More]

The Web is a different medium from paper or slides. While some things, such as contrast, remain the same, presenting text on the Web has different needs.

Its lower resolution, which allows pages to download faster, makes text harder to read, so size and font choice are important.

Remember that the user often has the ability to override the font choices that are made on the page, either by increasing the size or changing the font. So the choice of font is not as important as its presentation and of course its content.

Line spacing is another important aspect to understand. People read online from a greater distance than they read a book. Poor text choices in size, color or contrast can make it very difficult for the content of the page to be assimilated.
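The “high contrast” advice quoted above can also be made concrete. Checkers like the Snook tool mentioned in the excerpt are built around the WCAG contrast-ratio formula, which compares the relative luminance of the text and background colors. Here is a minimal sketch of that calculation in Python – my own illustration, not something from the Acrobat.com post:

```python
# Sketch of the WCAG 2.0 contrast-ratio calculation (illustrative, not from the original post).

def relative_luminance(hex_color):
    """Relative luminance of an sRGB color given as '#rrggbb'."""
    hex_color = hex_color.lstrip("#")
    channels = [int(hex_color[i:i + 2], 16) / 255.0 for i in (0, 2, 4)]
    # Linearize each sRGB channel before applying the luminance weights.
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
              for c in channels]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    """WCAG contrast ratio: (lighter luminance + 0.05) / (darker luminance + 0.05)."""
    l1, l2 = relative_luminance(foreground), relative_luminance(background)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white page gives the maximum possible ratio of 21:1.
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0
# Mid-grey text on white falls below the usual 4.5:1 guideline for body text.
print(round(contrast_ratio("#999999", "#ffffff"), 1))  # about 2.8
```

The commonly cited thresholds are a ratio of at least 4.5:1 for body text and 3:1 for large headings, which is one way to judge whether a “reasonably contrasting color duo” really is reasonable.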


Norms are changing

columns by TankGirlJones
Column on NIH and Harvard policies:
[Via Open Access News]
Karla Hahn, Two new policies widen the path to balanced copyright management: Developments on author rights, C&RL News, July/August 2008.

A light bulb is going off that is casting the issue of author rights management into new relief. On January 11, 2008, the National Institutes of Health (NIH) announced a revision of its Public Access Policy. Effective April 7, 2008, the agency requires investigators to deposit their articles stemming from NIH funding in the NIH online archive, PubMed Central. Librarians have been looking forward to such an announcement, especially since studies found that the voluntary version of the policy was achieving deposit rates of affected articles on the order of a few percentage points.

Since we as taxpayers pay for this research, it should not be bound up behind access control. Now, because of the NIH’s revision, it won’t.

With the article deposit requirement, researchers can no longer simply sign publication agreements without careful review and, in some cases, modification of the publisher’s proposed terms. While this may be perceived as a minor annoyance, it calls attention to the value of scholarly publications and the necessity to consider carefully whether an appropriate balance between author and publisher rights and needs is on offer.

The norm in science has been to quickly sign over copyright so that the paper could be published. This sometimes resulted in the absurd situation that the author of a paper could not use his own data in slides, since he no more owned the copyright to it than any other random scientist. Now there is a little leverage for the author to retain some aspects of copyright.

As institutions, as grantees, become responsible for ensuring that funded authors retain the rights they need to meet the NIH public Access Policy requirements, there is a new incentive for campus leaders to reconsider institutional policies and local practices relating to faculty copyrights as assets. …
The February 2008 vote by the Harvard Faculty of Arts and Sciences to grant Harvard a limited license to make certain uses of their journal articles is another important indicator of an accelerating shift in attitudes about author rights management, and also reveals the value of taking an institutional approach to the issue. …

Academic pressure is coming to bear on these policies and it will be interesting to see how it all plays out. In most instances, providing open access will be the better route but now the individual institutions will be responsible for providing the necessary infrastructure.

Perhaps something like Highwire Press will appear. Here, instead of each scientific association having to develop its own infrastructure, Highwire does it for many of them, greatly simplifying publishing for all. Highwire now has almost 2 million articles published with free access. Perhaps something similar for institutional storage would be helpful.

Norms are always more difficult to change than technologies. We are now witnessing a key shift in norms for sharing scholarly work that promises a giant step forward in leveraging the potential of network technologies and digital scholarship to advance research, teaching, policy development, professional practice, and technology transfer. …

What scientists expect when they publish a paper is changing rapidly. What once took 6-9 months from submission to publication can now happen in weeks. Where once all rights had to be assigned to the publisher, now the authors can retain some for their own use.

What will the norms be like in five years?
