Category Archives: Web 2.0

Comments are a conversation

conversation by cliff1066

Dan Schawbel’s 5 Free Tools For Reputation Management introduced me to a new listening tool, BackType. It solves the problem of monitoring blog comments where people specifically mention you. People can make comments about you on other blogs, and if you only track links from blog posts, you won’t see them. BackType lets you find, follow and share comments from across the web. I gave it a whirl and it turned up some interesting results.

You can also track other bloggers and see where they commented — I might do this only to study how the masters do it. An old trick is to watch people who do social media really well and learn by observation. It’s interesting to observe Chris Brogan’s commenting activity.

Update: Based on a comment to this post, I’m adding some context to comment trackers.

These services let you track conversations that are important to your organization and issue. They also allow content creators to aggregate their online activity and expertise from across the social Web into one centralized, portable profile.

Questions To Ask Before You Dive In:

What do you need to track?
How will you respond to negative comments?
Will you respond to all comments?
How will you prioritize?
Which tool is right for you?

Why Commenting and Comment Tracking Is Important

Commenting is the lifeblood of blogging and key to building a community.
They’re a way to get more minds into the story.
They’re a way to annotate someone’s thoughts so the ideas can take on another dimension.
They’re a way to establish authority in your content niche.

[More]

Blogs are useful by themselves to the individual blogger. But when comments are added, others can become part of the conversation and leave connections back to their own networks. Comments enhance the power of blogs.

However, keeping track of comments across a series of blogs can be difficult. Comments are not usually included in a site’s RSS newsfeed. The tools described here help provide a solution, making it easier to follow conversations that are important to you, even if they’re not directly on a blog.
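
For the do-it-yourself inclined, many blog platforms do expose a separate comment feed, even though comments are missing from the main post feed. Below is a minimal sketch of the kind of listening a tool like BackType automates, assuming WordPress-style comment-feed URLs and the third-party Python feedparser library; the blog addresses and the search phrase are placeholders.

    import feedparser  # third-party: pip install feedparser

    # Placeholder comment-feed URLs; WordPress blogs usually expose one
    # at /comments/feed/, but the exact path is platform-dependent.
    WATCHED_BLOGS = [
        "http://example-blog-one.com/comments/feed/",
        "http://example-blog-two.com/comments/feed/",
    ]
    MENTION = "science 2.0"  # the name or phrase you want to listen for

    def find_mentions(feed_urls, phrase):
        """Return (blog, comment title, link) for comments mentioning phrase."""
        hits = []
        for url in feed_urls:
            feed = feedparser.parse(url)
            for entry in feed.entries:
                text = (entry.get("title", "") + " " +
                        entry.get("summary", "")).lower()
                if phrase.lower() in text:
                    hits.append((feed.feed.get("title", url),
                                 entry.get("title", ""),
                                 entry.get("link", "")))
        return hits

    for blog, title, link in find_mentions(WATCHED_BLOGS, MENTION):
        print(blog + ": " + title + "\n  " + link)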

Enhancing conversations is what Web 2.0 is all about.


A wakeup lawsuit

tracks by *Micky
Zotero facing a lawsuit:
[Via Bench Marks]

I’ve written about Zotero before; it’s an intriguing tool, essentially a Firefox plug-in for managing your reference list and other pieces of information. It’s a bit of a hybrid between online management tools like Connotea and tools like Papers, which store your library on your own computer.

The bad news is that Thomson Reuters, the manufacturers of EndNote, are suing George Mason University and the Commonwealth of Virginia because a new version of Zotero lets you take your EndNote reference lists and convert them for use in Zotero. Yes, this is the same Thomson of Thomson ISI, secret gatekeepers of journal impact factors. They really seem to be going out of their way to lose what little goodwill they have left with the scientific community. It will be interesting to see if this reverse engineering for interoperability holds up in court as something that should be prevented.

This is sadly typical. I loved EndNote back in the 90s because it was a great Mac product, much better for my needs than its competition, Reference Manager, which was much more of a Windows product. Niles Software really listened to what people wanted and added some very useful features, such as linking the library to a Word document so you could put citations directly into Word.

I convinced others at my company to buy it. I had searches for a wide variety of topics. The purchase of Niles Software by ISI (now part of Thomson) started a period of fitful Mac updates and costly upgrades. I have since moved to other applications (most recently Sente) that did what I wanted for a more reasonable price.

This lawsuit seems like a losing gambit to me, since any user can convert their library to EndNote XML that any other application can read. All it will do is drive users away from their software as customers find new uses for their data.
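
As a rough illustration of how little code stands between a user and their own data once it is in XML, here is a sketch that pulls titles, authors and years out of an EndNote XML export and emits crude BibTeX. The element names follow the layout commonly seen in EndNote exports, but treat them as assumptions that may need adjusting for your version.

    import xml.etree.ElementTree as ET

    def endnote_xml_to_bibtex(path):
        """Yield crude BibTeX entries from an EndNote XML export file."""
        root = ET.parse(path).getroot()
        for i, record in enumerate(root.iter("record")):
            def text(xpath):
                # EndNote wraps field text in nested <style> tags,
                # so gather all text beneath the node.
                node = record.find(xpath)
                return "".join(node.itertext()).strip() if node is not None else ""
            title = text("titles/title")
            year = text("dates/year")
            authors = ["".join(a.itertext()).strip()
                       for a in record.findall("contributors/authors/author")]
            yield ("@article{ref%d,\n" % i +
                   "  author = {%s},\n" % " and ".join(authors) +
                   "  title  = {%s},\n" % title +
                   "  year   = {%s}\n}" % year)

    # Usage (the file name is a placeholder):
    # for entry in endnote_xml_to_bibtex("my_library.xml"):
    #     print(entry)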

The database really belongs to the user, not to Thomson, but they try to obscure that by using a proprietary format. This hurts the end user. Say I have a 10-year-old EndNote library I forgot to convert, with 8,000 entries, and an old version of EndNote that no longer works in OS X. How am I supposed to move it to what I currently use without having to purchase EndNote simply for this one conversion? My favorite example of this horrendous process is the Mac cookbook program Mangia!

In the early 90s, this was the best program of its type, bar none. It let you build a huge recipe library that could be easily displayed and searched, and it could even print a grocery list. Many of us loved it, but it does not work in OS X.

Mangia is no longer produced by anyone. The database it created was proprietary and undocumented, and the program had no export feature. Now it no longer runs on any computer, so every user has a database that they created that is unusable. There are workarounds to try to get at the data, but they are not satisfactory. They also require the user to be able to run Mangia, which is really impossible for virtually everyone using a Mac today (I think my mother kept an old Mac around just so she could still use this one program. She has a huge library of recipes).

So all I have on my computer is a dead database of Mangia recipes that can never be used again. All that work over years to create a database, and it is useless. This is why people need to be careful when they choose a database application.

Companies that respond to enduser innovation by suing, rather than innovating, are not ones that I see being very successful in the long run. There are other programs that can do the same thing. They are often created by companies that are more responsive and user friendly than larger companies. Suing users just drives people to more open formats.

More from Bench Marks:

More importantly, it’s, yet again, a lesson in tying yourself to one locked-down proprietary format for your data and your work tools. If you’ve put a huge amount of time and effort into maintaining your EndNote list and a better tool comes along and becomes the standard, all that work may go to waste and you’ll have to start over again. A similar lesson was learned last week by anyone who purchased music downloads from WalMart. Richard Stallman recently gave a warning along the same lines about the much-hyped concept of “cloud computing”.
As you experiment with new online tools for your research, heed these lessons well. Demand tools that support open standards and open formats, tools where if you put in an effort (and most of these tools demand a lot of effort), you can get that work out again so you don’t have to repeat it for the next tool you try. Further discussion here and here.

This gets at the same topic. Who owns the data? There are some very important and useful aspects to having data in the cloud. It makes it very easy for people to access their data from everywhere. Small groups can have a slew of Web 2.0 applications up and running for their group with little cost for maintenance or upkeep. This has some very real benefits.

But it must be balanced against the possibility that you no longer control the data. Your work is on servers belonging to someone else. They can change ownership and all of a sudden the cloud is not so free. To me, cloud computing is great for things that need rapid prototyping, easy access and are, at heart, ephemeral.

There are many types of data. Some of it is short term. It used to only be found on yellow sheets of paper or perhaps the multiple drafts of a paper. These data fit quite well in the cloud. I have an email address in the cloud that I only use for online purchases. Anything going there is a result of those purchases and does not clog up my real email.

But it is foolhardy for any organization to put the guts of its data anywhere it does not have absolute control over. These are things whose loss would have severe ramifications for the business.

So, echoing David, stay away from anything that ties you into a specific, closed format. It can come back to bite you big time.


A generational war

photo by kentbye
Social Media vs. Knowledge Management: A Generational War:
[Via Enterprise 2.0 Blog]

You’d think Knowledge Management (KM), that venerable IT-based social engineering discipline which came up with evocative phrases like “community of practice,” “expertise locater,” and “knowledge capture,” would be in the vanguard of the 2.0 revolution. You’d be wrong. Inside organizations and at industry fora today, every other conversation around social media (SM) and Enterprise 2.0 seems to turn into a thinly-veiled skirmish within an industry-wide KM-SM shadow war. I suppose I must be a little dense, because it took not one, not two, but three separate incidents before I realized there was a war on. Here’s what’s going on: KM and SM look very similar on the surface, but are actually radically different at multiple levels, both cultural and technical, and are locked in an undeclared cultural war for the soul of Enterprise 2.0. And the most hilarious part is that most of the combatants don’t even realize they are in a war. They think they are loosely-aligned and working towards the same ends, with some minor differences of emphasis. So let me tell you about this war and how it is shaping up. Hint: I have credible neutral “war correspondent” status because I was born in 1974.

[More]

A very clear post describing the conflict between Boomer and Millennial thinking when it comes to dealing with large amounts of data. Knowledge management (Boomer) is a top-down, put-the-data-in-the-proper-bin sort of approach. There are names for each bin, and everything needs to fit in the correct one.

Social media (Millennial) uses human social networks in a bottom-up approach that allows the data to determine where it should go. Any bin that it should go into is an emergent property of the network created by the community.
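
A toy sketch makes the contrast concrete. The documents and tags below are invented; the point is only that the KM “bin” is assigned once by a curator, while the SM “bin” is computed from whatever the community happened to tag.

    from collections import Counter

    # Invented documents, each carrying tags applied freely by readers.
    documents = {
        "doc1": ["genomics", "open-data", "genomics"],
        "doc2": ["open-data", "policy", "open-data"],
    }

    # Knowledge management: one authoritative, predefined bin per document.
    km_bins = {"doc1": "Research", "doc2": "Research"}

    # Social media: the dominant tag is an emergent property of the crowd.
    sm_bins = {doc: Counter(tags).most_common(1)[0][0]
               for doc, tags in documents.items()}

    print(km_bins)  # {'doc1': 'Research', 'doc2': 'Research'}
    print(sm_bins)  # {'doc1': 'genomics', 'doc2': 'open-data'}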

Read the whole post for a nice dissection of what is happening in this war. Just remember that age is not as important as attitude. There are Boomers who get social media and Millennials who do not.

I think it is that one personality wants things to be black and white (the data is in a database on THIS computer) while the other deals well with shades of gray (the data is in the cloud and not really anyplace).

I did my post-doc in a chemistry lab, as the only biologist, and I saw something very valuable. Chemistry is very process-driven. The purpose of a process is to reproduce success. If a process, say a particular chemical synthesis, did not work, as in the yield was 10% instead of 90%, it was not the fault of the process. The reagents were bad or the investigator was incompetent, but the process was still valid.

So chemistry selected for people who were very process-driven, wanted things very tightly controlled and well defined.

Biology has a very different regard for process. The same process (say the cloning of a gene) can be done on two different days and get different results (10 colonies of cells one day; 500 the next). Biology is really too complex to be able to control everything. A lot of things can go wrong and it can be really easy to fool oneself with results.

So biology, particularly at the cutting edge, selects for people who can filter out extraneous bits of data, can be comfortable with conditional results and with the general anarchy that can occur. Every molecular biologist has experienced the dreaded ‘everything stops working, so I have to remake every buffer, order new reagents and spend a month trying to figure out what happened, knowing that things will start working again for no real reason.’

Chemists in my post-doc lab hated biology because of the large variance in results compared to chemistry. Biologists are often happy to be within an order of magnitude of expected results.

One way of thinking has to know whether Schrödinger’s cat is dead or alive, while the other is comfortable knowing it is simultaneously dead and alive.

Biology needs the Millennial approach because it is creating data at too fast a pace to put it all into bins. Social networks can help tremendously with the filters needed to find knowledge in the huge amount of data.


More information

pills by blmurch
Magical Thinking:
[Via FasterCures]
Margaret Anderson, COO, FasterCures

I appreciated the message of Carol Diamond and Clay Shirky’s recent piece in the August 2008 Health Affairs titled “Health Information Technology: A Few Years of Magical Thinking?” In it they say that proponents of health IT must resist “magical thinking,” such as the notion that isolated work on technology will transform our broken system. It’s interesting to think about systems change at the front end, and how easy it is to get stars in our eyes about how things like health IT or personalized medicine will transform the world as we know it, and how all of our problems will then magically go away.

The article discusses how it might be easier to implement IT in health if the whole system is redone, rather than bolting on IT. IT will not fix the problems without key changes in how medicine is practiced.

A press release discusses some of their points.

Diamond and Shirky propose an alternative route to using health IT to help transform the U.S. health system. “This alternative approach would focus on a minimal set of standards at first,” they say, and would make utility for the user and improved health outcomes, rather than vendor agreement, the key criteria.

Diamond and Shirky’s alternative approach “would mean working simultaneously on removing other obstacles while concentrating on those standards necessary for sharing the information, however formatted in the short term, to flow between willing and authorized participants. Finally, it would require clear policy statements that will guide the design of technology.”

Sounds like a bottom-up approach with the end user driving the technology, rather than health vendors. More from Margaret Anderson:

Cell phones, email, and the Internet have certainly transformed things in ways we couldn’t have imagined, but they’ve introduced problems we couldn’t have imagined. Technologies such as FAX machines have been leapfrogged over. Problems such as the overabundance of information, and the speed of information flow are here to stay it seems. In the case of health IT, FasterCures sees it as a vital bridge to the future of more rapid information collection, characterization, and analysis which could speed our time to cures.

But there needs to be careful attention to the fact that too much information, particularly in the health field, can make it much harder to make accurate decisions. Eventually we will get the complexity of the system under control, but in the meantime there will be some problems. FasterCures is examining them.

We are working on a white paper for the U.S. Department of Health and Human Services about educating and building awareness among consumers about personalized healthcare. This is another area where we must resist “magical thinking” and get down to brass tacks. Too often, the discussion about personalized medicine has been at a 30,000 foot level. For this paper, we’ve talked to many patient advocacy and disease research groups and everyone holds their breath about the potential power that these technologies may hold for their disease areas. They all want more targeted therapies with fewer side effects, which is ultimately the promise of personalized medicine. But they also recognize its complexities. It needs to take into account the world of co-morbidities we all live in; even if baby boomers are out running marathons and eating their greens and blueberries, the reality is that many of us are living with many conditions and diseases, not just one. It will probably raise costs before it can lower them. It’s unlikely many diseases will yield to the relatively easy HER2-Herceptin gene-to-drug relationship. Patients are likely to get much more information about their genetic makeup than they can act on in the near-term.

Health care is still too complex in most cases. The real magical thinking comes in the form of so many fraudulent ‘cures’ that have plagued mankind for thousands of years. Perhaps as we really get IT involved in health, we can begin to gain a fuller understanding of what causes disease and how to attempt a cure.


Marketing for research

atomium by txd
Attention, science and money:
[Via business|bytes|genes|molecules]

Interesting observation by Kevin Kelly. He says

Wherever attention flows, money will follow

To some extent, that’s somewhat obvious. Peter Drucker, whom I admire a lot, said the following

Marketing and innovation produce results; all the rest are costs

Part of the problem with many corporations that commercialize science and technology is that they only focus on the marketing and not the innovation. I remember being told by a higher-up that marketing made money – for every dollar we spend on marketing, we get $3 back. But he told me that research cost money, money that was never directly recouped.

There are good metrics for marketing, not so much for innovation. Yet without the latter the former has nothing to do.

Attention can be driven by many mechanisms, marketing being the most effective one. The key is gaining sufficient mindshare, which is often accompanied by a flow of capital. In science, the money follows topics of research that have mindshare. Similarly, people fund companies in areas that generate mindshare for whatever reason.

The question I often ask myself, both from my time as a marketer and as someone interested in science communication, is how can we bring more mindshare to some of our efforts and science in general. What does money flow mean? Is it just research funding? Is it investment in such concepts as “bursty work”? Take something else Kelly writes

New things that don’t work or serve no purpose are quickly weeded out of the system. But the fact that something does work or is helpful is no longer sufficient for success.

Part of the problem is that many researchers feel the data should speak for itself. They fail to realize that gaining mindshare or convincing people requires social interactions. It is a very rare thing that requires no further work in order to sell itself.

We all realize that nothing in science is this way. That is, when we deal with each other, we realize that further experimentation is required to convince us of a new innovation. Few things just emerge from Zeus’ head. We know the process to market to our peers – publications, conferences and seminars.

But the idea of doing something similar to get innovations out to non-scientists is not on a researcher’s radar screen; we don’t have enough time for that. Perhaps just recognizing that there is a process people go through to adopt an innovation, and attempting to facilitate some of those steps, would go a long way.

I have written about the lack of marketing in science (stealing shamelessly from Larry Page). It’s critical that we do a better job of highlighting the power of our activities and learn some marketing tricks along the way. No, I am not talking about the in-your-face stuff that gives marketing a bad name, but about the kinds of activities that maintain that attention and get people to notice. The good news: many of us already do that, perhaps without even realizing it. It’s still niche awareness, but I have a feeling that we are close to actually crossing the hump and bringing some of our activities into the mainstream.

KK link via Michael Nielsen

Marketing is really just convincing people to make a change in their life, to adopt an innovation. It may have a bad odor in science (because ads make people want things that they do not really need) but marketing is really what everyone does who truly wants to compete for mindshare.

We just need to do it in a way that supports research while helping others through the process of adopting innovations.


Browsing for researchers

I use an RSS reader and read feeds because it is part of my writing process. Lately, my RSS reading habits have changed. I haven’t given up on it completely, but my process has changed. My feeds are organized into folders and the folders ordered by priority. Like a farmer tending his crops, I’d scan through each folder, each feed, bookmarking and annotating what caught my eye, and looking for patterns and connections. This cycle of scanning, capturing, analyzing patterns, and writing a blog post is part of my routine.

It still is, but I now use other methods for scanning. It’s more like hanging out in a village square or a pub — conversations, news, and resources come to me. I’m finding new links and posts through Twitter, comments on my blog posts, or people who have linked to me.

So, it’s like I have a left-brain, orderly, linear way to scan and a right-brain, wildly creative way to scan.

RSS and newsreaders present an incredible set of tools to filter through a lot of information very rapidly. It is like you are directly hooked into a diverse group of communities in real time. You can see how different items spread through a linked community and drive communication.

And the orderly and free-form approaches to connecting, taken together, feed one’s own creativity and innovation by engaging our tacit knowledge, producing opportunities to alert other communities.

I like how Chris Brogan describes his reading goals.

1. Reading what friends write.
2. Reading about the “new marketing” industry and the tech industry (fishbowl).
3. Reading what people recommend.
4. Reading off the wall stuff that inspires new thoughts (outside the bowl).

This sounds very much like an early adopter, who has connections to other media outlets but uses trusted insiders to decide which things to use.

Michele Martin wrote a post summarizing a paper titled How Knowledge Workers Use the Web and pulls out some of the classifications referenced in the paper. My RSS reading is mostly information gathering or browsing.

Finding–Looking for something specific, such as an answer to a specific question.
Information gathering–Less specific than finding, this is research that’s focused on a particular goal that’s broader-based than simply getting a specific piece of information.
Browsing–Visiting personal or professional sites with no specific goal in mind other than to “stay up-to-date” or be entertained.
Transacting–Using the web to execute a transaction, such as banking or shopping.
Communicating–Participating in chat rooms or forums (remember–this was done in 2002, prior to Facebook and the explosive growth of blogs, etc.)
Housekeeping–Using the web to check or maintain the accuracy and functionality of web-based resources, such as looking for dead links, cleaning up outdated information, etc.

One of the major aspects of scientific research and innovation comes from browsing, from reading about something not directly related to a specific problem but which may provide valuable insight into it. This used to be relatively easy: sitting in the library once a week, going through the table of contents of every journal that came in that week, and carefully writing down the bibliographic information on note cards so they could be examined later at leisure.

Serendipity could raise its head. But the Internet made searching so much easier. So too many scientists spend their time on the first step, finding. This is, of course, very important but you will really only find what you are looking for. Serendipity is reduced.

A personal example. Many years ago, I was working on inducing protein production in E. coli from specific gene segments. We typically did this by shifting the temperature, which resulted in the inactivation of a repressor and the expression of the gene.

However, for large-scale production (think thousands of liters) this was not a tenable solution. It was practically impossible to raise the temperature of the vessel quickly enough.

I happened to be reading the Table of Contents of the Journal of Bacteriology and saw a paper which discussed some of the biological effects on the bacteria when the pH of the media was shifted to a more acidic condition. I recognized some of the bacterial proteins involved as being similar to the repressor we used.

So I went out and did some experiments and determined that by dropping the pH, large amounts of the specific protein could be produced. Dropping some acid in a large vessel and stirring quickly can rapidly expose all the cells to the same conditions and induce protein production.

And with further adjustment of the conditions, up to 15 times more recombinant protein could be produced.

So, for me, the really important aspect of RSS/newsreaders is bringing browsing back. Every journal has newsfeeds now. I can typically go through several thousand titles in an hour, bookmark the ones I want to examine later and even post the links to a blog, where I can add comments.
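
For the curious, the mechanics of that hour are simple enough to sketch. Assuming journal table-of-contents feeds (the URLs below are placeholders) and the third-party feedparser library, a few lines can skim every new title and append the keepers to a file that plays the role of the old note cards; a real setup might post them to a blog instead.

    import feedparser  # third-party: pip install feedparser

    JOURNAL_FEEDS = [
        "http://example.com/journal-of-bacteriology/toc.rss",  # placeholder
        "http://example.com/another-journal/toc.rss",          # placeholder
    ]
    KEYWORDS = {"repressor", "recombinant", "acid stress"}

    # Append any title containing a keyword, plus its link, to a
    # plain-text "note card" file for later reading.
    with open("notecards.txt", "a") as cards:
        for url in JOURNAL_FEEDS:
            for entry in feedparser.parse(url).entries:
                title = entry.get("title", "")
                if any(k in title.lower() for k in KEYWORDS):
                    cards.write(title + "\t" + entry.get("link", "") + "\n")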

My blog becomes my online note card file for interesting articles.


As always

cats by tanakawho
Digital intimacy:
[Via Bench Marks]

Recently, the NY Times had an article discussing the concept of “ambient awareness”, or as the article puts it, “incessant online contact”. Now, first off, I have to admit that I’m one of the over-30-year-olds the article mentions, who finds the concept of subjecting others to (and being subjected to) a stream of trivial details about one’s day completely unappealing. The proponents of Twitter and FriendFeed and the like feel that they’re getting a more intimate understanding of people, “something raw about my friends,” as one user puts it. I’m more in line with the critics quoted in the article that the end result is more “parasocial” than social, and that it ends up an extension of reading gossip magazines and following celebrities from afar.
So how do these new practices apply to the world of science research?
[More]

David always brings up really good points to discuss. I don’t expect every scientist will want or need to be a direct part of the ‘conversation’ happening on Twitter or FriendFeed. Few have the time. But it will be important that the social network (i.e. lab, department, etc.) they belong to includes people who are connected.

These tools are rapidly becoming a part of how human communities disperse information. This decreases the diameter of a social network tremendously, meaning information of every type has to traverse fewer nodes.
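
The effect is easy to demonstrate. Here is a sketch using the third-party networkx library with invented numbers: 60 people who each know only their four nearest colleagues, then a handful of random ‘online’ shortcuts standing in for social-media connections.

    import random
    import networkx as nx  # third-party: pip install networkx

    random.seed(1)
    # A ring lattice: each of 60 people knows their 4 nearest colleagues.
    G = nx.watts_strogatz_graph(n=60, k=4, p=0.0)
    print("diameter before:", nx.diameter(G))  # long chains between people

    # A few members join an online network, creating long-range links.
    for _ in range(6):
        a, b = random.sample(range(60), 2)
        G.add_edge(a, b)
    print("diameter after:", nx.diameter(G))   # typically much smaller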

Research networks that normally involve publications, seminars, conferences, etc. will also include these social media approaches, because labs that remain unconnected will not be able to compete with labs that do use these tools to decrease the diameter of their sphere of collaborations and find out about relevant information faster.

These tools are just part of finding out what is happening in relevant fields. I’ll give an example of how these tools can help move information in ways not possible before.

I had looked a little bit at FriendFeed but just did not have the time to really dig. Then I noticed that there were a lot of hits at my website that were being referred from the Science 2.0 room.

Turns out they were having a conversation about my site and were asking a lot of questions, trying to get an idea of who I was, my reputation, etc. Seeing the conversation, I quickly joined and helped answer questions. Now I am part of a group I can check in on every so often that does a great job finding and providing information I find useful.

Like any social setting, I introduced myself, answered some questions and provided insight. Now I am connected to a group that provides very useful information for me. I don’t have to check it constantly to see useful items that I would not have seen if I were not part of this particular conversation.

Human social networks are exceptional filters of information. The huge amounts of information being created today require human networks to help filter and disperse the info. These tools are simply one part.

All that is really necessary is for a scientist to be part of a research network, even just a lab, in which someone is connected to these online sites. What is important is the rate at which this information diffuses throughout the group, not that everyone in the group is connected to Twitter.

Each person in a network often has their own role, their own diverse viewpoint that helps the group. The best tools will be ones that allow people to use them for their own purposes and needs. They do not work by forcing everyone to join.

But they do work by spreading information farther and faster.


Making Friends

friends by jurvetson

I wanted to bring my personal perspective to the 5 steps people go through while adopting a new technology. It has to do with FriendFeed.

I have been aware of FriendFeed for several months, but never did much with it. I was not really sure what it provided, and I just did not have the time to explore it. But my interest built up as I saw more of the scientists whose newsfeeds I subscribe to begin to discuss their experiences with it. My interest increased seeing the mashups that were developing – such as the widgets that could connect a blog with FriendFeed comments, etc.

But I was still too busy and I was not sure if it was worth the time to figure out the best way to use it, what was required, etc. So my progression through the first stages was a little slow as I still did not really see how it would help me. There were no ‘local’ authorities of mine that had adopted it.

Then, just a few days ago, I got a lot of hits where the referrer was a specific FriendFeed page about Science 2.0, on which the website was being discussed. In fact, there was quite a conversation going on, one that I had to join. Now I began to see what could be really useful about FriendFeed.

So I actually raced through the last 2 steps very fast. Trial took about 2 minutes, since FriendFeed is pretty straightforward, and I was congratulating myself on reaching the adoption stage even as I was writing my second comment.

All this would suggest that I am an early adopter, not an innovator, which is what I expected. I needed some interactions with members of the community rather than hearing about it from outside experts.

But this also indicates just how rapidly a new innovation can move if it finds the right path, especially when there are conversations happening and information being exchanged.

People will adopt a new innovation really fast if there is a conversation about them or their research interest, and they want to be a part of the conversation. I would expect most scientists would plow right through the latter stages of the 5 steps if their research was directly influenced by the conversation.


A five step process

I’ve mentioned some of the work by Everett Rogers on technology adoption. The familiar bell curve describes the adoption of innovations by a community. But what about individuals? Is there a process whereby they adopt new technology?

Turns out there is. You can read the work by George Beal and Joe Bohlen from 1957. There is a five-step path that each individual appears to go through, although some people are slower to transition between steps.

  1. Awareness. The individual is simply aware the innovation exists.
  2. Interest. The individual wants more information. They begin to wonder if the innovation can help them.
  3. Evaluation. The individual mentally examines the innovation using the information gathered, trying to determine whether it will really impact their work.
  4. Trial. The individual actually tests the innovation to see if reality matches expectations.
  5. Adoption. The individual likes the innovation and adopts it wholeheartedly.

Beal and Bohlen also described what sources of information were used at each stage. Through the first two, mass media and government agencies were most important.

This was really an attempt to get an ‘unbiased’ viewpoint, since friends and salesmen (salesmen always came in last) were the next two sources. But for the last 3 stages, neighbors and friends were the largest source of information, more so than any other group.

So, early in the diffusion process, unbiased experts are sought. But when the evaluation process starts, the experiences of close ties within a local social network become the most important. For most people, the opinions and personal experiences of their friends matter more for adoption of a new innovation than any external source.


[Image: diffusion of innovation bell curve]


Now the innovators in a community race through these steps. They often are connected to outside groups and use social interactions unavailable to others in the community to more rapidly move through the last 3 steps.

The early adopters take information from the innovators and use their own connections to move through the stages, not as fast as the innovators, but with reasonable speed.

But it is the majority of the community that relies on the early adopters and innovators within the community to inform themselves. Research has shown that they require much more information from trusted sources within the community than innovators and early adopters do. Without this information from peers, they will not progress rapidly through the last 3 stages.

The laggards are the slowest to move through the 5 stages. They do not trust most outside sources, so the awareness and interest stages are slowed. Plus they will only listen to certain trusted sources within the community. Until those trusted sources make their own way through the 5 stages, the laggards will not progress.

So, to alter the rate of diffusion of innovation in a community, increased lines of communication must be available, increasing the information that can be provided to individuals. This helps with the first 2 steps, but mostly only for the 16% of the community at the left side of the curve.

However, of greatest importance are the connections between members within the community, particularly the thought-leaders found among the early adopters. About 70% of a community will not adopt new innovations unless they hear clear reasons why from trusted individuals within the community.

No amount of salesmanship or external proof will easily move them. But, given the right opinion from a community thought-leader, they will rapidly make the transition.
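
One common way to make this dynamic concrete (not from Beal and Bohlen, just a standard threshold model with invented numbers) is to simulate a community where a clustered seed of innovators adopts on its own and everyone else adopts only after enough of their trusted contacts have. Add more links between members and the cascade completes sooner.

    import networkx as nx  # third-party: pip install networkx

    # A small-world community of 100 people, 6 trusted contacts each.
    G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=2)
    adopted = set(range(5))  # a clustered seed of innovators
    THRESHOLD = 0.3          # adopt once 30% of your contacts have

    step = 0
    while True:
        # Everyone checks what fraction of their contacts has adopted.
        new = {n for n in G.nodes
               if n not in adopted and G.degree(n) > 0 and
               sum(nb in adopted for nb in G[n]) / G.degree(n) >= THRESHOLD}
        if not new:
            break  # no one else is persuaded; the cascade has stopped
        adopted |= new
        step += 1
        print("step %d: %d adopters" % (step, len(adopted)))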

This is an area where Web 2.0 technologies can be of real value. Not only do they make it easier for members of a community to disperse information, they also help the community more accurately identify who is in each group, permitting more focused, explicit approaches to be used to move individuals through the 5 steps.

The thought-leaders can more rapidly progress through the stages and can extend their opinions much more rapidly to the majority because they are not required to be in the same place at the same time as the others in the group. Thus there will be more opportunities for their viewpoints to be assimilated by the majority.

Increasing the rate of diffusion of innovation in a community really means increasing the speed with which each individual progresses through the 5 steps.
