Tag Archives: Open Access

Maybe because Alan Mulally actually has built things

ford mustang by stevoarnold

Alan Mulally — Making Ford a Model for the Future
[Via HarvardBusiness.org]

Almost exactly a year ago, I wrote an article about why Ford has the potential to become a company of the future. It had just come off reporting a $14.6 billion loss for 2008, its fourth losing year in a row.

One year later, Ford reported a profit of $2.7 billion. Yesterday the company reported March sales up 40 percent. GM, by contrast, was up 22 percent, and Chrysler was down 8.3 percent.

There are many reasons Ford has achieved such an extraordinary turnaround since Alan Mulally took over as CEO in 2006. After observing him in action, talking with him and spending time with his senior team, I’m convinced Mulally is taking an old-school industrial company and turning it into a model of how a modern company ought to be run.

[More]

Perhaps because Mulally is an engineer who actually built things at Boeing, rather than just a sales/marketing MBA, he has a firm understanding of how to get people to do creative things, even at an automobile manufacturer.

Innovation, and the creativity that drives it, does not come from short term metrics and 9-5 mentalities. Mulally had a huge influence on Boeing’s success against Airbus and is now doing something similar with Ford.

I wrote about some of these approaches before. It looks like Mulally has continued on this path.

Some we have heard before. ‘Rally around a mission.’ ‘Long-term strategic planning.’ ‘Be fearless.’

All great aphorisms but execution is what makes them work. Observe how he creates a culture of truth-telling and transparency:

Finally, Mulally has created a culture in which telling the truth, however painful it may be, gets rewarded. Every Thursday morning, he presides over what he calls a “Business Plan Review.” The heads of Ford’s four profit centers around the world and its 12 functional areas gather to report on how well they’re meeting their targets and on any problems they’re having. They’re all in together.

To broaden transparency, Mulally invites outside guests to sit in on the meeting each week. The day I was there, one Ford executive described a significant shortfall on a key projection. No one cringed, including Mulally, and the executive calmly outlined his suggested solutions. Then he invited others to share their ideas.

Not only does he get everyone working on problems together, modeling his own approach of finding solutions rather than assigning blame, but he also includes outsiders with no ax to grind or domain to defend. These observers provide a perspective that keeps the focus on finding answers.

And I bet they often ask naive questions that can sometimes explode into creative ideas.

I think that they have a great chance to adapt to the changing markets in ways others cannot.

The world’s oldest profession provides modern insights

sugarloaf by Paul Mannix

Brazilian hooker-john hookups used for network analysis
[Via Ars Technica]

Modern communication networks, such as cell phone systems and the Internet, have provided researchers with the opportunity to study human associations and movement on a much greater scale than previously possible. Almost all of the papers that describe this sort of network analysis note that it could have real world applications, since existing and emerging disease threats can spread through social and transit networks. A paper that will be released later this week by PNAS, however, skips the whole “this may be a useful model” aspect, and goes straight to a network in which diseases actually do spread: prostitutes and their clients.

Although organized prostitution is apparently illegal in Brazil, there are no laws against receiving payment for sex, making it possible for sex workers to freelance. Like everything else these days, that trade has found its way onto the Internet, and some enterprising Brazilians created an ad-supported public forum for individuals on both sides of the transaction. The forum is heavily moderated to keep it strictly on-topic: sellers (aka prostitutes) can advertise their business, and those that partake can rate the experience, as well as provide some information about the precise services rendered (the focus was strictly on heterosexual prostitution in this system).

[More]

Using the data generated by Web 2.0 technologies, these researchers have been able to garner a lot of insight into a very large social network that has existed for some time.

This looks like it will be a pretty interesting article – Information dynamics shape the sexual networks of Internet-mediated prostitution. And you can download it for free.

These online forums map very well onto the corresponding real-world social networks, providing a nice insight into how the networks are set up and how something like a disease might progress through them.

It is also a network that is highly optimized to move information around – who is the best for doing whatever at whichever price. It is also a very large network, so they were able to identify some other interesting characteristics.

Social networks also change over time. Because they had six years’ worth of data, the researchers could examine how the contacts changed over time. They found that there were still very large connected networks at all times, with a minimum of 71% of the people being connected in the network.

There were over 10,000 buyers and about 6,600 sellers. The average number of jumps between buyers was about 5.8 (those six degrees of separation), while it was smaller for sellers (about 4.9). Also interesting was the high number of what are called four-cycles: a set of connections that ends where it starts. In ordinary social networks, the classic short cycle is a triangle, created when a mutual friend introduces two people. In a buyer-seller network, where triangles cannot form, the four-cycle plays the analogous role. This seems to make sense to me: someone who has found a great prostitute telling his friends, for example.
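Four-cycles in a buyer-seller network can be counted directly: every four-cycle is a pair of sellers sharing a pair of buyers. Here is a minimal sketch in Python on a hypothetical toy graph (the names and edges are made up for illustration; the paper’s actual dataset is vastly larger):

```python
from itertools import combinations

# Hypothetical toy bipartite graph: buyers b1..b3, sellers s1..s2.
# Each edge is (buyer, seller) -- illustrative only, not the paper's data.
edges = {("b1", "s1"), ("b1", "s2"), ("b2", "s1"), ("b2", "s2"), ("b3", "s1")}

# Build adjacency: seller -> set of buyers who contacted that seller
adj = {}
for buyer, seller in edges:
    adj.setdefault(seller, set()).add(buyer)

def count_four_cycles(adj):
    """In a bipartite graph, every 4-cycle is two sellers with two common
    buyers, so sum C(common, 2) over all seller pairs."""
    total = 0
    for s1, s2 in combinations(adj, 2):
        common = len(adj[s1] & adj[s2])
        total += common * (common - 1) // 2
    return total

print(count_four_cycles(adj))  # 1, the cycle b1-s1-b2-s2-b1
```

The same count could be run over the real forum data by substituting the actual edge list; only the adjacency-building step depends on the data format.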

Another interesting aspect of the network, and one that has implications for disease spread, is that it was slightly disassortative. In a highly assortative network, highly connected members also tend to connect to each other. In a disassortative network, highly connected members tend to connect to less highly connected members.

The data suggest that for this network the most active buyers, those with the most connections to prostitutes, tended to connect to prostitutes that were less active in the network (i.e. fewer connections). And the most popular sex workers tended to connect to buyers that were not actively seeking out other prostitutes.

This actually creates a network where a disease outbreak is less likely to arise, but when one does occur, it can spread to a larger part of the network.

Another intriguing observation they made is that, on a log-log plot, the number of sex workers and buyers increases linearly with the size of the city. For many things (such as wealth or information workers), the trend is greater than linear, because larger cities provide greater benefits. Linear scaling holds for things that are necessities, such as water or power.

Normally, prostitution requires face-to-face interactions, so being in a big city, with its increasingly large social networks, makes it easier to find a prostitute, and being in a small town makes it harder. But the online forum removes that need, so small towns can now do just as well as large ones, bringing prostitution down to the scaling behavior of human necessities.
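The linear-scaling claim corresponds to a slope of about 1 on the log-log plot. A quick sketch, using made-up city populations and counts that are exactly proportional, shows how the scaling exponent is recovered with a least-squares fit on log-transformed data:

```python
import math

# Hypothetical data (illustration only): counts exactly proportional
# to population, i.e. y = x / 500, which is linear scaling (exponent 1).
populations = [10_000, 50_000, 200_000, 1_000_000]
worker_counts = [20, 100, 400, 2_000]

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) vs log(x); for a power law
    y = c * x**b this recovers the exponent b."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
    den = sum((x - mx) ** 2 for x in lx)
    return num / den

print(loglog_slope(populations, worker_counts))  # ~1.0, i.e. linear scaling
```

A superlinear quantity like wealth would produce an exponent above 1 on the same fit; real data would of course scatter around the fitted line rather than sit exactly on it.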

Pretty nice examination of a somewhat specialized human social network, one that could only really be studied because of Web 2.0 technologies.

Getting news in the mobile connected world

So, I’m driving to the nearby Barnes and Noble to use their Wifi and get some work done. Plus I get a discount on their coffee. I get a voicemail on my iPhone from my Mom saying she hopes I’m not in downtown Seattle, that it looks like a real mess.

Not having a clue what she was talking about, I checked Google News. I found a couple of articles like this one, about a man wandering around near the Courthouse with some sort of device on his arm. The police had him in custody and were examining the device.

Then I ran across this article which quoted a Police tweet about the incident:

In a tweet, Seattle police said, “Adult male in 300 block of James has made general threats against persons and property. He has taped an unknown device to his left hand.”

Whoa. I had not thought about that at all. You can follow the whole incident on their Twitter page! Here is a screenshot of the feed so far:


seattle pd twitter

Jeez. They have a picture of the device online already! Who would really have thought five years ago that information about something like this would not only be readily available, but that organizations such as the police would be on the front lines of providing it? We no longer need to wait for the evening newscast or the next day’s paper to get informed.

And as I finish this, the Twitter feed states that the downtown streets have been reopened.

Filters lead us to wisdom

filters by aslakr
[2b2k] Clay Shirky, info overload, and when filters increase the size of what’s filtered
[Via Joho the Blog]

Clay Shirky’s masterful talk at the Web 2.0 Expo in NYC last September — “It’s not information overload. It’s filter failure” — makes crucial points and makes them beautifully. [Clay explains in greater detail in this two part CJR interview: 1 2]

So I’ve been writing about information overload in the context of our traditional strategy for knowing. Clay traces information overload to the 15th century, but others have taken it back earlier than that, and there’s even a quotation from Seneca (4 BCE) that can be pressed into service: “What is the point of having countless books and libraries whose titles the owner could scarcely read through in his whole lifetime? That mass of books burdens the student without instructing…” I’m sure Clay would agree that if we take “information overload” as meaning the sense that there’s too much for any one individual to know, we can push the date back even further.

[More]

David Weinberger has been one of my touchstones ever since I read The Cluetrain Manifesto. I cried when I read that book because it so simply rendered what I had achingly been trying to conceptualize.

Dealing with information glut today leverages an old way of doing things in a new way. It uses synthesis rather than analysis. Analysis gave us the industrial revolution. Breaking the complex down into small understandable bits allowed us to create the assembly line that could put together our greatest creations, such as the Space Shuttle, with more than 2.5 million parts.

Yet a single O-ring can destroy the whole thing.

Synthesis brings facts together and allows us to see them in new ways. But to attack the really complex problems of today, we need synthesis from a wide range of viewpoints, each providing its own filter. As in the story of the blind men and the elephant, no one person has all the information, but a synthesis of everyone’s information provides a reasonable approximation.

David discusses this view:

A traditional filter in its strongest sense removes materials: It filters out the penny dreadful novels so that they don’t make it onto the shelves of your local library, or it filters out the crazy letters written in crayon so they don’t make it into your local newspaper. Filtering now does not remove materials. Everything is still a few clicks away. The new filtering reduces the number of clicks for some pages, while leaving everything else the same number of clicks away. Granted, that is an overly-optimistic way of putting it: Being the millionth result listed by a Google search makes it many millions of times harder to find that page than the ones that make it onto Google’s front page. Nevertheless, it’s still much much easier to access that millionth-listed page than it is to access a book that didn’t make it through the publishing system’s editorial filters.

It is through synthesis that new technologies allow us to deal with the information glut. And this synthesis necessarily involves human social networks, because humans are exquisitely positioned to filter out noise and find the signal.

I’ve discussed the DIKW model. Data simply exists. Information happens when humans interact with the data. Transformation of information, both tacit and explicit, produces knowledge, which is the ability to make a decision, to take an action. Often that action is to start the cycle again, generating more data and so on.

This can be quite analytical in approach as we try to understand something. But the final link in the cycle, wisdom, is the ability to make the RIGHT decision. This necessarily requires synthesis.

New technologies allow us to deal with much more data than before, generate more information and produce more knowledge. However, without synthetic approaches that bring together a wide range of human knowledge, we will not gain the wisdom we need.

Luckily, the same technologies that produce so much data also provide us with the tools to leverage our interaction with knowledge. If we create useful social structures, ones that properly synthesize knowledge and employ human social networks as great filters, then we can more rapidly complete the DIKW cycle and take the correct actions.




Updated: Short answers to simple questions

fail by Nima Badiey

NIH Funds a Social Network for Scientists — Is It Likely to Succeed?

[Via The Scholarly Kitchen]

The NIH spends $12.2 million funding a social network for scientists. Is this any more likely to succeed than all the other recent failures?

[More]

Fuller discussion:

In order to find an approach that works, researchers often have to fail a lot. That is a good thing. The faster we fail, the faster we find what works. So I am glad the NIH is funding this. While it may have little to be excited about right now, it may get us to a tool that will be useful.

As David mentions, the people quoted in the article seem to have an unusual idea of how researchers find collaborators.

A careful review of the literature to find a collaborator who has a history of publishing quality results in a field is “haphazard”, whereas placing a want-ad, or collaborating with one’s online chat buddies, is systematic? Yikes.

We have PubMed, which allows us to rapidly identify others working on research areas important to us. In many cases, we can go to RePORT to find out what government grants they are receiving.

The NIH site, as described, also fails to recognize that researchers will only adopt a tool if it helps their workflow or gives them a capability they cannot get any other way. Facebook succeeds because it lets people make online connections with others they would have no other way to find.

But we can already find many of the people we would need to connect to. What will a scientific Facebook have that would make it worthwhile?

Most social networking tools initially provide something of great usefulness to the individual. Bookmarking services like CiteULike allow you to access and sync your references from any computer. Once someone begins using a service for this purpose, the added uses from social networking (such as finding other sites through the bookmarks of others) become apparent.

For researchers to use such an online resource, it has to provide them new tools. Approaches, like the ones being used by Mendeley or Connotea, make managing references and papers easier. Dealing with papers and references can be a little tricky, making a good reference manager very useful.

Now, I use a specific application to accomplish this, which allows me to also insert references into papers, as well as keep track of new papers that are published. Having something similar online, allowing me access from any computer, might be useful, especially if it allowed access from anywhere, such as my iPhone while at a conference.

If enough people were using such an online application, then added Web 2.0 approaches could be layered on to enhance the tools. Perhaps this would supercharge the careful literature reviews that David mentions, allowing us to find things or people that we could not find otherwise.

There are still a lot of caveats in there, because I am not really convinced yet that having all my references online really helps me. So the Web 2.0 aspects do not really matter much.

People may have altruistic urges, the need to help the group. But researchers do not take up these tools because they want to help the scientific community. They take them up because they help the researcher get work done.

Nothing mentioned about the NIH site indicates that it has anything that I currently lack.

Show me how an online social networking tool will get my work done faster and better, in ways that I cannot accomplish now. Those will be the sites that succeed.


[UPDATE: Here is post with more detail on the possibilities.]

Mashing up

200910202344.jpg by foodistablog

One of the great things about openness and transparency is the ability for people to mash together various things to suit themselves. So, look at this:

Listening to: Death of an Interior Decorator from the album “Transatlanticism” by Death Cab For Cutie.

I added that with a single click in ecto, the blog editing software I use to create and publish posts. Ecto has a nice add-on that grabs the info from the song I am listening to and puts it in the post. I can set up templates with formatting so it has the links, etc. But the original template created Google search links. I simply remade the template so it links to iTunes.

I’m doing the same thing with Twitterfeed. This has allowed me to push blog posts from my different blogs (Spreading Science, Path to Sustainable and A Man with a PhD). Now I’m seeing if I can push posts to my Facebook account.

So, a simple posting can also copy the post to both Twitter and Facebook. It looks like I do a lot, but it all comes from simply clicking one button. That is what open APIs and other aspects of the web allow us to do.

It all makes it easier for the right people to get the right information at the right time.