No comment

At the risk of beating the issue to death, I offer yet another post on the question, “why don’t scientists comment on scientific articles?” Previous reflections stood within the larger context of scientific impact and article-level metrics, and I’ve also attempted some superficial analysis of commenting behavior at PLoS, BMJ, and BMC. More recently (and this is why the topic is on my mind again), a room full of bright minds at the PLoS Forum (including Cameron Neylon and Jon Eisen) scratched their heads over it and came up with pretty much the same conclusion as everyone else who’s ever thought about the problem — the costs simply outweigh the benefits.

The costs, in principle, are minimal. You might need to register for an account at the journal website and be logged on, but then all that’s needed is little more than what most of us already do multiple times a day with our email — type into a box and click “submit”. (In practice, there may be nonsensical, hidden costs that make you wonder what the folks at those journals were smoking.) So the perception that the cost-benefit equation doesn’t work speaks more to the lack of benefit than anything else.

Photo by jamesclay on flickr

A brief analysis of commenting at BMC, PLoS, and BMJ

As announced on FriendFeed and Twitter, a writing collaboration between me and the inimitable Cameron Neylon has just been published at PLoS Biology, “Article-level metrics and the evolution of scientific impact”! (Loosely based on a blog post from several months ago.)

One of the many issues Cameron and I touched on was the problem of commenting. Most people probably aren’t aware of the problem; after all, commenting is alive and well on the internet in most places you look! But click over to PLoS or BioMed Central (BMC) and the comment sections are the digital equivalent of rolling tumbleweed.

As we mention briefly in the article, comments have great potential for improving science. For one thing, they’re a form of peer review, but without the month-long wait and seemingly arbitrary review criteria. Readers, authors, and other evaluators can also get a sense of what people think about the article. The ideal is certainly tantalizing — vigorous, rigorous debates over the finer scientific points as well as the overarching conclusions, with participation from both experts in the field and informed laypeople, always with intelligence and civility!!!1!11!!one!! But let’s not kid ourselves — the worst-case scenario is all too easy to imagine and would probably look something like the discussions over at YouTube.

And this would be positively urbane. (From PhD comics)

In memoriam: Warren DeLano



PyMOL has starred in many journal covers

On Tuesday, November 3rd, the scientific community suffered a great loss with the passing of Warren DeLano. Most people know him as the creator of PyMOL, a popular and extremely powerful molecular visualization tool, but most – including myself, until recently – may not know all of the other unique qualities that made Warren a mentor, collaborator, inspiration and friend to many. And by making PyMOL open source, Warren demonstrated his generosity and ensured that his work would continue to help future generations of scientists.

The evolution of scientific impact

Photo by cudmore on Flickr

In science, much significance is placed on peer-reviewed publication, and for good reason. Peer review, in principle, guarantees a minimum level of confidence in the validity of the research, allowing future work to build upon it. Typically, a paper (the current accepted unit of scientific knowledge) is vetted by independent colleagues who have the expertise to evaluate both the correctness of the methods and perhaps the importance of the work. If the paper passes the peer-review bar of a journal, it is published.

Measuring impact

For many years, publications in peer-reviewed journals have been the most important measurement of someone’s scientific worth. The more publications, the better. As journals proliferated, however, it became clear that not all journals were created equal. Some had higher standards of peer-review, some placed greater importance on perceived significance of the work. The “impact factor” was thus born out of a need to evaluate the quality of the journals themselves. Now it didn’t just matter how many publications you had, it also mattered where.

But, as many argue, the impact factor is flawed. Calculated as the average number of citations per “eligible” article over a specific time period, it is a poor summary statistic because the actual distribution of citations is heavily skewed (an editorial in Nature by Philip Campbell noted that just 25% of articles account for 89% of the citations). Journals can also game the system by adopting selective editorial policies to publish articles that are more likely to be cited, such as review articles. At the end of the day, the impact factor is not a good proxy for the impact of an individual article, and focusing on it may be doing science – and scientists – a disservice.
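To make the skew concrete, here is a toy sketch in Python (the citation counts are invented, not data from any real journal) showing how the mean that underlies the impact factor can be dominated by a handful of heavily cited papers:

```python
# Hypothetical citation counts for 20 articles in one journal over two years.
# Deliberately skewed: a few highly cited papers, many rarely cited ones.
citations = [120, 85, 60] + [3, 2, 2, 1, 1, 1, 1, 1] + [0] * 9

impact_factor = sum(citations) / len(citations)   # mean citations per article
median = sorted(citations)[len(citations) // 2]   # what a typical article gets

top_quarter = sorted(citations, reverse=True)[: len(citations) // 4]
top_share = sum(top_quarter) / sum(citations)     # citations from the top 25%

print(f"impact factor (mean): {impact_factor:.2f}")  # 13.85
print(f"median citations:     {median}")             # 1
print(f"top-25% share:        {top_share:.0%}")      # 97%
```

The mean looks respectable even though the typical article here earns a single citation – the same pathology as the real-world skew Campbell described.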

In fact, any journal-level metric will be inadequate at capturing the significance of individual papers. While few dispute the possibility that high journal impact factors may elevate some undeserving papers while low impact factors may unfairly punish perfectly valuable ones, many still feel that the impact factor – or more generally, the journal name itself – serves as a useful, general quality-control filter. Arguments for this view typically stem from two things: fear of “information overload”, and fear of risk. With so much literature out there, how will I know what is good to read? If this is how it’s been done, why should I risk my career or invest time in trying something new?

What is clear to me is this – science and society are much richer and more interconnected now than at any time in history. There are many more people contributing to science in many more ways now than ever before. Science is becoming more broad (we know about more things) and more deep (we know more about these things). At the same time, print publishing is fading, content is exploding, and technology makes it possible to present, share, and analyze information faster and more powerfully.

For these reasons, I believe (as many others do) that the traditional model of peer-reviewed journals should and will necessarily change significantly over the next decade or so.

Article-level metrics at PLoS

The Public Library of Science, or PLoS, is leading the charge on new models for scientific publishing. Now a leading Open Access publisher, PLoS oversees about 7 journals covering biology and medicine as well as PLoS ONE, on track to become the biggest single journal ever. Papers submitted to PLoS ONE cover all areas of science and medicine and are peer-reviewed only to ensure soundness of methodology and science, no matter how incremental. So while almost every other journal makes some editorial judgment on the perceived significance of papers submitted, PLoS ONE does not. Instead, PLoS ONE leaves it to the readership to determine which papers are significant through comments, downloads, and trackbacks from online discussions.

Now 2 1/2 years old, PLoS ONE boasts thousands of articles and a lot of press. But what do scientists think of it? Clearly, enough think highly of it to serve on its editorial board or as reviewers, and to publish in it. Concerns that PLoS ONE constituted “lite” peer review seem largely unfounded, or at least outdated. Indeed, there are even tales of papers getting rejected from Science or Nature for perceived lack of significance, getting published in PLoS ONE, and then getting picked up by Science and Nature’s news sections.

Yet there is still feeling among some that publishing in PLoS ONE carries little or no respectability. This is due in part to a misconception of how the peer review process at PLoS ONE actually works, but also in part because many people prefer an easy label for a paper’s significance. Cell, Nature, Science, PLoS Computational Biology – to most people, these journals represent sound science and important advances. PLoS ONE? It may represent sound science but it’s up to the reader to decide whether any individual paper is important.

Why is there such resistance to this idea? One reason may be tied to time and effort to impact: while citations always have taken some time to build up, a journal often provides a baseline proxy for the significance of a paper. A publication in Nature on your CV is an automatic feather in your cap, and easy for you and for your potential evaluators to judge. Take away the journal, and there is no baseline. For some, this is viewed as a bad thing; for others, however, it’s an opportunity to change how publications – and people – are evaluated.

Whatever the zeitgeist in particular circles, PLoS is clearly forging ahead. PLoS ONE’s publication rates continue to grow, such that people will eventually have to pay attention to papers published there even if they pooh-pooh the inclusive – but still rigorous – peer review policy. Recently, PLoS announced article-level metrics, a program to “provide a growing set of measures and indicators of impact at the article level that will include citation metrics, usage statistics, blogosphere coverage, social bookmarks, community rating and expert assessment.” (This falls under the broader umbrella of ‘post-publication peer review’.) Just how this program will work is a subject of much discussion, and certain metrics may need a lot of fine-tuning to prevent gaming of the system, but the growing consensus, at least among those discussing it online, is that it’s a step in the right direction.
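As a sketch of what such a program might collect per article – the field names and numbers below are invented for illustration, not the actual PLoS schema – each of the announced measures becomes one entry in a per-article record:

```python
# Invented per-article record mirroring the categories PLoS announced;
# field names and values are illustrative, not the real PLoS schema.
article_metrics = {
    "doi": "10.1371/journal.pone.0000000",  # placeholder identifier
    "citations": 14,       # citation metrics
    "html_views": 3200,    # usage statistics
    "pdf_downloads": 410,
    "blog_posts": 3,       # blogosphere coverage
    "bookmarks": 27,       # social bookmarks
    "comments": 5,         # community rating and discussion
}

def engagement_per_1000_views(m):
    """One possible derived indicator: non-citation engagement per 1000 views."""
    engagement = m["blog_posts"] + m["bookmarks"] + m["comments"]
    return 1000 * engagement / m["html_views"]

print(f"{engagement_per_1000_views(article_metrics):.2f}")  # 10.94
```

The interesting (and contentious) design question is how – or whether – to collapse such a record into a single ranking, which is where the gaming concerns come in.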

Essentially, PLoS believes that the paper itself should be the driving force for significance, not the vehicle it’s in.

The trouble with comments

A major part of post-publication peer review such as PLoS’s article-level metrics is user comments. In principle, a lively and intelligent comment thread can help raise the profile of the article and engage people – whether it be other scientists or not – in a conversation about the science. This would be wonderful, but it’s also wishful thinking; as anyone who’s read blogs or visited YouTube knows, comment threads devolve quickly unless there is moderation.



For community-based knowledge curation efforts (think Wikipedia), there is also a well-known 90-9-1 rule: 90% of people merely observe, 9% make minor or only editorial contributions, and 1% are responsible for the vast majority of original content. So if your audience is only 100 people, you’ll be lucky if even one of them contributes. Indeed, experiments with wiki-based knowledge efforts in science have been rocky at best, though things seem to be getting better. The big question remains:

But will the bench scientists participate? “This business of trying to capture data from the community has been around ever since there have been biological databases,” says Ewan Birney of the European Bioinformatics Institute in Hinxton, UK. And the efforts always seem to fizzle out. Founders enthusiastically put up a lot of information on the site, but the ‘community’ — either too busy or too secretive to cooperate — never materializes. (From a news feature in Nature last September on “wikiomics”.)
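Taking the 90-9-1 split at face value (a big assumption – real communities vary widely), the arithmetic behind the audience-of-100 point above is easy to sketch:

```python
def expected_contributors(audience, lurkers=0.90, editors=0.09, creators=0.01):
    """Split an audience by the 90-9-1 rule of participation inequality."""
    return {
        "lurkers": audience * lurkers,    # read but never contribute
        "editors": audience * editors,    # minor or editorial contributions
        "creators": audience * creators,  # the bulk of original content
    }

# An article read by 100 people yields, on average, a single creator,
# and that's before accounting for the effort a substantive comment takes.
for audience in (100, 2500):
    print(audience, expected_contributors(audience))
```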

Thus, for commenting on scientific articles, we have essentially two problems: encouraging scientists to comment, and ensuring that the comments have some value. An experiment with article commenting at Nature several years ago was deemed a failure due to lack of both participation and comment quality. Even now, while many see the fact that ~20% of PLoS articles have comments as a success, others see it as inadequate. Those I’ve talked to who are skeptical of the high-volume nature of PLoS ONE also tend to view their comments on papers as a highly valuable resource, one not to be given away for free in public but disclosed in private to close colleagues or leveraged for professional advancement through serving as a reviewer.

Perhaps the debate simply reflects different generational mindsets. After all, people are now growing up in a world where the internet is ubiquitous, sharing is second nature, and almost all information is free. Scientific publishing is starting to change, and so it is likely that current incentive systems will change, too. Yet while the gulf will eventually disappear, it is perhaps at its widest point now, with vast differences in social norms making any online discourse potentially fraught with unnecessary drama. As Bora Zivkovic mentions in a recent interview,

It is not easy, for a cultural reason, because a lot of scientist are not very active online and also use the very formalised language they are using in their papers. People who have been much more active online, often scientists themselves, they are more chatting, more informal. If they don’t like something they are going to say it in one sentence, not with seventeen paragraphs and eight references. So those two kinds of people, those two communities are eyeing each other with suspicion, there’s a clash of cultures. The first group sees the second group as rude. The second group views the first group as dishonest. I think it will evolve into something in the middle, but it will take years to get there.

When people point to the relative lack of comments on scientific papers, it’s worth remembering that online commenting hasn’t been around in science for very long. And just as it takes time for citations to start trickling in, it takes time to evaluate a paper in the context of its field. PLoS ONE is less than three years old. Bora notes, “It will take a couple of years, depends on the area of science until you can see where the paper fits in. And only then people will be commenting, because they have something to say.”

Brush off your bullshit detector

The last argument I want to touch on is that of journals serving as a filter for information. With millions of articles published every year, keeping up with the literature in your field can seem a daunting task. What should you read? In a sense, a journal is a classifier, taking in article submissions and publishing what it thinks are good and important papers. As with any classifier, however, performance varies and is highly dependent on the input. Still, people have come to depend on journals, especially ones with established reputations, to provide this service.

Now even journals have become too numerous for the average researcher to track (hence crude measures like the impact factor). So when PLoS ONE launched, some assumed that it would consist almost entirely of noise and useless science, if it could be considered science at all. I think it’s clear that that’s not the case; PLoS ONE papers are indeed rigorously peer-reviewed, many PLoS ONE papers have already had great impact, and people are publishing important science there. Well, they insist, even if there’s good stuff in there, how am I supposed to find what’s relevant to me out of the thousands of articles they publish every year? And how am I supposed to know whether the paper is important or not if the editors make no such judgment?

Here, I would like to point out the many tools available for filtering and ranking information on the web. At the most basic level, Google PageRank might be considered a way to predict what is significant and relevant to your search terms. But there are better ways. Subscribing to RSS feeds (e.g. through Google Reader) makes scanning lots of article titles quick and easy. Social bookmarking and collaborative filtering can suggest articles of interest based on what people like you have read. And you can directly tap into the reading lists of colleagues by following them on social sharing services like Facebook, FriendFeed, and Twitter, and on paper management software like Mendeley. I myself use a loose network of friends and scientific colleagues on FriendFeed and Twitter to find interesting content from journals, news sites, and blog posts. The bonus is that you also interact with these people – networking at its most convenient.
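To illustrate the idea behind collaborative filtering, here is a toy sketch: recommend papers saved by readers whose libraries overlap most with yours. All reader names and paper IDs are invented, and real services like Mendeley use far more sophisticated similarity measures:

```python
# Toy collaborative filter: score each paper I haven't saved by how much
# the libraries of the people who saved it overlap with mine.
libraries = {
    "me":    {"pone.0001", "pbio.0042", "pcbi.0007"},
    "alice": {"pone.0001", "pbio.0042", "pone.0099", "pcbi.0033"},
    "bob":   {"pgen.0010", "pone.0200"},
}

def suggest(user, libraries):
    mine = libraries[user]
    scores = {}
    for other, papers in libraries.items():
        if other == user:
            continue
        overlap = len(mine & papers)   # shared papers as a crude similarity
        for paper in papers - mine:    # candidates I haven't saved yet
            scores[paper] = scores.get(paper, 0) + overlap
    # Highest-scoring first: papers favored by the most similar readers.
    return sorted(scores, key=scores.get, reverse=True)

print(suggest("me", libraries))  # alice's unseen papers rank above bob's
```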

The point is that there is a lot of information out there, you have to deal with it, and there are more and more tools to help you deal with it. It’s no longer sufficient to depend on only one filter, and an antiquated one at that. It may also be time to take PLoS’s lead and start evaluating papers on their own. Yes, it takes a little more work, but I think learning how to evaluate papers critically is a valuable skill that isn’t being taught enough. In a post about the Wyeth ghost-writing scandal, Thomas Levenson writes:

… the way human beings tell each other important things contains within it real vulnerabilities.  But any response that says don’t communicate in that way doesn’t make sense; the issue is not how to stop humans from organizing their knowledge into stories; it is how to build institutional and personal bullshit detectors that sniff out the crap amongst the good stuff.

From nitot on Flickr

Although Levenson was writing about the debate surrounding science communication and the media, I think there’s a perfect analogy to new ways of publishing. Any response that says don’t publish in that way doesn’t make sense; the issue is not how to stop people from publishing, it is how to build personal bullshit detectors – i.e. filters. People should always view what they read with a healthy dose of skepticism, and if we stop relying on journals, or impact factors, or worse to do all of our vetting for us, we’ll keep that skill nicely honed. At the same time, we are not in this alone; leveraging a network of intelligent agents – your peers – will go a long way.

So continue leading the way, PLoS. Even if not all of the experiments work, we will certainly learn from them, and keep the practice and dissemination of science evolving for the times.

Can’t attend ISMB 2009? The next best thing.

One of the biggest scientific conferences each year is Intelligent Systems for Molecular Biology (ISMB), put on by the International Society for Computational Biology (ISCB). I had the pleasure of attending the conference in Toronto last year, meeting many familiar names in person and collaborating with a number of them to microblog the sessions. That latter activity was so successful that it caught the eyes of the conference organizers, and we were able to publish a paper in PLoS Computational Biology summarizing the conference.

Even better, the ISCB is embracing microblogging from the outset this year at its ISMB meeting in Stockholm, which is starting this weekend and will run until July 2. They will be auto-generating threads for each talk in the FriendFeed room for live coverage and open commentary and are advertising that fact prominently on the website for those interested in blogging the event. Their actions are in stark contrast to those of Cold Spring Harbor, who recently updated their policies to require bloggers and twitterers to register with CSH beforehand and get advance permission from each presenter they plan on covering.

Now that blogging, microblogging, and even twittering is becoming more commonplace, it behooves conference organizers to have an official policy. Even one that is restrictive is better than no policy, which can result in an awkward backlash when people on both sides are caught unawares. Clearly there is no one-size-fits-all approach, but for conferences that do not deal with sensitive material, an open and even actively encouraging stance such as the ISCB’s is certainly liberating for those of us who are drawn to these kinds of activities.

So if you can’t attend ISMB this year for whatever reason, you (and I) are in luck. They’re freely providing the next best thing – live microblogging and a searchable archive of posts (through FriendFeed). Even if you’ll be physically attending, your experience will be arguably better if you follow the FriendFeed room. Because there’s only one of you, but there are also many others like you.

So check it out, whether you’re there or not, and if you’re there, contribute a post or two! If you’re not there, you can still participate by commenting and asking questions. That’s the beauty of it – the benefits go both ways!

What type of open notebook science are you? (Plus, more logos)

Photo by sararah on Flickr

A scientist’s notebook is like an artist’s sketchbook crossed with a captain’s log. It can be extremely personal, and yet it is the definitive record for both day-to-day scientific research and higher-level brainstorming. It can be haphazardly disorganized or meticulously organized. But until electronic media came around, we were stuck with pasting pieces of paper alongside handwritten notes in stacks of bound notebooks or 3-ring binders – a pain not only to store but also to search through when you’re looking for how exactly you ran that particular experiment on that particular sample on that particular equipment.

While it’s not quite the norm yet, these days it’s not uncommon for people to use software such as wikis or journaling programs to record their everyday research activities. This has obvious advantages beyond legibility and saving trees; you can search your notes, link them to data files or figures, and back up multiple copies. You can tag and categorize entries, and the electronic files are automatically timestamped. Wikis, in particular, include versioning, so that any modifications you make to an entry are also recorded and timestamped.

These features should be a boon to any researcher, but there are some important “meta” benefits that can be yours (and ours) if you choose. Making things electronic lowers barriers to access and sharing. If you use a wiki or a blog to record your notes, you can choose to keep them online (useful for access from anywhere with an internet connection), and further, to make them public. At its logical extreme, this translates to “making the entire primary record of a research project publicly available online as it is recorded” along with all raw and processed data – the current definition of open notebook science (ONS) on Wikipedia. A number of scientists and labs practice and advocate ONS, including Jean-Claude Bradley at Drexel, Cameron Neylon at the ISIS Neutron Facility, and Gus Rosania at the University of Michigan. They argue that the benefits – both to themselves and to the scientific community at large – far outweigh the risks.

Complete ONS obviously isn’t for everyone, but regardless of whether the practice becomes widely adopted, we should now be able to designate certain labs or notebooks as satisfying the definition of ONS. We can even designate partial ONS – whether all or only part of the content is available, and whether the content is made available immediately or after some time delay (usually for IP or publication purposes). Jean-Claude Bradley has broken down these types of ONS into a set of claims inspired by Creative Commons licenses, along with initial logos created by Andy Lang.

The Creative Commons model is great for getting across the terms of your content quickly and unambiguously, so I am a big fan of this initiative. I would love to see more research notebooks online, and to see them displaying badges or banners identifying them as a type of ONS. I got so excited that I started making my own logos, which, happily, Jean-Claude and Andy Lang seem to like:

Two potential problems with these logos that I can think of are:

  • whether it reads as “ons” rather than o-n-s (in which case perhaps uppercase would help), and
  • the use of a beaker, which could feel exclusive to those outside the experimental or life sciences.

Incidentally, I made these images in Keynote (Apple’s version of PowerPoint), of all places. I simply couldn’t be bothered to fire up Adobe Illustrator with its bajillions of tools and palettes, and while I had to fudge a bit to get certain things to look right (my way of coloring in the beaker, for example, is hilariously crude), it was still pretty painless. Who knew?

I’ll be making more official mockups for Andy in the next day or two, so if anyone has additional feedback on these designs (or a different design entirely) I’d love to hear it!

I got a senior scientist blogging!

It’s already all over the intertubes by now, but I figured I should post it myself as well just to preserve it in my own blog archives:

My advisor, Russ Altman, and I won the “Get a senior scientist blogging” challenge sponsored by Nature!

Nature Network announced it today and there’s supposedly a press release as well. We’ve actually known for more than a week but had to keep it secret as they prepared the public announcement.

As our prize, Russ’s blog post on one of his first post-genomic moments will be in the Open Laboratory 2008 anthology and we will both get to go to SciFoo in August. Since we’re basically in Google’s back yard*, two other lucky people will get supported to attend SciFoo as well! Who doesn’t love 2-for-1 deals?

Some are curious, and indeed, so am I – what were some of the other front-runners? How many entries were received? Did the challenge actually get a significant number of scientists to start blogging? Either way, it would be nice to see some of the entrants just to add to my list of good science blogs.

Thanks to my blogging friends for the support, to Russ for taking my suggestion seriously, and to Nature Network for sponsoring the challenge! I’m definitely looking forward to SciFoo.

* At least, I hope to stay in Google’s back yard. We’ll see how the job search goes. By the way, can I put this on my resume??

Little trouble on the Big Island: thoughts on organizing a workshop and tourist notes


View from the lobby at the Fairmont Orchid hotel

As some of you know, I helped organize a workshop on Open Science at this year’s Pacific Symposium on Biocomputing. I was grateful to have as my co-chair a certain Cameron Neylon, who has spent far longer pondering the issues we were going to discuss and has organized similar meetings in the past. For this reason, Cameron took on the meat of the work for the workshop – writing the introduction for the proceedings, giving the introduction to the session, moderating the panel discussion, and presenting the highlights to the rest of the conference. As for me, I tried my best to be useful!

Others have summarized the workshop nicely so I won’t be doing that here. (If you’re interested in viewing any of the slides or talks, you can access a full list of them from the workshop’s media page.) Instead, I’ll just reflect a bit on what it was like to organize a workshop for the first time, and how I found the Big Island of Hawai’i (hint: it’s a truly unique place).

Hurry up and wait

Organizing something starting almost a year ahead of time was an interesting exercise, marked by short bursts of furious activity followed by long stretches of inactivity. For most of the year, I wondered if I was supposed to be doing anything! But the reality is often that there simply isn’t anything to do until certain times, and then you have to get a lot of things done quickly. We had about five flurry points: getting the proposal submitted, taking care of administrative stuff after the proposal was accepted, evaluating and responding to talk proposals, preparing the introduction for the proceedings, and running the actual workshop.

It’s all about the Benjamins

One very important task that didn’t quite fit the mold of hurry up and wait was fundraising. We started looking into this in the spring after our proposal was accepted, and didn’t really stop looking until early fall. One challenge was the nature of the workshop itself, which didn’t fit very neatly into existing categories. Most funding organizations have specific topic areas in which they will fund projects, but these tend to be domain-based, e.g. cancer or infectious disease. Others will fund individuals – minority scientists, for example – working in specific areas of research. We felt it might be difficult to obtain funding from the traditional granting agencies so we tried several companies and organizations associated with companies. Even with connections at some of them, our proposals didn’t get any consideration.

Our only successful proposal was to the Burroughs Wellcome Fund, which, ironically, is more of a traditional granting agency. I sent essentially identical proposals to two different topic areas, one of which was rejected twice – the program officer deleted it the first time thinking it was a conference invitation, and then formally rejected it the second time after I wrote again to inquire about it. (This was actually a problem I never really came up with a good solution for. What do you put as the subject line in an email so that it doesn’t sound like spam?) The other was the one that ended up getting funded. Nothing distinguished it from the other requests except where we sent it, and even here, I had to inquire more than once to make sure someone saw it.

In the end, the money had a huge impact. Not only did it allow me to go to the workshop (!) but we were able to fund several of our participants and had some money left over to cover an emergency (which happened, of course). Having even more money would have helped, because we could have used it to support Kaitlin Thaney from Science Commons. I learned several things:

  • Fundraising is one of the most important aspects of organizing an event
  • Look for funding in as many places as possible. Leverage your network but realize that it won’t always help you get money.
  • Apply to multiple places within one organization if you can. Sometimes the same proposal will get different consideration depending on whose desk it ends up on.
  • Be persistent. If you don’t hear back within a month, contact them again. Contact them at least twice, or until you get a response. If you get any leads at all, follow up – treat every lead like it’s the only one you might get. After all, it never hurts to have too much money, but you literally can’t afford not to have any.
  • (If applicable) Being a student has its pros and cons. On the one hand, you’re an unknown (and so might go straight into the trash folder); on the other hand, you present a sympathetic case. If you can actually get someone’s attention, they will probably understand that you really need the money and that you probably don’t have many other options. So again, be persistent, and emphasize the fact that you’re a student and the reason you need funds is for other students.

Murphy’s law is alive and well

Whatever can go wrong, will go wrong. And while rarely does everything go wrong, rarely does nothing go wrong. In our case, several of our headlining speakers, including the keynote, dropped out at the last minute for various reasons, and we had to scramble to find a replacement. The workshop would probably have been fine even if we hadn’t found one, but we were extremely fortunate that Phil Bourne agreed to participate on two days’ notice, as he contributed a great deal of legitimacy as well as content to the workshop.

We were very lucky that 1) Phil Bourne was free, 2) plane fares dropped very temporarily, 3) we had some funds left over, and 4) there were still affordable vacancies at the nearby Hilton, despite them hosting a conference of their own.

Speaking of things going right, we had very few technical difficulties, even with the live webcasts, so maybe Murphy was too busy relaxing on the beach to bother us during the actual workshop. ;)

Lava, lava, everywhere

Mountain goats next to the road

Love the contrast

Mauna Loa? or maybe Mauna Kea

Lava as far as the eye can see

Waipio valley

The Big Island of Hawai’i is strange and wonderful. Bigger than all of the other islands put together, it is still very small by continental standards: you can drive from west to east in about an hour and from north to south in about two hours. I imagine a 7-10 day trip would be just about the right amount of time to explore the Big Island.

Given that there are five volcanoes (two of which are still active), the landscape is extremely varied. The central west coast is covered in piles of broken up lava rocks, making it look like a giant had recently tilled the ground with a huge shovel. Mountain goats almost the same color as the rocks perch on some of these piles. Along the ocean, however, the black lava rocks mingle with pure white wave-worn coral. One charming consequence of this is that instead of using spray paint to make graffiti, people use the white coral rocks to “paint” words and pictures on the dark rocky embankments along the roads.

As you go north, the rock piles give way to green swaths and rolling hills, all gently sloping down toward the ocean. In the center of the island are the high summits of Mauna Loa and Mauna Kea, both of which sport snowy caps. In the valleys below them are broad expanses of young lava flows, shining black and rippled. When we visited this area, we alternately drove through thick fog, then bright sunshine and gusty winds, and finally rain.

On the northeast side of the island are the tropical forests and ridges, including the well-known Waipio valley. We got there too late to hike down into the valley, so we only went about a third of the way in, but with a greater-than-15% grade, it was still a workout!

At the southern end is Kilauea, the most active volcano on the island. Someone told me a story about how a small town was obliterated in the last big eruption, but some of the houses were completely spared. Lucky ducks, you think? Maybe not – it turns out those people couldn’t collect any insurance because there was no damage! Doesn’t matter if your house value just dropped to negative or if you can’t live there anymore because you’re surrounded by lava – the house is fine, so thanks for calling and have a nice life!

So much for volcano insurance

Anyway, Kilauea is still doing its thing, being hot and bothered and adding land mass to the island bit by bit. In the past, I’ve heard you could walk very close to the live lava flows, but these days the viewing areas are about half a mile from them, and so during the day all you can see is a huge column of roiling steam from the lava meeting the ocean. If you hike in close to sunset and stay after nightfall (headlamps or flashlights required), however, you can see a distinct red glow, and the occasional plume of sparks. Helicopters kept buzzing around the steam column, so if you can afford it, I bet the view from the helicopter is amazing.

Awesome column

There was much more to see but my stay was short and busy. If I ever go back, I’ll definitely be going up to the observatory on Mauna Kea, swimming with manta rays near Kona, snorkeling at Hapuna beach, and hiking around the northeast part of the island.

Taking conference reporting to a new level

Who would have thought a year ago that we’d see an article in a major scientific journal about – and inspired by – microblogging? But indeed, PLoS Computational Biology published yesterday our report on the ISMB 2008 conference:

Saunders N, Beltrão P, Jensen L, Jurczak D, Krause R, et al. (2009) Microblogging the ISMB: A New Approach to Conference Reporting. PLoS Comput Biol 5(1): e1000263. doi:10.1371/journal.pcbi.1000263

What is FriendFeed?

FriendFeed is a web service for aggregating feeds from numerous other web services and for posting links and messages. Discussions often start up around these posts as people comment on them, providing a convenient, topic-based and searchable archive of conversations. You can post to specific “rooms”, such as we’ve started doing for meetings and (un)conferences (e.g. BioBarCamp, ScienceOnline ’09, PSB 2009) and for particular topic areas (e.g. The Life Scientists, Python for Bioinformatics). Cameron Neylon has written a longer primer on FriendFeed for scientists and Pawel Szczęsny also reflects on how scientists might interact with this tool.

The effort came out of a group of bloggers and Web 2.0 enthusiasts who contributed to the ISMB 2008 room on FriendFeed – many of whom had never before met in person. ISMB 2008 in Toronto seemed to be the first time scientists used FriendFeed to capture the content and activities associated with a conference. Once it happened, it became glaringly obvious how useful and convenient it was to use FF for this purpose.

The room became a place for people to record notes on the talks, which allowed people to attend sessions without completely missing out on all the others (an annoying problem at these larger conferences with multiple sessions going on simultaneously). People could also augment the talk notes with links to relevant papers or web resources, ask and answer questions, and provide different perspectives when several people were covering the same talk. For those unable to attend the conference, the FF room provided a way to learn about what was happening and interact through online discussion.

As the conference went on, it became clear that this online room represented perhaps the most comprehensive set of conference notes any of us had ever encountered, and would not have been possible without the collaborative effort. When Roland heard that the conference organizers were looking for reporters, he made the obvious connection and galvanized a group of us to gather our collection of notes into a summary document. After an initial brainstorming session at the conference, we went back to our separate corners of the globe and worked on the project virtually through Google Docs. Neil led the effort to see the document through to fruition, communicating with the ISMB organizers and the editors at PLoS Computational Biology.

So what might this mean for the future of conference reporting? We are already a bit behind the times, as blogging is a common component of news reporting in other areas, notably politics and sports. Twitter is becoming a popular outlet for real-time citizen reporting, especially in disaster events such as the Mumbai attacks (though not without some controversy). In our case, FriendFeed seemed to offer a useful compromise between the flexibility and speed of Twitter and the organization and discussion possible on blogs, making it a good way to gather “raw data” for conference reports.

After the successful alpha demonstration of the ISMB 2008 room, I find it likely that any conference with an attendee familiar with these FriendFeed rooms will start their own. But is this always a good idea? Conference organizers might not always view microblogging in a favorable light given the private nature of many conferences in the biomedical sciences – for example, the Cold Spring Harbor meetings have an explicit policy against recording the talks or events. Given how easy (and natural) it is now to share content with others, though, each conference ideally should have an explicit policy regarding social media-based coverage. In reality I’m pretty sure that very few do, so it may be prudent to proceed with caution: check with the organizers, encourage them to set an open policy or at least have a policy, and show them examples. Hopefully, with the help of this article in PLoS, they will be easily convinced that microblogging does indeed take conference reporting to a new – and desirable – level.

On the flip side of openness

In Clay Shirky’s Here Comes Everybody, he says, “Revolution doesn’t happen when society adopts new technologies – it happens when society adopts new behaviors.” His point is that while new technology is necessary for revolution, it is far from sufficient. The real shift occurs once the technology permeates society enough so that new behaviors come naturally. For Shirky, the revolution that is the social web came only after commoditization of the Internet and mobile messaging made connecting people on a global scale effortless and natural.

For proponents of nascent movements like open science, it is common to wonder why others can’t just change what they think or what they’re doing. We changed, after all. If only the incentive structure were different, we say, or if people could just understand why openness is better. If things would just change – why, we’d enter a golden age of science!

But changing people is difficult, and changing society even more so. While there can be immediately compelling reasons to change – skyrocketing gas prices, for example – broader, long-reaching change arises almost unconsciously. More importantly, it depends on the enabling technology becoming so familiar that the natural behavior is to use that technology.

For open science, there are clearly two sides to the coin. There is openness as a social construct – the willingness to be open – and there is openness as a technological construct – the ability to be open. Although some bold souls may embrace the former without clear demonstration of the latter, most people aren’t even aware that there is something to embrace. And without mature, ubiquitous technology, pleas to participate will go mostly unheeded.

Yes, some of the tools for making science more open are available. But, technologically speaking, we are still a long way from integrating them ubiquitously into the research process. Until we do, we cannot expect many scientists to go open. But as the technology improves, more scientists will go online to read papers, manage citations, and share files and write manuscripts with collaborators. And as more scientists go online to conduct aspects of their research, the technology will improve. When scientists can no longer imagine a better way to do science, we’ll know we have arrived.