How personal genomics is rocking the boat

I’ve been doing some reading on personal genomics, direct-to-consumer genetic tests, and personalized medicine lately, in an effort to steep myself in the science and issues prior to starting work in this field. Today, I read an opinion piece by R. J. Carlson titled “The disruptive nature of personalized medicine technologies: implications for the health care system,” [1] that was especially interesting. Rather than expound on the usual arguments for or against consumer genomics, it laid out several important areas where personalized medicine and genomics technologies would disrupt the current system, often with brutal honesty.

Clearly, one of these areas is private health insurance. Describing private health insurance as “a hybrid of economic ruthlessness and utilitarian social policy … is supposed to perform the social policy role that the public sector can’t or won’t, and
that is to ration,” Carlson points out some sobering scenarios. One is that the Genetic Information Non-discrimination Act (GINA) covers only the underwriting process, and does not guard against denial of coverage or steep increases in premiums once a genetically-suggested condition manifests. Another is the moral and social dilemma posed by the knowledge – on either side – that those with “demonstrably superior health” are subsidizing care for those with “known genetic risk”. And, given the increasing knowledge we’ll have about health risks, it would be ridiculous not to use any of it in designing insurance packages. Carlson doesn’t paint this as a negative thing, necessarily, but instead calls on public policy to “facilitate the constructive uses of these data by shaping financial and access reforms to the genomics medicine that is arriving.”

The debate over health insurance is fairly familiar, however. What Carlson makes very clear in the rest of the piece is that personal genomics takes medicine in a fundamentally different direction than where it has been going for the last half century. Traditional modern medicine has focused on mechanism and reductionism, finding what’s wrong and fixing it, and applying that knowledge to new cases of the same thing. We use the fact that humans are more or less similar to enact standards of care.

But personalized medicine focuses on the differences between people and treats every patient as a unique case. This leads to two natural consequences: it makes medical care more costly, and it renders the standardization of medical practice obsolete, if not impossible. Of course, personalized medicine could conceivably be more cost-effective through better preventative care, but this is only if significant effort goes towards realizing this potential. And although I hadn’t thought about personal genomics in the context of evidence-based medicine, it’s not hard to see the conflict:

There’s the rub: to be effective, a personalized medicine must build on our ever more definitive differences, defying standardization for the very long haul, if ever. Measuring quality in health care under a genomics model is crudely analogous to measuring automobile fuel efficiency when every automobile is assembled from a wide array of materially different but functionally interchangeable parts, performs differently on every trip, and changes in performance with the moods and capacities of every driver.

This article captures the nuances of some very interesting challenges facing health care in response to genomics technologies with a view that is both realistic and optimistic. Carlson recognizes that the era of medical paternalism is giving way to democratization of health information, and we must adapt our policies to reflect this. Indeed, he argues that without active and careful management of this process, we may very well sabotage our ability to reap any rewards from this technology.

Definitely worth a read, and worth thinking about.

[1] Carlson RJ. (2009) The disruptive nature of personalized medicine technologies: implications for the health care system. Public Health Genomics 12(3):180-184.
DOI: 10.1159/000189631 [PubMed] [Journal]


Scientific discourse as an epic FAIL

A post on FriendFeed pointed me to this blog post in Adventures in Ethics and Science discussing a particularly infuriating example of just how broken the current system of scientific publishing can be. The epic tale is presented by Prof. Rick Trebino in a PDF document outlining “How to Publish a Scientific Comment in 123 Easy Steps”. This version includes his second addendum, in which he gives many excellent (and some painfully obvious) suggestions for how to improve the system.

Here’s a preview:

1. Read a paper in the most prestigious journal in your field that “proves” that your entire life’s work is wrong.

2. Realize that the paper is completely wrong, its conclusions based entirely on several misconceptions.  It also claims that an approach you showed to be fundamentally impossible is preferable to one that you pioneered in its place and that actually works.  And among other errors, it also includes a serious miscalculation—a number wrong by a factor of about 1000—a fact that’s obvious from a glance at the paper’s main figure.

3. Decide to write a Comment to correct these mistakes—the option conveniently provided by scientific journals precisely for such situations.

6. Prepare further by writing to the authors of the incorrect paper, politely asking for important details they neglected to provide in their paper.

7. Receive no response.

15. Write a Comment, politely explaining the authors’ misconceptions and correcting their miscalculation, including illustrative figures, important equations, and simple explanations of perhaps how they got it wrong, so others won’t make the same mistake in the future.

16. Submit your Comment.

17. Wait two weeks.

18. Receive a response from the journal, stating that your Comment is 2.39 pages long. Unfortunately, Comments can be no more than 1.00 pages long, so your Comment cannot be considered until it is shortened to less than 1.00 pages long.

20. Remove all unnecessary quantities such as figures, equations, and explanations.  Also remove mention of some of the authors’ numerous errors, for which there is now no room in your Comment; the archival literature would simply have to be content with a few uncorrected falsehoods.  Note that your Comment is now 0.90 pages.

21. Resubmit your Comment.

22. Wait two weeks.

23. Receive a response from the journal, stating that your Comment is 1.07 pages long. Unfortunately, Comments can be no more than 1.00 pages long, so your Comment cannot be considered until it is shortened to less than 1.00 pages long.

And so the saga begins. Really, the whole thing makes my blood boil.

Fun Mac OS X command: say

Group meetings in the Altman lab often kick off with a Unix or computing tip. These range from examples of built-in but lesser known utilities that make our lives at the command line easier, to scripting hacks, to full-fledged applications you download and install.

At the last group meeting I attended, the presenter showed us a fun little command that comes with Mac OS X, called ‘say’. This command basically does what you think it does – it says whatever comes after it. Here’s a simple example:

shwu$ say hello world

The default voice is whatever your system is set to (usually a female voice, unless you’ve changed it), but there are many others you can use by setting the -v parameter:

shwu$ say -v Agnes "this is another woman's voice"
shwu$ say -v Bruce "this is a man's voice"

Some are especially fun, like “Bad News”, Bubbles, “Pipe Organ”, Trinoids, and Zarvox. Others are a little weird, like Albert and Whisper. And then there are ones you just shouldn’t use if you’re home alone at night – Hysterical and Deranged, for example. A more complete list can be found here.

The ‘say’ command isn’t just for amusing yourself, though the tricks you could play on people remotely are endless. You can also use it in conjunction with other commands or in scripts:

shwu$ python -c "print 'stuff'" && say done printing stuff || say you have a bug in your script

will say ‘done printing stuff’, whereas if I’d left out one of the single quotes in the python command it would have said ‘you have a bug in your script’ instead. This is great for when you start a script running and turn your attention to YouTube videos, er, other work, but want to be notified when your script either finishes or encounters an error.
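You can wrap this pattern in a little function so that any long-running command announces its own fate. Here’s a sketch of my own (the ‘speak’ and ‘notify’ names are made up, and the fallback to plain echo is there so it degrades gracefully on machines without ‘say’):

```shell
#!/bin/sh
# notify: run any command, then announce whether it succeeded.
# 'speak' always prints the message, and also reads it aloud when
# the Mac-only 'say' command is available.

speak() {
    echo "$*"
    if command -v say >/dev/null 2>&1; then
        say "$*"
    fi
}

notify() {
    if "$@"; then
        speak "done running $1"
    else
        speak "$1 exited with an error"
    fi
}

# Usage: notify <command> [args...]
notify true    # announces "done running true"
notify false   # announces "false exited with an error"
```

The nice part about a wrapper like this is that you don’t have to remember the &&/|| incantation each time; you just prefix whatever you were going to run with ‘notify’.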

Bench scientists can get in on the fun, too. Suppose you have a complicated pipetting protocol that specifies different amounts of different things in different places. A long list can be cumbersome to print out or read, so why not ‘say’ it instead? (Actually, while you can specify a file for it to read using -f, I’m not sure how you would specify pauses if you had it read your aliquots from a text file… so you might need to create a script that wraps all the aliquot amounts in ‘say’ commands with pauses in between. Anyway, it would be pretty cool and all your lab mates would be jealous. Or maybe they’d just think you’re strange.)
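For what it’s worth, here’s a rough sketch of what such a wrapper script could look like. Everything here is hypothetical – the protocol.txt file, its one-instruction-per-line format, and the one-second pause are all my assumptions – and, as above, it falls back to plain echo on systems without ‘say’:

```shell
#!/bin/sh
# speak_protocol.sh -- read a pipetting protocol aloud, one step at
# a time, pausing between steps so you have time to pipette.

announce() {
    echo "$*"
    if command -v say >/dev/null 2>&1; then
        say "$*"
    fi
}

# A toy protocol file, one aliquot instruction per line.
cat > protocol.txt <<'EOF'
Add 50 microliters of buffer to well A1
Add 25 microliters of sample to well A2
EOF

# Read each step, announce it, then pause before the next one.
while IFS= read -r step; do
    announce "$step"
    sleep 1   # adjust the pause to match your pipetting speed
done < protocol.txt
```

A longer pause (or a “press enter to continue” read) between steps would probably be more realistic at the bench, but you get the idea.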

A community searching for a home

The big news all over the intertubes yesterday was Facebook’s acquisition of FriendFeed, a life-stream aggregator and discussion platform. Reactions were all over the place, from “Congrats! This is a great move for you guys!” to “Whatever, it makes financial sense…” to “Oh NoESssss!!! 1 <3 FF!1!! Fb is the worsts!!1!!!11!eleventy!!1!” The move prompted immediate debate amongst the science community and even spurred one member to quit FF 3 hours later, though all the lamenting might be premature. Paul Buchheit, one of FF’s developers, assured everyone that FF users and community would be treated right.

Still, it’s hard not to let the imagination run rampant with thoughts of a Facebook+FriendFeed Frankenstein (FriendBook? FaceFeed? FriendFace?). Sean Percival created a nice mock of what such a mashup might look like.

Jokes aside, there’s a chance that whatever solution is presented for current FF users will not satisfy a large fraction of us. For one thing, Facebook is oriented around fundamentally different goals than FriendFeed. Facebook is about connecting to people you share some relationship with – you went to school together, work for the same company, are family members, etc. – and letting them know what’s going on in your life, no matter how banal. That’s fine, and it serves that purpose well. FriendFeed, however, has always been less about who you already know and what you’re doing, and more about what you think and what you find interesting. These connections made through common activities and interests online are real and often help initiate connections in the physical world. Facebook, in the eyes of many hardcore FF users, is that awkward high school reunion, while FriendFeed is the stimulating group of people you met as part of the XYZ club in college.

Already, a FF group has spawned to discuss the details behind developing an open source version of FriendFeed. It will be interesting to see what they come up with, but just as interesting will be to observe the real-time development of a dynamic grassroots effort.

Cameron also has a great post outlining the differences between Facebook and FriendFeed, and the major directions the science/research community could take from here.

The evolution of scientific impact

Photo by cudmore on Flickr

In science, much significance is placed on peer-reviewed publication, and for good reason. Peer review, in principle, guarantees a minimum level of confidence in the validity of the research, allowing future work to build upon it. Typically, a paper (the current accepted unit of scientific knowledge) is vetted by independent colleagues who have the expertise to evaluate both the correctness of the methods and perhaps the importance of the work. If the paper passes the peer-review bar of a journal, it is published.

Measuring impact

For many years, publications in peer-reviewed journals have been the most important measurement of someone’s scientific worth. The more publications, the better. As journals proliferated, however, it became clear that not all journals were created equal. Some had higher standards of peer-review, some placed greater importance on perceived significance of the work. The “impact factor” was thus born out of a need to evaluate the quality of the journals themselves. Now it didn’t just matter how many publications you had, it also mattered where.

But, as many argue, the impact factor is flawed. Calculated as the average number of citations per “eligible” article over a specific time period, it is highly inaccurate given that the actual distribution of citations is heavily skewed (an editorial in Nature by Philip Campbell stated that only 25% of articles account for 89% of the citations).  Journals can also game the system by adopting selective editorial policies to publish articles that are more likely to be cited, such as review articles. At the end of the day, the impact factor is not a very good proxy for the impact of an individual article, and to focus on it may be doing science – and scientists – a disservice.

In fact, any journal-level metric will be inadequate at capturing the significance of individual papers. While few dispute the possibility that high journal impact factors may elevate some undeserving papers while low impact factors may unfairly punish perfectly valuable ones, many still feel that the impact factor – or more generally, the journal name itself – serves as a useful, general quality-control filter. Arguments for this view typically stem from two things: fear of “information overload”, and fear of risk. With so much literature out there, how will I know what is good to read? If this is how it’s been done, why should I risk my career or invest time in trying something new?

What is clear to me is this – science and society are much richer and more interconnected now than at any time in history. There are many more people contributing to science in many more ways now than ever before. Science is becoming more broad (we know about more things) and more deep (we know more about these things). At the same time, print publishing is fading, content is exploding, and technology makes it possible to present, share, and analyze information faster and more powerfully.

For these reasons, I believe (as many others do) that the traditional model of peer-reviewed journals should and will necessarily change significantly over the next decade or so.

Article-level metrics at PLoS

The Public Library of Science, or PLoS, is leading the charge on new models for scientific publishing. Now a leading Open Access publisher, PLoS oversees about 7 journals covering biology and medicine as well as PLoS ONE, on track to become the biggest single journal ever. Papers submitted to PLoS ONE cover all areas of science and medicine and are peer-reviewed only to ensure soundness of methodology and science, no matter how incremental. So while almost every other journal makes some editorial judgment on the perceived significance of papers submitted, PLoS ONE does not. Instead, PLoS ONE leaves it to the readership to determine which papers are significant through comments, downloads, and trackbacks from online discussions.

Now 2 1/2 years old, PLoS ONE boasts thousands of articles and a lot of press. But what do scientists think of it? Clearly, enough think highly of it to serve on its editorial board or as reviewers, and to publish in it. Concerns that PLoS ONE constituted “lite” peer review seem largely unfounded, or at least outdated. Indeed, there are even tales of papers getting rejected from Science or Nature for perceived lack of significance, getting published in PLoS ONE, and then getting picked up by Science and Nature’s news sections.

Yet there is still feeling among some that publishing in PLoS ONE carries little or no respectability. This is due in part to a misconception of how the peer review process at PLoS ONE actually works, but also in part because many people prefer an easy label for a paper’s significance. Cell, Nature, Science, PLoS Computational Biology – to most people, these journals represent sound science and important advances. PLoS ONE? It may represent sound science but it’s up to the reader to decide whether any individual paper is important.

Why is there such resistance to this idea? One reason may be tied to time and effort to impact: while citations always have taken some time to build up, a journal often provides a baseline proxy for the significance of a paper. A publication in Nature on your CV is an automatic feather in your cap, and easy for you and for your potential evaluators to judge. Take away the journal, and there is no baseline. For some, this is viewed as a bad thing; for others, however, it’s an opportunity to change how publications – and people – are evaluated.

Whatever the zeitgeist in particular circles, PLoS is clearly forging ahead. PLoS ONE’s publication rates continue to grow, such that people will eventually have to pay attention to papers published there even if they pooh-pooh the inclusive – but still rigorous – peer review policy. Recently, PLoS announced article-level metrics, a program to “provide a growing set of measures and indicators of impact at the article level that will include citation metrics, usage statistics, blogosphere coverage, social bookmarks, community rating and expert assessment.” (This falls under the broader umbrella of ‘post-publication peer review’.) Just how this program will work is a subject of much discussion, and certain metrics may need a lot of fine-tuning to prevent gaming of the system, but the growing consensus, at least among those discussing it online, is that it’s a step in the right direction.

Essentially, PLoS believes that the paper itself should be the driving force for significance, not the vehicle it’s in.

The trouble with comments

A major part of post-publication peer review such as PLoS’s article-level metrics is user comments. In principle, a lively and intelligent comment thread can help raise the profile of the article and engage people – whether it be other scientists or not – in a conversation about the science. This would be wonderful, but it’s also wishful thinking; as anyone who’s read blogs or visited YouTube knows, comment threads devolve quickly unless there is moderation.



For community-based knowledge curation efforts (think Wikipedia), there is also a well-known 90-9-1 rule: 90% of people merely observe, 9% make minor or only editorial contributions, and 1% are responsible for the vast majority of original content. So if your audience is only 100 people, you’ll be lucky if even one of them contributes. Indeed, experiments with wiki-based knowledge efforts in science have been rocky at best, though things seem to be getting better. The big question remains:

But will the bench scientists participate? “This business of trying to capture data from the community has been around ever since there have been biological databases,” says Ewan Birney of the European Bioinformatics Institute in Hinxton, UK. And the efforts always seem to fizzle out. Founders enthusiastically put up a lot of information on the site, but the ‘community’ — either too busy or too secretive to cooperate — never materializes. (From a news feature in Nature last September on “wikiomics”.)

Thus, for commenting on scientific articles, we have essentially two problems: encouraging scientists to comment, and ensuring that the comments have some value. An experiment with article commenting at Nature several years ago was deemed a failure due to lack of both participation and comment quality. Even now, while many see the fact that ~20% of PLoS articles have comments as a success, others see it as inadequate. Those I’ve talked to who are skeptical of the high-volume nature of PLoS ONE tend also to view their comments on papers as a highly valuable resource, one not to be given away for free in public but disclosed in private to close colleagues or leveraged for professional advancement through serving as a reviewer.

Perhaps the debate simply reflects different generational mindsets. After all, people are now growing up in a world where the internet is ubiquitous, sharing is second-nature, and almost all information is free. Scientific publishing is starting to change, and so it is likely that current incentive systems will change, too. Yet while the gulf will eventually disappear, it is perhaps at its widest point now, with vast differences in social norms, making any online discourse potentially fraught with unnecessary drama. As Bora Zivkovic mentions in a recent interview,

It is not easy, for a cultural reason, because a lot of scientist are not very active online and also use the very formalised language they are using in their papers. People who have been much more active online, often scientists themselves, they are more chatting, more informal. If they don’t like something they are going to say it in one sentence, not with seventeen paragraphs and eight references. So those two kinds of people, those two communities are eyeing each other with suspicion, there’s a clash of cultures. The first group sees the second group as rude. The second group views the first group as dishonest. I think it will evolve into something in the middle, but it will take years to get there.

When people point to the relative lack of comments on scientific papers, it’s important to point out the fact that online commenting has not been around in science for very long. And just as it takes time for citations to start trickling in for papers, it takes time to evaluate a paper in the context of its field. PLoS ONE is less than three years old. Bora notes, “It will take a couple of years, depends on the area of science until you can see where the paper fits in. And only then people will be commenting, because they have something to say.”

Brush off your bullshit detector

The last argument I want to touch on is that of journals serving as a filter for information. With millions of articles published every year, it can seem a daunting task to keep up with the literature in your field. What should you read? In a sense, a journal is a classifier, taking in article submissions and publishing what it thinks are good and important papers. As with any classifier, however, performance varies, and is highly dependent on the input. Still, people have come to depend on journals, especially ones with established reputations, to provide this service.

Now even journals have become too numerous for the average researcher to track (hence crude measures like the impact factor). So when PLoS ONE launched, some assumed that it would consist almost entirely of noise and useless science, if it could be considered science at all. I think it’s clear that that’s not the case; PLoS ONE papers are indeed rigorously peer-reviewed, many PLoS ONE papers have already had great impact, and people are publishing important science there. Well, they insist, even if there’s good stuff in there, how am I supposed to find what’s relevant to me out of the thousands of articles they publish every year? And how am I supposed to know whether the paper is important or not if the editors make no such judgment?

Here, I would like to point out the many tools available for filtering and ranking information on the web. At the most basic level, Google PageRank might be considered a way to predict what is significant and relevant to your search terms. But there are better ways. Subscribing to RSS feeds (e.g. through GoogleReader) makes scanning lots of article titles quick and easy. Social bookmarking and collaborative filtering can suggest articles of interest based on what people like you have read. And you can directly tap into the reading lists of colleagues by following them on various social sharing services like Facebook, FriendFeed, Twitter, and paper management software like Mendeley. I myself use a loose network of friends and scientific colleagues on FriendFeed and Twitter to find interesting content from journals, news sites, and blog posts. The bonus is that you also interact with these people, networking at its most convenient.

The point is that there is a lot of information out there, you have to deal with it, and there are more and more tools to help you deal with it. It’s no longer sufficient to depend on only one filter, and an antiquated one at that. It may also be time to take PLoS’s lead and start evaluating papers on their own. Yes, it takes a little more work, but I think learning how to evaluate papers critically is a valuable skill that isn’t being taught enough. In a post about the Wyeth ghost-writing scandal, Thomas Levenson writes:

… the way human beings tell each other important things contains within it real vulnerabilities.  But any response that says don’t communicate in that way doesn’t make sense; the issue is not how to stop humans from organizing their knowledge into stories; it is how to build institutional and personal bullshit detectors that sniff out the crap amongst the good stuff.

From nitot on Flickr

Although Levenson was writing about the debate surrounding science communication and the media, I think there’s a perfect analogy to new ways of publishing. Any response that says don’t publish in that way doesn’t make sense; the issue is not how to stop people from publishing, it is how to build personal bullshit detectors – i.e. filters. People should always view what they read with a healthy dose of skepticism, and if we stop relying on journals, or impact factors, or worse to do all of our vetting for us, we’ll keep that skill nicely honed. At the same time, we are not in this alone; leveraging a network of intelligent agents – your peers – will go a long way.

So continue leading the way, PLoS. Even if not all of the experiments work, we will certainly learn from them, and keep the practice and dissemination of science evolving for the times.

SciBar, SciFoo, Sci woot

I’m not sure what the statute of limitations is on foo bar write-ups but I distinctly feel late to the party. Maybe it’s because the chattersphere starts buzzing the day before and doesn’t stop until two days after (see SciBarCamp FriendFeed room, twitter searches for #scifoo and #sbcpa). Maybe it’s because people had blog posts up before the first day even sank in (see the list of SciFoo blog posts). Either way, I’m hoping that this counts as fashionably late.

For those who have no idea what I’m talking about, the gist is that I went to two unconferences last week/end, SciBarCamp and SciFoo. An unconference is essentially a gathering of people around a common theme with no fixed schedule except what those people devise the day of. The goal is to get smart people talking to each other about their ideas, catalyzing new collaborations. As you might guess from the names, both SciBarCamp and SciFoo are loosely organized around science. Having just come back from a 10 day trip to Boston, Bainbridge Island (WA), and Seattle which included 5 days of frisbee tournaments, I only mustered the energy to microblog one session at SciBarCamp, and kept only a sparse paper notebook at SciFoo. Here, I’ll just try to capture some of the thoughts I had about these experiences.

First, SciBarCamp. This year’s event was spearheaded by Jamie McQuay, with John Cumbers, Jim Hardy, Chris Patil, and myself rounding out the organizing committee. The topics, as always, ranged all over, but personal genomics and “health 2.0” seemed especially popular. Some of the more memorable topics were “Psychedelics – WTF?”, sustainable technologies in Afghanistan, and “Spinning Science” in the media, by Dr. Kiki of “This Week In Science” and Naomi Most of Pirate Cat radio. I met a number of folks in person whom I’d only known through the intarwebs – Martin Fenner, Duncan Hull, Andy Lang, Bosco Ho, among others. I also met Brian Malow, science comedian, who happens to have been a good friend of my favorite comedian, the late Mitch Hedberg.

After SciBarCamp ended, I had a short break until Friday evening, when SciFoo kicked off with registration and dinner at the Googleplex. Unlike SciBarCamp, SciFoo is backed by Google, Nature Publishing Group, and O’Reilly, so it is bigger, fancier, and celebritier. It is also invite-only. (How did I manage to swing an invite? I got a senior scientist blogging!) Rubbing elbows with Nobel prize winners, best-selling authors, and famous inventors can certainly be intimidating. Unfortunately, Bjork never showed up; fortunately, the schwag was plentiful, including a table full of books (many by O’Reilly), Moleskine-type notebooks, holographic periodic tables, a puzzle by Pavel, and the requisite t-shirt and conference bag.

Though the sessions can be hit or miss (either the topic ends up being different from what you expected, or people may bring personal agendas), all are guaranteed to make you think. I attended sessions on virtual worlds (by Andy Lang and Berci Mesko), cartoon physics and art in Pixar, tricking people into liking science, digital identity (by Duncan Hull), space travel, personalized medicine, and “aliens on Earth”. Some brief thoughts on some of these:

Virtual worlds – Andy presented examples of how Second Life is being used to visualize scientific data, improve distance learning, and hold virtual conferences. One limitation is that Second Life is capped at 15,000 “prims” (graphical units), which precludes detailed representations of complex molecules; even so, it has proven very useful in education. Berci presented a new alternative to Second Life called VisuLand, which requires no download or installation, is currently free to use, and may be better suited to medical research and teaching.

Tricking people into liking science – Run by Jorge Cham, John Rennie, and Mariette DiChristina. The disconnect between science, the media, and the general public, and how to redress it, was a common theme at both SciBarCamp and SciFoo. At SciFoo, one of the session leaders suggested that the reasons people claim to be uninterested in science fall into two categories: problems of interest (“it’s boring”, “it’s irrelevant”, “it’s hard”, etc), and problems of inferiority (“I’m not smart enough”, “it’s not for someone like me”, “it doesn’t match what I believe”, etc). If you can frame the problem correctly, you can form a better solution. For example, if the problem is that it requires too much sustained concentration (i.e. “it’s hard”), then present the science in shorter snippets. If the problem is that the audience feels intellectually inferior, then reassure them that the material isn’t supposed to be easy. I think the most important ideas we can convey to non-scientists are that science is about solving mysteries, failure is part of the process, and science is intrinsically a human endeavor. In fact, I started thinking that instead of tricking people into liking science, we should get more people to like the scientists themselves.

Space travel – Esther Dyson came in after a zero gravity flight earlier in the morning to tell us about her training as a backup for a future shuttle launch. Eric Anderson, the CEO of commercial space travel company Space Adventures, was also present, as was astronaut Ed Lu. The session was mostly a show-and-tell Q&A between Esther/Eric/Ed and everyone else who wasn’t involved in space flight. Towards the end, a debate began over the merits of space research programs. On one side (mostly devil’s advocates, but perhaps a couple of true adherents) were those arguing that funding space research was frivolous and a waste of money given more immediate problems plaguing the world like poverty and disease; on the other were those convinced that the space program single-handedly inspires future generations to study science and dream big, and that such aspirations are what lead to innovation and keep us human. An impassioned discussion that started literally 5 minutes before the session ended.

Art and science and cartoon physics – Rob Cook from Pixar gave a crowd-pleasing presentation on how Pixar has used physics and math to achieve incredibly realistic animations of moving objects and materials, both living and non-living. One thing I particularly liked was the cycle he presented between art and technology. There is an important relationship between the two departments at Pixar, as the artists tend to suggest things they don’t know are impossible, and the tech folks are too proud to say it can’t be done. The end result is innovation. I think it’s especially useful for this relationship to extend beyond Pixar to encompass science and non-science, as one provides the story and the other makes it possible.

Aliens on Earth – Nathan Wolfe, a leading researcher in infectious disease and viruses, stimulated a debate over whether we might find aliens on Earth. He defined “alien” not as extraterrestrial, but as “distinct” from our current understanding of life as DNA-based. People got a little hung up on this definition at the beginning but eventually got over it, only to get a little hung up on what we mean by “life”. Does it need to replicate? Does it need to take in information? Nathan is convinced that we’re far more likely to discover “alien” life on Earth than we are to discover life elsewhere in the universe (I’m inclined to agree), and is wondering what it might “look like”. Iddo wrote a great post about this topic a while ago, and he provides a nice overview of the question and a possible theory. The big question is: if there is alternate life on Earth, would we be able to recognize it?

All in all, it was great to meet so many accomplished and intelligent people. A short list of some of the people I met: Jorge Cham (author of PhD comics), Brandyn Webb (actually met at SciBarCamp first), Ariel Waldman (also met at SciBarCamp), Timo Hannay (Nature), Bruce Hood (author of SuperSense), John Gilbey (also met at SciBarCamp), Alf Eaton (Nature), George Church of Knome and the Personal Genome Project, Paul Biondich and Burke Mamlin of OpenMRS, Chris Holmes (of GeoServer, I think), Reshma Shetty of Ginkgo Bioworks, Emily Chenette and Erika Check Hayden (Nature), Juliana Rotich, and more. I did not meet Bill Nye.

But SciFun didn’t end with SciFoo. On Monday, I went with Brandyn Webb and Dan Barcay (who was also at SciFoo and works on Google Earth) to visit fellow SciFoo-er Simon Quellen Field, who now spends his time making science toys on his farm in Los Gatos. All Brandyn had to say was “parrot farm” and I was there. Indeed, there were many, many parrots and exotic birds of all kinds, dozens of chickens, about seven goats, and two alpacas, in addition to the usual dog and cat. The goats and alpacas nominally serve to keep the brush nibbled down as a fire break, but they’re also lots of fun to look at. After meeting the animals, we trekked across a catenary bridge to his tree house, beyond which lay rope netting stretched 30 feet off the ground between several trees. Another rope bridge beyond that led to a second tree house still in the making.

Inside the house lay even more treasures, all sorts of toys and gadgets to captivate minds of all ages. Since we’d missed his liquid nitrogen and helium demonstration at SciFoo, he let us play with some on his driveway. We admired the Sun Oven, which acts like a slow cooker using only heat generated from the sun. And we of course took in the breathtaking view of the Lexington reservoir from the air chairs on his deck (I didn’t have my camera, otherwise this post would be littered with photos). Mac Cowell (of DIYBio and 100ideas) and John Cumbers (from SciBarCamp) showed up later and started off another tour of the premises, accompanied by Simon’s copious knowledge and experiences.

Many hours later, we bade our temporary goodbyes. Temporary because we left with more opportunities to meet up and commune over science than before. Simon hosts a science/inventors’ meetup at the City Pub in Redwood City on Wednesdays, and suggested a semi-regular SciFoo-type potluck gathering, which would be lots of fun as well.

Long story short – SciBarCamp and SciFoo have shown me that there is tons of cool sciencey stuff going on in the Bay Area and in the world and I feel like I’ve only scratched the surface.

Jokes only a geek could love?

A while back I compiled a list of jokes that pass my criteria for supreme cheesiness. I had a bit of a rough start yesterday and could use a laugh (indeed, who doesn’t?), so I figured now is as good a time as any to share them. For the scientifically inclined, don’t worry – there are plenty of jokes here for you.

Courtesy of Chris Lasher:

A bioinformatician walks into a bar. The bartender asks, “GATCGCATCAATAAA?” The bioinformatician replies, “I’m going to need a translation.”

From Ricardo Vidal:

Two antennas met on a roof, fell in love and got married. The ceremony wasn’t much, but the reception was excellent.

A man walks into a bar with a slab of asphalt under his arm, and says: “A beer please, and one for the road.”

From Neil Saunders:

Mushroom in bar: “A round of drinks for everyone!” Customer: “Well, he seems like a fun guy.”

There are 10 types of people in the world: those who understand binary, and those who don’t.

Two hydrogen atoms walk into a bar.
One says, “I think I’ve lost an electron.”
The other says, “Are you sure?”
The first replies, “Yes, I’m positive…”

(a variation: A neutron walks into a bar and orders a drink. Upon being asked the price, the bartender replies, “For you? No charge.”)

If you’re not part of the solution, you’re part of the precipitate!

Student: Did you know diarrhea is hereditary?
Teacher: Well, actually it isn’t.
Student: Yes, it is, it runs in your genes.

How many ADHD kids does it take to screw in a lightbulb?

Knock knock.
Who’s there?
HIPAA.
HIPAA who?
I can’t tell you!

Then, just when you thought it couldn’t get any worse (better?), we have the pick-up lines:

I wish I could be your derivative so I could be tangent to your curves.

Hey babe, wanna see the exponential growth of my natural log?

Baby, I know my chemistry, and you’ve got one significant figure.

If I were an enzyme I’d be DNA Helicase so I could unzip your genes.

Hey, baby; wanna test the ‘k’ of my bedsprings?

Are you the square root of 2? Because I feel irrational when I am around you.

How can I know so many hundreds of digits of pi and not the digits of your phone number?

Also, some good jokes from Physics Buzz.

And no post about science comedy would be complete without a mention of Earth’s premier science comedian, Brian Malow! I had the pleasure of hearing some of his act at SciBarCamp and it’s first rate. I especially liked this bit about time travel, from which I’ll paraphrase just a snippet:

When I meet people, I like to ask “when are you from?” instead of “where are you from?” in hopes that I’ll trip one of them up. He’ll say, “I’m from 2199, how about you?” and I’ll say “I’m from… RIGHT NOW! Quick, get a net!”

Turns out Brian was good friends with another comic I admired greatly, Mitch Hedberg!