Thursday, February 4, 2016

We tend to be harshest on those we love...

Last year a paper was published investigating links between natural transformation and type VI secretion mediated killing, and I cobbled together a blog post with some select thoughts about that paper (here). It's recently come to my attention (I should have written this earlier; it's my fault for dropping the ball) that that particular post could be read in a couple of different lights depending on the context. I want to make it completely, 100% crystal clear that it is never my intention in writing in this space to include ad hominems in posts (I'm not Dan Graur...)*. I write quickly and don't ever really sit on posts, but I'm writing here because I love what I do and I care about certain topics in science, and I'd like to see those topics understood as clearly as possible. For those that "know me", I really enjoy critical discussions about science, and sometimes my words get ahead of my inner filter. Suffice it to say that I've made a conscious decision to try not to use this space to trash people and papers personally, but to be critical about the science behind topics that are near and dear to my heart. Sometimes when I do this my inner New Yorker comes out, but know that my intention is not to critique the person but to focus on the science. If I slip up, let me know and I'll directly tackle/amend/try to fix the issue in public.

Figure 1. An example of me responding a little quickly in public to a recently published paper

This being said, I'm writing this post because another paper was published recently that linked together natural transformation and microbe-microbe killing. Honestly, I haven't had the chance to read the paper yet because I was sitting in airplanes all day yesterday. I'm guessing I'd have similar thoughts to the ones I had about the type VI secretion paper last year. These thoughts come out in public sometimes:

Figure 2. An example of me responding to a recently published paper after sitting on airplanes all day and without reading the paper

All right, so what's my beef with these studies? Let me be completely clear: I respect the authors, and I have no problem with the experimental design or the actual reported science. In the type VI secretion case, and probably with this new paper, the genetics are solid and I don't think the papers are wrong science-wise. I'm writing about these papers because I spent the better part of 5 years in graduate school huddled in the fetal position thinking about the evolutionary effects of natural transformation in bacterial populations. My problems with a lot of these papers are directed solely at the evolutionary interpretations and spin within discussion sections and press releases.

There's a historical legacy that surrounds researchers of natural transformation in bacteria, where there are a couple of entrenched camps that tend to argue past each other. These fights usually flare up around disagreements that conflate questions about original evolutionary benefits of natural transformation and benefits of natural transformation that are measurable today (after these systems originally evolved). After many years of thinking about this, I'm actually agnostic when it comes to the original evolutionary scenarios for natural transformation. Rosie Redfield is one hell of a thinker and I defer to her about such things (so I guess that firmly places me within one camp). I tend to be more interested in wondering about how strong the selective forces on natural transformation are within present day bacterial populations and on trying to figure out realistic parameters that could affect our evolutionary interpretation of gene exchange in bacteria.

Like I said above, I think the genetics and molecular biology within these papers are tip-top and I have absolutely no quarrel with those. I get caught up when the discussions start to extrapolate from the controlled conditions of the lab environment to natural populations. Natural transformation certainly leads to gene exchange in natural populations of bacteria. However, suggesting that pathways are linked in regulation because of evolutionary benefits of natural transformation is a leap of faith that no paper out there has been able to tackle as of yet. What leads to the disconnect? There is a long-standing tradition, across many, many, many papers, of evolutionary "just so" stories whereby we witness results in the lab under certain conditions and assume that natural selection must act that way across many different conditions or environments. My comments about "hand waving" are usually directed at such extrapolations.

This is getting long, so I'll save the nitty gritty for another time. Long story short, though, these extrapolations hinge on critical parameters being similar in the lab and under natural conditions. There are no natural populations of bacteria for which we have realistic estimates of things like:

A) the DNA pool available for natural transformation

B) natural selection pressures over space and time within and between bacterial populations that are exchanging DNA

C) the repeatability and direction of these selection pressures

D) how often cells encounter other cells that they can kill in nature

E) having killed those cells, how often the killers actually take up DNA

F) evolutionary costs of natural transformation in nature

G) I'm missing something because it's early in the morning, but there are other parameters

When one sits down to write mathematical models that account for all of the above, it ends up being REALLY difficult to find parameter space where natural transformation is GENERALLY beneficial. That's not to say that gene exchange doesn't matter within natural populations (it certainly does), but it's hard to find situations where natural transformation comes out beneficial even a majority of the time. Under laboratory experiments like the ones in the type VI secretion paper (and probably the bacteriocin paper; again, haven't read it yet) all of these parameters are actually controlled pretty cleanly:

A) the DNA pool is controlled by the experimenters so that there is no contaminating DNA from other strains/species that could compete with genes of interest for uptake

B) Natural selection is really strong because these experiments typically select for antibiotic resistance: cells that pick up the relevant DNA survive and those that don't die. The same would be true if the experiments were set up to investigate phage predation, etc...

C) Typically in these lab experiments, there is only one direction for selection to act and the environment doesn't change over time (i.e. there is only one antibiotic that the cells need to become resistant to, and therefore one locus that they need to pick up through natural transformation)

D) lab experiments are usually biased so that cells are encountering cells that they can kill at pretty much optimal frequencies (50/50).

E) lab experiments are done under conditions whereby cells are highly competent for natural transformation

F) there are few costs for natural transformation systems in these lab experiments because the experiments usually occur in relatively cushy situations for bacteria (media containing abundant nutrients, controlled temperatures, etc...) and only run for limited amounts of time. In fact, for most bacteria, if you passage them under lab conditions for extended periods they usually evolve lower competence levels (which suggests an evolutionary cost). Like I said though, these lab experiments are only performed over limited amounts of passage time.
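A back-of-envelope version of those models makes the parameter-space point. The sketch below is purely illustrative and entirely my own invention (the function, parameter names, and every number are placeholders, not values from any published model): the expected per-round fitness gain of a competent cell is roughly the chance of taking up a useful allele, times the chance that selection rewards it, minus the constitutive cost of being competent.

```python
# Toy deterministic model: does competence pay off?
# Hypothetical parameters (invented, not from any paper):
#   u = probability a competent cell takes up DNA in a round
#   d = fraction of the DNA pool that is actually useful
#   s = probability the environment selects for the acquired allele
#   c = constitutive fitness cost of maintaining competence

def competence_advantage(u, c, s, d, benefit=1.0):
    """Expected relative fitness gain of a competent cell vs. a
    non-competent one over a single round of uptake and selection."""
    return u * d * s * benefit - c

# Lab-like conditions: pure useful DNA (d=1), guaranteed selection
# (s=1), high competence (u=0.5), negligible cost -> clearly beneficial.
lab = competence_advantage(u=0.5, c=0.01, s=1.0, d=1.0)

# Nature-like conditions: dilute useful DNA, rare selection, lower
# competence, the same small cost -> the advantage flips negative.
wild = competence_advantage(u=0.05, c=0.01, s=0.1, d=0.05)

print(lab > 0, wild < 0)  # -> True True
```

Multiplying several probabilities together is what shrinks the benefit term so fast: relax any one of the lab's controlled parameters toward realistic values and even a tiny cost can dominate.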

So to sum this all together, I apologize for any perceived slights. That's not my intention (yeah, I know, get out the bingo cards). If you feel I've been too personal, please let me know and I'll try to fix it any way I can. There are many great groups focused on understanding natural transformation in bacteria and I respect much of their work. I've just spent way too much time worrying about the evolutionary scenarios that usually pop up in discussion sections of these papers without (what I perceive as) firm grounding within evolutionary biology. These papers usually aren't set up to be direct tests of evolutionary theory, but it's very easy to write about how we think evolution should work. They usually end up being very good at describing whether natural transformation can drive gene exchange under certain scenarios, rather than how it's actually happening in nature. That's completely OK, just be careful about extrapolating.

*c'mon, that one was way too easy

Tuesday, February 2, 2016

How to (not) write a microbiome grant, part II. A deeper dive on preliminary data

As in Part I, a few quick notes that come to mind as I'm reviewing microbiome grants....

One of the biggest challenges and frustrations with grant writing is knowing just how much and what type of preliminary data to include and how much detail on methods to provide. Within the context of the grant, preliminary data has a couple of different jobs. First, it's there to convince the reviewers that you can actually perform the type of experiments and analyses that you are proposing. Second, it's there to justify why the proposed experiments are interesting or necessary. There's certainly no magical formula, but I think there are a few things to keep in mind when struggling over these two variables. (DISCLAIMER: just one person's opinion)

1. The amount of preliminary data required changes throughout the course of your career.

Don't kill the messenger, but track records matter. It's just the way it is. Early career researchers need to include more detail and justify their proposed experiments more so than established researchers. If I'm reading a grant and see that the PI has published these kinds of analyses before (even as a preprint, because I can go and read the methods if there's a question in my mind), it's much easier to believe they'll be successful performing the proposed analyses. All else equal, that inherently gives established researchers a leg up given page limits.

2. You must include enough detail to convince me that you know what you're talking about with the analyses.

If I'm reading a grant that proposes types of experiments that I'm familiar with, I probably have a decent idea of the associated pitfalls and critical variables. If you've done the experiments, or understand how to do them well enough to carry them out, you should also have an idea of the critical points to include in your methods and analyses. Even if you don't have experience with specific protocols, it's very likely you'll know someone who does; do whatever you can to understand the ins and outs of the proposed experiments and write enough detail to cover the critical points. Assume that at least one reviewer is going to be familiar with the experimental protocols, and include enough information to convince this reviewer you know what you're talking about. Assume that other reviewers may not understand the protocols, and include enough basic information to give them an idea of what you're talking about.

3. The type of preliminary data required changes throughout the course of your career.

If you have a proven track record in the field, or if you've hit both of the above points in your grant, the preliminary data within your proposal should provide just enough smoke to convince the reviewer that there's a fire somewhere (metaphorically of course). It's very easy to propose "fishing expedition" experiments, ones where you make a lot of observations and hope some magical result comes from combining all of that data.

When I started as a PI, I kept proposing a few different RNAseq experiments that I thought would be very interesting and insightful. Inevitably, I'd get the reviews back and get dinged for not having a hypothesis. "Pssssshhhhh," I'd say to myself as I gripped my stress relief ball, "The hypothesis is that gene expression WILL change!". With a bit more perspective gained from grant panel experience, I now understand exactly what the "fishing expedition" critique means. It's a combination of the psychology of reviewing a bunch of related grants at one time coupled with the reality that you have to bin grants into different piles as a reviewer.

Here's an example with microbiomes (in the style of Law and Order, this very example may be based on real events that are happening at this exact moment). Say a researcher has ten 15-page microbiome grants to review before the panel meeting. A large percentage of these grants are interested in measuring microbiome dynamics over time, space, and across individuals. The methods and proposed analyses are usually very similar across a large swath of these grants. The only ways left to bin as a reviewer are by host species and by whether there's enough smoke to think there may be a fire. If you can't make the case that your study system is different from other ones, chances are your grant is going to get lumped in with those proposing similar methods and placed in the "others" pile, unless someone else on the panel makes a good case during discussion.

Preliminary data is your way to make the grant stand out. If you are proposing that individual-to-individual variation matters, or that changes in microbiome dynamics over time matter, it's easy enough to get a few samples and sequence 16S. It doesn't have to be a full study, it just has to be enough to show that there's some signal interesting enough to follow up on. There's a surprising lack of pilot experiments in a lot of these microbiome grants (IMO), and the only thing I can think of is that it's hard to find sequencing centers that can process a handful of 16S samples relatively cheaply. I assume this is because you typically need a certain threshold number of samples for a MiSeq run (vs. Sanger sequencing, where you can perform just one reaction). One way to get around this is to find others that are interested in generating the same kind of data and pool resources to fill a whole MiSeq run. There seem to be a couple of places that could facilitate finding others to pool with (like GenoHub).

My null hypothesis as a reviewer is that microbiome dynamics are going to be the same in your system as they are in well studied systems. Use preliminary data and pilot experiments to disprove my reviewer null hypothesis. Are there differences in abundances or frequencies for taxa that are important in other systems? For instance, if you're proposing a phyllosphere microbiome study, off the top of my head I can imagine a top five for the taxa you should find in high abundance. Is your system different? (If you can't answer that, there's some reading you should do.) If you sequence a couple of plants, do the larger plants have differences in microbiomes that you can follow up on? Is there something unique about your microbiome of interest compared to others (e.g. the rice rhizosphere apparently has some archaea)? If there are differences between these experiments and previous ones from other systems, think about hypotheses to explain those differences and build your grant off of that. "Sequence everything and sort out the important trends later" doesn't work when every grant is proposing to do the same thing.
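The "disprove my null" comparison can be almost mechanical. Here's a minimal sketch of what a pilot-data sanity check might look like; every taxon name, count, and the "reference top five" below are invented placeholders, not real data:

```python
# Quick sanity check of a pilot 16S dataset against a reference system.
# All taxa and counts are hypothetical; the point is the comparison.

from collections import Counter

def relative_abundance(counts):
    """Convert raw read counts per taxon into fractions of the total."""
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

# Hypothetical "top taxa" list from a well-studied reference system:
reference_top = {"Pseudomonas", "Sphingomonas", "Methylobacterium",
                 "Bacillus", "Pantoea"}

# Hypothetical counts from a small pilot run in your own system:
pilot = Counter({"Pseudomonas": 420, "Sphingomonas": 310,
                 "Curtobacterium": 150, "Methylobacterium": 90,
                 "Pantoea": 30})

abund = relative_abundance(pilot)
novel = sorted(t for t in pilot if t not in reference_top)

print(novel)                      # taxa the reference doesn't predict
print(max(abund, key=abund.get))  # dominant taxon in the pilot
```

If the "novel" list is empty and the dominant taxa match the reference system, that's exactly the reviewer null; anything that deviates is the smoke worth building a hypothesis around.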

4. "But I don't have any preliminary data to include"

Yes you do. It may not be your own, but there are enough public datasets for you to reanalyze others' work (at the very least to show that you can do the analyses and give yourself some sort of track record). It doesn't even have to focus on the system you are proposing to work in, so long as it moves your narrative forward (see Points 1 and 2 above).

<slight update to point 4> There's a flipside to this. If you're proposing experiments similar to others that have been published before in the same system, don't just cite the previous papers. Give your reviewers a context for why your proposed study is going to be different than previously published studies.

Thursday, January 28, 2016

How to (not) write a microbiome grant, Part I

I'm involved in something that I shouldn't talk much about, but suffice it to say I'm smack dab in the middle of evaluating multiple different microbiome grants. As such I'm the exact target audience you should be aiming to reach with your microbiome grant, literally, the exact target audience. There are a few grantsmanship things at the forefront of my reviewing mind right now that I'd like to get down in written form with the hopes of helping people out in the future (myself included). I'm especially wary going forward because there is currently a push to make microbiome studies the next BRAINI and we as a community are going to do ourselves a huge disservice if we get this wrong. The last thing I want to see is a huge movement for government funding within an area I care deeply about, only to have it not really come close to living up to the hype. I'll definitely have more to say on this as I digest some more, but here are a few things to keep you going:

1. Think about your hypothesis when designing experiments and keep this hypothesis in the forefront of your thoughts. 

Microbiome constituency is going to change over time at some level; it's going to change in some way with pretty much every manipulation you can think of. It is soooooo tempting to write experiments that focus on measuring this change over time, measuring how individuals differ, or measuring the variation associated with treatment X. Maybe one community changes at different rates than others? Maybe there is something magical and emergent that happens when you put all of this community information together?

When you are writing your grant, you are going to be couching the effect of the microbiome in terms of something. The microbiome is important for human/plant health, the microbiome affects geochemical cycling, the microbiome affects the evolution of species, etc..... The problem I'm repeatedly seeing with grants is that the whole project is built on the idea that the microbiome will have an effect on X, and that measuring how treatment Y affects the microbiome is important for understanding X, but to me at least it seems a lot of people forget to directly link treatment Y with its effect on X. If treating the phyllosphere microbiome with jasmonic acid is going to change its constituency or dynamics, and jasmonic acid is therefore predicted to affect "plant health", please try to include measurements of "plant health" within your treatments. Keep the whole hypothesis in mind: don't just measure how treatment Y will change the microbiome and then assume that this change is going to impact X. Directly measure the impact of Y on X in the context of the microbiome.

2. It is very tempting to want to use the latest technology to measure "system level" effects of the microbiome. Proceed at your own risk, and with enough preliminary data to make me believe that you can adequately carry out the experiments. 

A couple of years ago, on a completely different panel, every grant seemingly included RNAseq. Now every grant includes metabolomics and metatranscriptomics. These technologies are awesomely powerful, and they will truly revolutionize some areas of science. If you are writing a grant, however, don't just say you will evaluate the microbiome with "metatranscriptomics". I want to see data for how many reads you might expect in a given environment (pilot experiments work wonders). I want to know that you are proposing to sequence to enough depth to actually have a reasonable chance at seeing differences. Every system is different, and just citing papers showing that it's possible doesn't do the trick. This is especially challenging when studying microbiome communities within a host. Much of the metatranscriptomics work right now is being done in environmental communities, and a lot of the papers/technologies are being developed with those kinds of studies in mind. Any time you include a eukaryotic host in microbiome studies, you are going to get A LOT of eukaryotic RNA in your metatranscriptomes. Sure, poly(A) pulldown might clean things up some, but for a lot of plant systems the chloroplast RNA isn't polyadenylated and is present at high frequencies. For 16S based studies you can design PNA blockers to limit the amount of eukaryotic contamination that comes through, but this doesn't work at a metatranscriptomic level yet. Sure, you can just throw everything onto a few HiSeq lanes and only use bacterial RNA reads, but as a reviewer I want to see enough information to convince me that the "sequence everything and cull what you don't want" approach (or maybe "the overkill-ome") is going to work. Tell me the fraction of reads that are host vs. microbiome, even if you only have this information from a small scale pilot study.
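To make that concrete, here's the kind of back-of-envelope arithmetic I'd love to see a grant walk through. All numbers below are invented placeholders (the lane size, the host fraction); the whole point is that you'd plug in values from your own pilot run:

```python
# Back-of-envelope check for metatranscriptome depth in a host system.
# Every number here is a made-up placeholder for illustration.

def microbial_reads(total_reads, host_fraction, rrna_fraction=0.0):
    """Reads left for the microbial signal after discarding
    host-derived and (optionally) rRNA-derived reads."""
    return total_reads * (1.0 - host_fraction) * (1.0 - rrna_fraction)

# Suppose one sequencing lane yields ~200M reads (assumed), and a small
# pilot showed 95% of reads map to the plant host and chloroplast:
usable = microbial_reads(total_reads=200e6, host_fraction=0.95)
print(f"{usable:.2e} microbial reads")  # -> 1.00e+07 microbial reads
```

Ten million reads spread across an entire community's transcriptomes may or may not be enough to call differential expression; the grant's job is to argue which, with pilot numbers rather than hand waving.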

3. I don't ever want to see this in your grant:

Figure 1. Bad Hairball Plot. 

That is a plot of some random network I found with a Google search. Microbiome studies tend to sample many different taxa under different situations over time. It's tempting to put all of this interaction data together in a plot like the one above, so that in your grant you can demonstrate that you do "systems biology". THERE IS NOTHING VISUALLY USEFUL IN THESE KINDS OF PLOTS. In many cases, the nodes aren't even labelled. The only thing I can assume such a plot is trying to show is that you can do "systems biology". Grant space is so precious and limited, why waste it on a figure that doesn't relay any information at all?

If you do find interesting interactions, feel free to make a small figure including just a few nodes (labelled of course) that shows these interactions and explains what this interaction visually means. A hairball plot like the one above serves absolutely no purpose for the reviewer of a grant, and actually annoys me. Don't annoy the reviewer.

End of rant for now:)

Monday, January 18, 2016

Realized and Fundamental Niches in Academia

Two things I read last week motivated me to sit down and post some thoughts. Last night I found some time to read Jesse Shapiro's new preprint (because, for once, my kiddo went to sleep early and I still had some energy at the end of the day), which focused on using metagenomic data to tease apart how recombination and selective sweeps affect genetic diversity within bacterial populations over time. It provides quite a good summary of recent research into this topic and I found myself binge reading a bunch of newer papers late on a Sunday night. It's surely not for everyone, but I am and have always been fascinated by this research topic. When I finished grad school, this was kind of what I thought I'd be researching over the course of my career.

I'm a few months from submitting my tenure packet, and have been working for the better part of the last 5 years to sculpt that document. Over that time I've been lucky to have really good people working in my lab, and we've been lucky enough to be decently successful in terms of funding and manuscripts. However, even though all of the work we're doing is interesting and exciting, it has essentially nothing to do with recombination in bacterial populations, despite my intrinsic interests. I wouldn't say that I'm sad about this, but I definitely have feelings that border on regret.

Couple these thoughts with a great blog post I read earlier in the week from Proflikesubstance describing "Tenure Funk". Since I don't have tenure, it's a bit premature for me to say anything about the feelings after accomplishing that goal, however I think I'm on a trajectory towards something that resembles a funk. As a PI you work so hard to find ways to carry out experiments, pay for research, and take care of your people. You have to do what you can to make sure that the lab survives. When I started my lab, a friend (also a PI) described having to "sell out" in order to find ways to fund research. There are no doubt lucky people out there that can do exactly the research they want and find ways to pay for it, but I think there are a lot of us out there that find that our "lanes" diverge from where we thought they'd go. You have to do what you can to survive, and this often means downshifting your energy away from experiments you love to pursue the fundable. Aside from $, there are also institutional pressures that direct your research. I'm in a Plant Science department in a School of Agriculture. I feel compelled to work on systems and questions that my whole department and school can easily relate to. Sure, there are bits and pieces of research that would allow me to investigate recombination in bacterial populations in agricultural settings, but I find these projects more difficult to sell across the campus/school than applied projects.

In ecology, researchers talk about fundamental and realized niches. A fundamental niche is the total space/role that an organism can theoretically fill in nature if not affected by outside forces. The realized niche is the space that an organism actually fills in nature. As we progress through our careers, we all find out what our realized academic niche is (this may be what type of college you work at, what type of research you do, what industry you work in, etc...). This often differs from our fundamental academic niche because outside forces affect the direction of our lives. At certain points in our careers (like tenure time) there are benchmarks that force us to sit down and evaluate how we're doing. It's at these benchmark times that the divergence between our realized and fundamental academic niches becomes apparent. I can see pretty clearly now how these realizations could lead to "tenure funk" or related funks across careers. Can't say that I know how to fix it, although the idea of taking sabbaticals to think and develop projects is definitely appealing. I also have a feeling that realization of divergence between what I'm doing and what I'd do given unlimited funds is actually quite good in the end.

It just hit me that, in my 5th year review meeting, my department head made exactly this point. Tenure is a great time to evaluate the course of your career and what you'd change. Now that I've established a couple of interesting (and fundable so far, fingers crossed) systems, I can begin to tweak these to ask questions that align better with my intrinsic interest in bacterial evolution. When you start your lab everything goes so fast and time management is so difficult that you have to focus on only the most important things.  Now that I'm a few years in, I have a better sense of how to carry out smaller exploratory projects given time constraints. Being a PI is the greatest job in the world (IMO, for me) even though it's stressful as hell and bathes you in bad news most of the time. Truth is that it takes a few years to figure out how to navigate the system where the research you truly want to do may not be fundable. The research is still quite possible, it just takes some time and perspective to see how to get the paths to converge again.

Monday, September 28, 2015

The grass may look greener

Second post as I sit and think about my first five years running a lab...

Inevitably, there is going to be something about your lab/research situation that you're not happy with. If these things are within your control, great! You can hopefully fix them and move on. Other institutional things will be out of your immediate control or above your paygrade. With this second set, you can either look to find jobs elsewhere or find creative ways to make your current situation improve.

One of the most difficult things for me to deal with as a PI at Arizona is that there is no 'real' central Microbiology program. There are numerous smart and talented researchers across the campus who are microbiologists, but it's not as cohesive a unit as other places I've been. Part of this is historical inertia. Part of this is stuff that's over my paygrade. Part of this is that many of us have obligations to other programs on campus and simply can't devote the time that we would like to fostering those relationships. There are only 24 hours in the day, and my loyalty (for lack of a better word) has to be to Plant Sciences first and foremost because that is my home department.

For whatever reason, and I may definitely be the person to blame for this, I have felt a bit lonely on campus researchwise. Everyone else is doing great things, but I've never truly felt that other research interests on campus significantly overlapped with my own interests in evolutionary microbiology. Sure, I can bounce grant/experiment ideas off of people and receive very useful feedback, but I haven't been able to find a community of researchers on campus to discuss topics like "adaptation", "pleiotropy", or "horizontal gene transfer" in general ways. I really enjoy lofty discussions about where the field of experimental evolution is going, but I haven't met anyone else on campus to grab beers with and talk shop. If you're out there, please come find me! On top of all this, many of the microbiology folks associated with the EEB department here have up and left in the last few years.

I didn't appreciate it until recently, but our research careers are hugely shaped by the environment we are in. My last post was about how my research trajectory changed in the first five years of my lab. I can definitely say that those changes were precipitated by the kinds of scientific interactions available to me on campus. Lately I've been wondering, though: how would my own experiments or grants have changed if there were a couple more labs on campus generally involved in evolutionary micro (or if I knew about them)? Would I have a bunch of collaborations different from the ones I have right now? My research ideas would no doubt be reshaped if I changed universities and joined more of an EEB department; do I want that to happen now?

Without going into details, I think the problem described above is institutional. Without hiring numerous new PIs, which has been a bit difficult at state universities since 2008, there are only two options to remedy my intellectual withdrawal. I could change institutions, which has its own set of issues, or I could find a way to get the interactions I need off campus.

A couple of years ago I had to defend my time spent on Twitter to my department head. What I said then, and still say, is that Twitter has been an intellectual lifesaver in addition to any other tangible benefits. I can go there and find papers that I wouldn't have the time to search out otherwise. I can interact with people whose intellectual interests better align with my own, and carry out great discussions (to the extent that any character-limited discussion can be) about new results or research trajectories. I've tried to get better grant feedback by posting grants and asking for comments, which hasn't worked quite like I'd hoped but I think was still worthwhile. I connected with a couple of folks who were willing to read over other versions of grants and offer really constructive critiques.

Ecology/evolutionary microbiology seminars on campus are few and far between (though, for instance, EEB has had some micro people in to give seminars). Sometimes I've been able to invite people to give Plant Sciences seminars, but a speaker has to fill a certain niche for me to feel OK doing that. Given this context, microseminar has been another intellectual lifesaver and has filled one of my on campus blind spots. Along these same lines, I've participated in Google hangout journal clubs and am thinking about incorporating those kinds of things into my own lab meetings. It's fun to actually have the person who wrote the paper get in on the discussion, and with the magic of YouTube these discussions are archived for everyone to see. I'm going to try to work both of these activities into my lab meetings next term, because that way the time is already scheduled.

Long story short, there is no perfect situation as far as I can tell, but there may never have been. Some research environments may foster new discoveries (e.g., Bell Labs), but there were all sorts of downsides and infighting that happened there too. There are tons of letters (actual letters!) from back in the day between researchers talking about their ideas to each other, which provide a bit of coloring for how experiments are described in textbooks. I don't think I've ever written a letter to another researcher with a pen and paper; however, I've been able to find ways to satisfy some of my intellectual cravings through social media. FOIA requests aside, future historians are going to have a lot of archived tweets/blog posts/videos to sift through to understand how scientific revolutions happened in the 2000s. I think loneliness happens to everyone in this job at some point or another. At least for me, time spent interacting online has helped to quell these feelings a bit, and because of that it's time well spent.

Tuesday, September 22, 2015

The five year plan redux

Rounding into about my fifth year on the job as a PI, I've started to look back and think about how I made it to this point. This will probably be a series of posts as the ideas jump into my head, but today I've been wondering about how a lab's research focus changes.

Five years ago I sat down and thought about a five year plan for my lab. I was just coming off of a postdoc using comparative genomics to study virulence evolution in the plant pathogen Pseudomonas syringae. Given that I was (and still am) in a Plant Sciences department, I thought that it would be a good idea to continue studying virulence in P. syringae, and I knew that there were some interesting/safe results left to mine from my postdoc data. In parallel, I was thinking that I wanted to get back into studying experimental evolution of microbial populations. I actually chose the lab for my postdoc in order to get experience with Pseudomonas with the hope of eventually setting up such systems. For my "riskier" projects I wanted to use experimental evolution to look at the effects of horizontal gene transfer on adaptation.

For the first two or three years I followed my plan. I've been able to publish a few papers on virulence in P. syringae. I've got my experimental evolution system up and running and have published some of the necessary background work. There remain a bunch of different paths for either of those two projects that I'm excited to follow up on.

The amazing thing to me though is that I'm not currently funded to do any of that. I've written numerous grants (20?) for these projects across multiple agencies, but they just haven't been successful. A few of these grants came REEEAAAALLLYYYYY close to being funded, but just didn't make the cut. Getting any of these grants would have been great, but I think my research program is actually stronger because of those failures. Sure, every rejection email sucks, but I was constantly evaluating and reevaluating research directions. For the plant pathogen work, there was a lot of competition and my grants (even though they were solid, I think) just didn't stand out because there were many other labs doing approximately similar things. The experimental evolution work just didn't strike the right chord with the right people, again and again and again.

I've been lucky to get a handful of grants recently, but none of these projects was on my radar five years ago. One of the funded projects started out as a random email question between Betsy Arnold and me in about year 2 of my lab and has blossomed incredibly since then. It's one of those exciting projects where we find a new result every week or so, yet everything about the system remains pretty black-boxish. Another newly funded project started as an observation by my postdoc Kevin Hockett around year 3 of the lab. He started out playing around with diverse strains of P. syringae, seeing how these strains interacted with one another. We kept pushing the genetics of the system because nothing published could explain the results. Turns out we stumbled into a really cool evolutionary story.

The point of this whole post is that I had a plan, but the plan necessarily changed. Since grad school, I've imagined how my research career would look. Never did I think I'd end up in a Plant Sciences department (there are pluses and minuses, but that's a post for another time). The questions I thought my lab would be focusing on have fallen by the wayside. I'm still quite interested in them and have a variety of undergrads plowing ahead, but they aren't at the forefront anymore. The projects that have been successfully funded came together after I spent a couple of years focused on completely different topics. I'm an N of 1, and I have no idea if my story is shared by other researchers, but there are so many posts about how to be a PI that I figured I'd share this data point. I have no clue what the future truly holds, but I'm just going to keep being curious about the world because it's been good to me so far.

Friday, July 24, 2015

A healthy dose of skepticism and the need for editors

As some of you know, I've been engaged in an interesting dialogue as a reviewer for Frontiers recently. This got me thinking about the role of editors in the process of publication, but also about how my own brain interprets experimental data. I was originally going to write a couple of posts, but I think they work together so now you just get a singular post that's slightly longer.

Long story short, I currently disagree with the way that the authors have analyzed their data and am waiting to actually "endorse" the publication at a Frontiers journal. If you snoop around in a couple of months I'm guessing that you'll be able to figure out what I'm talking about, because at Frontiers the reviewers' names are listed openly on the final PDF. This whole process has led me to rethink the way that Frontiers actually performs review (which takes place in an interactive forum where authors respond directly to comments from reviewers). I love the idea of open, non-anonymous review and am strongly in favor of making public a record of review for each paper. For reasons I'll elaborate on below, though, I think this system is slightly flawed.

Maybe I've just grown cynical over the years, but the first thing I do when I get awesome new data is to question how I screwed up. Was everything randomized? Did the strains get contaminated? Etc., etc. Ideally all of these questions are answered by experimental controls, but I'm good at thinking of extravagant and elaborate ways in which I'm wrong. Nature is often quite good at this too, I've found, although that's the fun of biology (after a period of cursing the sky). Thanks in large part to this self-skepticism, I'm always thinking about the next ways to adequately control for experiments, which leads me to wait to pull the trigger on submitting publications. My grad school and PD advisors helped to rein in these skeptical tendencies and the slow roll of manuscript submissions just a bit by pointing out that nothing is ever perfect. The voices are always still there though.

These same tendencies act when I'm reviewing other papers. Sometimes things are easy to believe just by comparing summary stats to the reported data, but other times I'd like to see the primary data and dig my hands personally into the underlying statistical model/assumptions until I truly believe it. In many cases I have to actually ask to see this primary data, which is not great, but at least with anonymity I don't worry about directly questioning the authors' abilities. I mean, inherently, if you are asking for primary data because the stats seem wonky then you're implicitly questioning other people's abilities. When my name is not going to be known I don't worry as much about the social ramifications of it all, and I sleep better at night.

I am way too over-critical of my own experiments. A little bit of skepticism is healthy, but too much self-skepticism as a scientist paralyzes your career. Even as a reviewer I worry about being over-critical and asking for tedious and minuscule changes that might not ultimately matter. When you are knee deep in reviewing a paper it's easy to lose sight of the bigger picture. This is where the editor comes in. Each time we review a paper, we make a list of critical and less-than-critical things that need to be "fixed" before publication. Oftentimes the editors will read these lists from every reviewer and distill down the absolute requirements. Editors often have their own impression of what makes a publishable unit (that's for another post though; suffice it to say that's why direct track 2 submission to PNAS no longer exists). What I've come to think is that editors are absolutely required in the current publishing process. Reviewers and authors are on about the same level in the dynamic, but the editor inherently has an overriding sense of authority in the whole process. They can take reviewers' comments and immediately disregard the ones that aren't critical. They can emphasize to authors exactly everything that needs to be done. The authority is key because both reviewers and authors are deferential to it. As a reviewer I'm not worried about asking too small a question because 1) everything I write in the review is important to me and 2) I know that the good editors will know when I'm being too specific or nit-picky.

With this Frontiers article I've had to respond to the authors that "I'd like to see the primary data." Having received many reviews in my career, I know exactly how this comment will be received. When it comes from a reviewer directly it seems nit-picky and maybe even a bit of a personal affront; if the editor agrees, there is a bit more weight to the comment. It felt weird having to directly comment to the authors that I wanted to see their data. They're a good lab and I worry that their impression of me (since they'll know my name after it's published) will change for the worse. These are things you can't control, but that's how it goes.

For all of you out there who have papers I'll review in the future, know that I'm even harder on myself. I'd like to think self-skepticism is part of what makes me good at my job though.
