As some of you know, I've recently been engaged in an interesting dialogue as a reviewer for Frontiers. It got me thinking about the role of editors in the publication process, but also about how my own brain interprets experimental data. I was originally going to write a couple of posts, but I think the two ideas work together, so instead you get a single, slightly longer post.
Long story short: I currently disagree with the way the authors have analyzed their data and am holding off on "endorsing" the publication at a Frontiers journal. If you snoop around in a couple of months, I'm guessing you'll be able to figure out what I'm talking about, because at Frontiers the reviewers' names are listed openly on the final PDF. This whole process has led me to rethink the way that Frontiers actually performs review (it takes place in an interactive forum where authors respond directly to comments from reviewers). I love the idea of open, non-anonymous review and am strongly in favor of making a record of review public for each paper, but for reasons I'll elaborate on below, I think this system is slightly flawed.
Maybe I've just grown cynical over the years, but the first thing I do when I get awesome new data is question how I screwed up. Was everything randomized? Did the strains get contaminated? Etc, etc... Ideally all of these questions are answered by experimental controls, but I'm good at dreaming up extravagant and elaborate ways in which I could be wrong. Nature is often quite good at this too, I've found, although that's the fun of biology (after a period of cursing the sky). Thanks in large part to this self-skepticism, I'm always thinking about the next set of controls to run, which leads me to hesitate before pulling the trigger on submitting publications. My grad school and postdoc advisors helped rein in these skeptical tendencies, and the slow-rolling of manuscript submissions, just a bit by pointing out that nothing is ever perfect. The voices are still there, though.
These same tendencies kick in when I'm reviewing other people's papers. Sometimes results are easy to believe just by comparing the summary stats to the reported data, but other times I'd like to see the primary data and personally dig into the underlying statistical model and assumptions until I truly believe them. In many cases I have to explicitly ask to see this primary data, which is not great, but at least with anonymity I don't worry about directly questioning the authors' abilities. After all, if you're asking for primary data because the stats seem wonky, you're implicitly questioning someone's competence. When my name isn't going to be known, I don't worry as much about the social ramifications of it all, and I sleep better at night.
I am way too critical of my own experiments. A little skepticism is healthy, but too much self-skepticism as a scientist paralyzes your career. Even as a reviewer I worry about being over-critical and asking for tedious, minuscule changes that might not ultimately matter. When you're knee-deep in reviewing a paper, it's easy to lose sight of the bigger picture. This is where the editor comes in. Each time we review a paper, we make a list of critical and less-than-critical things that need to be "fixed" before publication. Oftentimes the editor will read these lists from every reviewer and distill out the absolute requirements. Editors also have their own impression of what makes a publishable unit (that's for another post though; suffice it to say that's why direct track 2 submission to PNAS no longer exists). What I've come to think is that editors are absolutely required in the current publishing process. Reviewers and authors are on roughly the same level in the dynamic, but the editor inherently has an overriding sense of authority over the whole process. They can take reviewers' comments and immediately disregard the ones that aren't critical. They can emphasize to authors exactly what needs to be done. That authority is key, because both reviewers and authors are deferential to it. As a reviewer I'm not worried about asking too small a question because 1) everything I write in a review is important to me, and 2) I know that the good editors will recognize when I'm being too specific or nit-picky.
With this Frontiers article, I've had to respond to the authors directly that "I'd like to see the primary data". Having received many reviews in my career, I know exactly how this comment will land. When it comes from a reviewer directly, it seems nit-picky and maybe even a bit of a personal affront. If the editor agrees, the comment carries a bit more weight. It felt weird commenting directly to the authors that I wanted to see their data. They're a good lab, and I worry that their impression of me (since they'll know my name once it's published) will change for the worse. These are things you can't control, but that's how it goes.
For all of you out there whose papers I'll review in the future, know that I'm even harder on myself. I'd like to think self-skepticism is part of what makes me good at my job, though.