So what? About the relevance of (my) science
- Oct 12, 2022
- 4 min read
Updated: Dec 14, 2023
A near-rejection raises a fundamental question

I share the story of a publication that nearly got rejected. We combined one of the world’s highest-resolution Fourier transform infrared (FTIR) spectrometers (formerly THE world-record high-resolution FTIR) with the Swiss Light Source, a synchrotron radiation source. The goal was to measure the low-energy spectrum of a quantum magnet. With this project, I started my PhD in a new field. The learning curve was steep, with numerous technical issues to solve, and the work was laborious too. However, it finally paid off: I acquired data that no one else could take because our instrumental combination was so unique.
We summarized our measurements and analysis into a publication. I expected smooth sailing, but when we submitted, it was anything but: our work was almost shot down by the reviewers, and we had to go through three rounds of revisions. The major reason was that another group had recently performed measurements that overlapped significantly with our results. They used a different technique with less exquisite instrumentation, working at a more easily accessible wavelength of light, which in principle allowed similar conclusions. Physically, the reviewers argued, this was completely equivalent and thus did not justify a publication (in the respective journal). But this was not the sole criticism. We studied a model material that has been studied extensively, past and present, because it still conceals unresolved mysteries. Because of this significant body of existing work, combined with the recently published results, the reviewers viewed the novelty of our work as low – unless our analysis, which went further than previous work, proved to be useful. Parts of our data were indeed new, and all of it was at higher resolution than any previous work (who has a synchrotron radiation source and a 6-meter FTIR in their backyard? ;-) ). We were given the chance for major revisions. However, the most hurtful aspect was the request to demonstrate the usefulness of our work. It wasn't about the correctness or quality of our work, but about its usefulness. This is close to asking the famous “So what?” question.
There are multiple aspects to this question. Scientists rarely talk about the emotional one. A publication is a whole lot of hard work and a great deal of personal passion, and there is much at stake when it might be rejected during review. However, publications are professional works that leave no space for emotional statements. Some steps are utterly frustrating and take many attempts, long hours, and late nights. Some results presented in the publication were achieved only by working through many technical and circumstantial issues. Describing those does not add to the scientific value, and therefore they are not mentioned. Receiving criticism without acknowledgment of all this is never easy. But criticism and rejection are an integral part of academia. Most scientists have a longer list of rejected than accepted proposals, and most have also had papers rejected for various reasons. One can learn to deal with it, even though it's not easy at all – especially for early career researchers whose careers critically depend on it. But that’s a different story ;-). A small suggestion that might make a difference: reviewers could explicitly acknowledge the hard work and hidden challenges behind every piece of work – the authors' efforts, frustrations, and hopes – even though they are not mentioned in the paper. It helps in digesting criticism, which should be professional and never personal.
Getting the usefulness and significance of your work questioned is especially tough. It is also not constructive feedback: it is neither a suggestion for improvement nor criticism of something that could be changed. It goes deeper. It is like saying: “Your results and analysis are correct, but we don't care – unless you can find a good reason why we should.” Even though science commits itself to objectivity – a standard our work clearly met – subjective aspects play a decisive role when it comes to publication. While scientific integrity and validity are necessary conditions for publication, relevance is the sufficient one.
I could delve into a discussion of relevance, the publishing business and bibliometrics, impact factors, career decisions, false incentives, etc. Luckily, this is being actively and critically discussed by other colleagues. I focus on two aspects instead. First, I see science as a public servant. Its goal should be to expand the borders of our knowledge. Ultimately, this should serve the greater good, even if I acknowledge temporary (sometimes century-long) meandering towards more truth. Second, David Deutsch pointed out that no theory can predict the content of its own successor. It is a priori unknown which future steps will build on a current work. Therefore, it is also impossible to confidently assess the future impact of a publication. Consequently, every solid scientific work is a priori worthy of being published, and asking the “so what?” question of scientific work is fundamentally questionable.
Somebody pointed out that every scientist should be capable of explaining why they are interested in what they research. I agree. Even though we did so, the reviewers’ concerns were not allayed, and we were asked to explicitly demonstrate the usefulness of our work. We then showed that predictions of certain measured quantities, derived from our parameters, were sixty-fold more precise than in previous work. However, even this was not unambiguously received, but it paved the way to a final agreement with the editor in the third round.
In our case, the “so what?” question led to a long, frustrating, and work-intensive detour. I think this is a general signature of relevance being questioned. Because a “good motivation” and relevance involve similar degrees of subjectivity, they cannot be discussed objectively (cf. Deutsch’s argument), and doing so does not lead to fruitful outcomes. One could argue that our lengthy review process improved our paper. Yes, it did, but only minimally and at a disproportionate cost. You can do the math on how long it took from the beginning of the project to the end ;-)
I have not spent enough time on this to propose a solution to the issues with current publishing. But preprint servers and peer-reviewed open access feel like steps in the right direction, as does making the reviewing process publicly available. Critical reflection and discussion of the publishing process are an important step towards improvement. Finally, because science is an endeavor of the scientific community, I encourage everyone to share their experiences and stories – like I just did.