Journal rejection and low-stakes feedback

Daniel Devine
Oct 16, 2018

When learning about the publication process and the inevitable experience of rejection, early career researchers (ECRs) are often told that ‘rejection is part of the job’ and ‘even top professors get rejected’ — a recent journal article describes this well. The advice is usually to plough on, have faith in yourself, etc. This is true, and nice to hear, but to me it misses the point.

At least for me, the anxiety isn’t about getting rejected. I don’t take it personally; I recognise the somewhat arbitrary nature of peer review, and that many good papers get rejected — justly and unjustly. But many bad papers also get rejected, and many researchers get rejected often because their research is not of publishable quality. If they didn’t, peer review wouldn’t be serving its purpose. The problem for ECRs is that you don’t know which camp you fall into: are the rejections just down to luck of the draw, journal space, editor preference, or whatever, or is it because the quality or topic is not up to standard or of interest? This uncertainty only eases over time, through repeated interaction with the system; and only for those who choose to, and are lucky enough to, stay in the career long enough for experiences to accumulate.

Remedying the fear of rejection can’t be done by pointing out that good, respected researchers get rejected too. That won’t address the source of the worry: whether rejection signals the poor quality/ability of the work or not.

I think the problem — and solution — lies in acknowledging the ‘coarse, high-stakes feedback’ that the career is built on. People care because publishing research is seen as the main way — and the only important way — of attaining feedback. It not only points to the quality of the work, but is also what careers are built on. This is especially true at the early stages. In other research careers, you receive much more frequent, much lower-stakes feedback on the work you produce. Projects are more collaborative, involve lots of meetings and review processes, and don’t rely on a process like (external) peer review. And if the work is bad, the consequences are much lower. In academia, the in-depth feedback you receive on work can be minimal until the review process. Even once you get to the review process, it can still take months for minimal feedback. And that feedback can be completely dependent on anonymous reviewer selection.*

The way to make ECRs less concerned about the publication (rejection) process then is to provide more of this low stakes feedback on the quality of the work. During a PhD, this is the job of the supervisor(s) and the internal network. But many won’t have this internal network, won’t have engaged supervisors, and won’t have their own network to draw on. What can be done in this case, presuming the high-stakes, high-value and importance of publications is not going away any time soon?

  1. Make work public. By making work public through blogs and social media, feedback can be informal. This takes a bit of culture change; experienced researchers shouldn’t expect the work to be perfect and ECRs shouldn’t be worried about being judged for work in progress. In a (rejected!) article review, the reviewer began by saying that I should not have written a blog on the topic before it had passed review. I see the logic (that it had not been given the seal of quality) but think that the process of making work public and scrutinisable before submission increases quality. Besides, peer review can take years from first submission to eventual publication, and some research is topical and deserves to be out there. This also requires people to actively give their thoughts on the work, either privately or publicly.
  2. Reach out to people. Failing an internal network, ECRs should reach out to people within their field for feedback on drafts. And, of course, those more experienced should see this as normal — even if they decline to do so because of time or inclination!
  3. Make conferences worthwhile. One feedback mechanism we do have is conferences. Typically, you’d deliver a 10–20 minute presentation and have 10–20 minutes of Q&A. This can be useful, depending on the audience, but often the feedback is not on a paper and so not necessarily useful for the overall quality. Providing in-depth feedback is a lot of work and it is unreasonable to expect everyone to do it. But having a system in place, like a discussant, can be useful in getting at least one in-depth external opinion alongside the more general feedback from the presentation. It’s also important to choose conferences that maximise quality feedback. Even so, conferences can still feel ‘high stakes’, especially towards the end of the PhD or if the audience consists of experienced academics (enter graduate student conferences!).
  4. Establish a group within your cohort. This is directly for current PhDs. One way is to establish a feedback circle within your cohort, even if they do not work on similar topics. This is very low stakes and with a potentially high reward.

Perhaps I am completely wrong, and for most people it really is the rejection itself that stings, not what it signals about the quality of the work (or not). And there are certainly many other reasons rejection is shit: publication is crucial for a career, you may have been waiting six months for the review, and you may not have time to send the paper out and hear back before applying for future work. But that seems like a separate issue.

Ultimately, the issue with rejection is whether it signals the overall quality of the work or not, not the rejection itself. And the way to help that issue is by providing more frequent, low-stakes, high-quality feedback independent of the review process. These are just four potential ways I’ve found useful, but a real solution would have to be profession-wide.

*An experienced professor suggested to me that one way of easing the anxiety is to look at the reviews. Are they positive, but you got rejected anyway? Or were they harsh and undermined the study (methodologically, topic wise, etc)? In other words, the content of the rejection can help you decide which camp the paper falls into. The issue is that this depends on reviewers. I have one paper which is currently on its second rejection. The first set of reviewers completely dismissed it, seeing it as uninteresting and poorly executed. The second set of reviewers said it was ‘very important, interesting, well-researched and well-written’. Between the two reviews, I made zero changes. They also suggested not to worry until you hit 6 or more rejects — which is comforting, if not entirely solving the problem.
