This graph offers additional detail on the overall rejection rates at the Royal Society’s Transactions and Proceedings in the second half of the twentieth century. As I discussed in that earlier post, the Royal Society historically had a low rejection rate (around 10-15%), because papers had to be submitted via a fellow, a requirement that filtered out weaker papers before they ever reached the Society.
This graph shows that the pattern of rejections may have been broadly similar across all subject fields in the 1950s-60s, but by the later 1970s and 1980s, the Society’s editorial processes were rejecting a higher proportion of papers submitted in the physical sciences than in the biological sciences.
Considered alongside the graph showing papers submitted in those two areas, this raises an interesting set of questions. Throughout the twentieth century, the Society had received and published more papers in the physical sciences than in the biological sciences. Every so often, suggestions were made that more needed to be done to attract more (and better) papers in the biological sciences, but the health of the physical science journals seems to have been a less pressing concern – though there was awareness that the Society was not experiencing the postwar growth seen at other physical science journals.
But this does not easily explain why, from the mid-1970s on, the Society was rejecting a higher proportion of its physical science submissions – and at a time when the number of those submissions was falling.
It is possible that a closer look at the rise of ‘discussion meetings’ from the mid-1960s – and the publication of their papers as a thematic issue of Transactions A – might shed some light on this.
Where do the data come from?
The data come from the Society’s annual reports: key performance indicators were regularly published in the Year Book of the Royal Society, and then (in the 1980s) in the Society’s Annual Report. The indicators included the number of papers submitted, and the number rejected. Earlier data on submissions survive, but not (easily) for rejections. They relate to four journals: Philosophical Transactions series A and B; and Proceedings series A and B.
I looked at the numbers of submissions to the Royal Society journals in an earlier post. Here, we look at the relationship between the number of submissions, the rejection rate and the sustainability of peer review.
Graph 1 clearly shows two phases: a period when the Royal Society’s Transactions and Proceedings received a modest number of submissions, and published most of them; and a period when its journals received and published many more papers, but also rejected many more.
The transition point lies in the gap in the data (for an explanation, see below): in 1990, significant changes were made to the organisation and scope of the Royal Society’s journals. Among them, and almost silently, the traditional requirement that papers could only be communicated via a fellow was finally dropped, and direct submissions became the norm. It meant that the Society’s journals were now open to a much wider pool of authors: no longer limited to those whose personal or professional networks intersected with those of one of the Society’s mostly UK-based fellows. The number of submissions rose; and would rise further in the early twenty-first century when the Society made a concerted effort to reach a global pool of authors.
The requirement for communication via a fellow had been experimentally (and briefly) removed in the mid-1970s. At that time, the editorial office reported that the result was more articles, but few of them worth publishing. This was taken as evidence that the communication requirement was not a significant obstacle to the submission of publishable papers, and that it remained a valuable way of saving editorial effort by filtering out unpublishable ones.
The post-1990 experience suggests that the Society did manage to find significantly more papers worth publishing, but it is interesting that the biggest growth in submissions came in the early 2000s, rather than immediately after the removal of ‘communication’, i.e. it came after the Society made more effort at author-marketing. This suggests that, if you want to attract more papers from a wider diversity of people, simply removing an obstacle is not enough: you have to actively reach out to potential new groups of authors. By 2010, over 60% of authors in Royal Society journals were from outside the UK; it had only been 10% in 1950.
Increased submissions mean an increase in editorial and reviewing work, to assess, select and improve the papers for publication. We have written elsewhere (Fyfe et al, 2020) about the challenges that faced the Royal Society’s editorial and review system in the early twentieth century, when the Society relied entirely on its own fellowship for reviewers. The problem of a limited pool of reviewers trying to cope with an expanding pool of submissions had been partly solved in 1969 when the Society began asking non-fellows to act as reviewers.
Graph 1 shows that the early-to-mid twentieth-century strains on peer review pale in comparison to those of the post-1990 period. It is also clear that the ‘cost/benefit’ ratio of editorial and reviewing work (in time and money) must be considerably higher in an era of high rejection rates than it had been in the pre-1990 era.
Graph 2 shows the ‘effective rejection rate’: it includes all articles not published, though that may include articles withdrawn by their authors, or never resubmitted after revisions. Historically, the Royal Society’s rejection rate had been only about 10-15%, thanks to the filtering-out performed by the requirement that papers be communicated via a fellow. The post-1990 rejection rates certainly mark a different phase.
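For readers who want to reproduce this measure from the published submission and publication counts, the calculation is straightforward. A minimal sketch (the figures in the example are illustrative only, not the Society’s actual data):

```python
def effective_rejection_rate(submitted: int, published: int) -> float:
    """Share of submitted papers that were never published.

    This counts everything not published: papers formally rejected,
    withdrawn by their authors, or never resubmitted after revisions.
    """
    if submitted <= 0:
        raise ValueError("submitted must be positive")
    if published > submitted:
        raise ValueError("published cannot exceed submitted")
    return (submitted - published) / submitted

# Illustrative figures only: 200 submissions, 170 published
# gives a 15% effective rejection rate, in line with the
# historical 10-15% range described above.
print(f"{effective_rejection_rate(200, 170):.0%}")  # → 15%
```

Note that because withdrawals and non-resubmissions are indistinguishable from outright rejections in the annual counts, this measure will tend to run slightly higher than the rate of formal editorial rejection.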
Incidentally, the notion that a high rejection rate could be seen as a proxy for quality was definitely not present at the Royal Society before 1990, and nor was it apparent during the 1990s, when editorial staff regarded the newly increased rejection rate of 30% as a real worry, and a threat to the sustainability of the Society’s editorial and review processes.
Where do the data come from?
The data for 1952-1984 come from the Society’s annual reports: key performance indicators were regularly published in the Year Book of the Royal Society, and then (in the 1980s) in the Society’s Annual Report. The indicators included the number of papers submitted, and the number rejected. Earlier data on submissions survive, but not (easily) for rejections. The submissions/rejections in this first batch are for four journals: Philosophical Transactions series A and B; and Proceedings series A and B.
The data for 1994 onwards come from the Society’s current electronic database, courtesy of Publishing Director Stuart Taylor. Re-submissions have been excluded for the period after 2000. The submissions/rejections in this batch relate only to the journals then defined as ‘research journals’. Thus, both Transactions are excluded (because they now carry invitation-only thematic review issues); but the new research journals (such as Open Biology and Interface) are included.
The gap in the data arises from the fact that the Royal Society stopped publishing details of its number of rejections in the mid-1980s; and the electronic archive only goes back to the mid-1990s. The missing data probably survive in paper form, somewhere in the archive.
For more details on the earlier part of the story, see Fyfe, A., Squazzoni, F., Torny, D., & Dondio, P. (2020). Managing the Growth of Peer Review at the Royal Society Journals, 1865-1965. Science, Technology, & Human Values, 45(3), 405–429. https://doi.org/10.1177/0162243919862868
The Royal Society has been asking for expert advice on papers submitted for publication since the 1830s, and quality (or something like it) has always been one of the elements under consideration. Here, I investigate how the definition of ‘quality in peer review’ has changed over time.
The protections offered by copyright have enabled authors – and their publishers – to make a living from their works since the first copyright act, for ‘the Encouragement of Learning’, was passed in 1710.
Academic authors, however, do not depend upon copyright for their livelihoods. Instead, for many researchers, copyright has come to seem like a tool used by publishers to pursue commercial, rather than scientific interests. Notably, open access advocates have long argued for changes to the ways researchers use copyright, a position that has recently found support in Plan S’ mandate for the use of Creative Commons licences as an alternative.
This piece on the history of peer review at the Royal Society and the problem of unconscious bias originally appeared on the RS Publishing Blog, 10 Sept. 2018, as part of Peer Review Week 2018.
Peer review cannot be done by everyone. It can only be done by people who share certain levels of training and subject-expertise, and have a shared sense of what rigorous experimentation, observation and analysis should look like. That shared expertise and understanding is what should enable alert peer reviewers to reject shoddy experimental methods, flawed analysis and plans for perpetual motion machines.
But as we have increasingly come to realise, any group of people with shared characteristics may display unconscious bias against outsiders, whether that means women, ethnic minorities, or those with unusual methods. While peer review should exclude poor science, it should not exclude good research on the basis of the individual traits or institutional affiliation of the researchers, nor should it dismiss innovative approaches to old problems.
However, it seems socio-cultural and intellectual criteria have often been mixed together in the peer review process, and history can help us to understand why.
In the light of subsequent developments in the management and ownership of scientific journals, the Code’s insistence upon scholarly control of academic journals is notable. It was written at a time when the growing involvement of commercial publishers in academic publishing was becoming visible.
“The present tendency for commercial publishers to initiate new scientific journals in great numbers is causing concern to many people. With the expansion of established sciences and advances into new fields and disciplines it is evident that new journals are necessary.”
With this year’s Peer Review Week focusing on diversity, there has been a lot of discussion of the changes that could or should be made to ensure that peer review is not being done by people who all think the same, or who all share the same implicit biases. Our historical data have some rather striking things to say about the effectiveness of certain kinds of intervention (On why diversity matters to peer review, see ‘Then and now’).
We have been able to count the number of women who were invited to act as reviewers of papers submitted to the Royal Society from the 1920s onwards. We can compare these figures with those for women authors, and women Fellows of the Society.
The number of women fellows increased steadily after 1945 (when women were first admitted to the Royal Society), but very slowly: the proportion of fellows who were women had reached only 3.5% in the 1980s, and was still only 8% in 2017.
The participation rates of women as authors and as reviewers do not follow the same trend.
In the days before photocopiers, getting hold of an offprint from the author was a useful way of getting a copy of the text, tables, images and formulae of a scientific article without having to copy it by hand from a library volume. The ways in which offprints circulated – whether requested by authors in locations where the journal was not available, or distributed strategically by the author to people s/he wanted to impress – is an intriguing element of the sociology of scientific communication.
The history of offprints also illustrates the long history of out-of-commerce circulation of scientific knowledge. Even when the issues, parts or volumes of the published journal were available for public sale, authors could send their private supply of offprints to colleagues, friends and potential sponsors. This long tradition still holds true in the digital world, where printed copies have been replaced by PDFs: most publishers will still supply authors with a PDF for circulation through their networks.
As well as providing an out-of-commerce route for circulation, offprints also (in certain historical periods) provided a route for more rapid circulation. They were originally available more quickly than the collated issues or bound volumes of the journal in which the article formally appeared.
In this post, we will discuss what the Royal Society’s archive can reveal about the history of offprints.
“Research qualifications are now more and more insisted upon for appointments to academic and other posts, and appointing bodies have often no means of discriminating between important and trivial research, except the particular medium of publication. The publications of the Society have always been recognized as of exceptionally high standard, and special significance has been attached to papers published in them. Should such discrimination between publications become obsolete or even weakened, a spate of trivial papers may easily outweigh, in the minds of lay persons, a few really valuable contributions, with results ultimately detrimental to the best interests of Science.”
So wrote mathematician (and fellow of the Royal Society) Louis Filon, in the summer of 1936.
This graph shows the number of papers submitted to the Royal Society over the course of (roughly) the twentieth century. It includes papers that would ultimately be published in both Transactions and Proceedings, as well as papers that were never published.