{"title":"The Unfair Burden of Rejection on Researchers: Transitioning From Editors as Gatekeepers to Facilitators of Knowledge Production","authors":"Minh-Hoang Nguyen, Quan-Hoang Vuong","doi":"10.1002/leap.2027","DOIUrl":null,"url":null,"abstract":"<p>Academic journals are often seen as key gatekeepers in the dissemination of scientific knowledge, with editors and reviewers playing a central role in evaluating the quality of submissions, distributing professional rewards, and shaping future research (Siler et al. <span>2015</span>). Through the editorial process and peer review, journals determine which research is published and which is rejected. This responsibility demands that rejection decisions be made fairly, transparently, and in the best interest of scientific progress. However, when a paper is rejected, the focus is almost always on the shortcomings of the research itself rather than on the limitations within the journal. Given that the rejection process can significantly impact authors' mental health and career, this article examines the responsibility of journals in rejection decisions stemming from their own limitations by drawing on our 304 recorded rejection letters since 2022. Based on the Granular Interaction Thinking Theory (GITT) perspective on the rejection mechanism (Vuong and Nguyen <span>2024a</span>), we also provide insights into the issue and its broader implications.</p><p>Granular interaction thinking, a theory inspired by quantum mechanics and information theory (Hertog <span>2023</span>; Rovelli <span>2018</span>; Shannon <span>1948</span>), views knowledge production as a dynamic, probabilistic, multi-stage process that requires contributions from many individuals (Vuong and Nguyen <span>2024b</span>). 
In this view, each scientific work can be seen as a ‘quantum’ of information that is produced through the interactions between new observations, theoretical formulations, and useful knowledge accumulated in previous states of knowledge production. Without any prioritisation or filtering mechanism, if every submitted paper were published, the entropy of the knowledge system would be maximal—useful and flawed information would be mixed indistinguishably, making it very hard for researchers and the public to identify reliable and valuable knowledge. In such a scenario, the probability of identifying reliable and valuable scientific works for subsequent knowledge production would be highly uncertain (Vuong and Nguyen <span>2024b</span>).</p><p>Journals help mitigate this problem by acting as information quality filters, though their effectiveness is probabilistic rather than deterministic. By subjecting manuscripts to editorial screening and peer review, journals increase the likelihood that credible, relevant, and high-quality research enters the circulation of scientific literature. In GITT's terms, the editorial screening and peer review processes help reduce entropy in the knowledge pool, allowing subsequent researchers (the next ‘state’ of knowledge production) to find and build upon reliable and useful scientific works more easily. From this perspective, journals carry the responsibility of being ‘gatekeepers’ of knowledge quality, striving to transmit valuable information from the current state of science (State 1) to the next (State 2) with minimal noise (Vuong and Nguyen <span>2024b</span>).</p><p>However, these evaluation and filtering processes are not infallible (Siler et al. <span>2015</span>). Rejections are not always based solely on a paper's quality; editorial and logistical constraints, strategic and policy considerations, ethical and political factors, and the capabilities and subjectivity of editors and reviewers also influence them. 
Editorial and logistical constraints, such as a shortage of available reviewers, high submission backlogs, and editors' lack of expertise, can limit a journal's ability to effectively process, evaluate, and disseminate knowledge to the right audience, leading to rejection. For strategic and policy decisions or preferences, some journals prioritize papers they expect to generate high citation counts, potentially sidelining rigorous but less ‘trendy’ research, let alone the controversial or hard-to-interpret. Additionally, there also exist biases toward well-known researchers or institutions, creating barriers for early-career and developing-country researchers seeking to publish their work (Kulal et al. <span>2025</span>; Teplitskiy et al. <span>2022</span>).</p><p>Such strategic and policy decisions, along with inherent biases, stem from the commercialization of science into market mechanisms, in which the knowledge production process is dominantly shaped by a publishing-led model of science. In this system, as Pattinson and Currie (<span>2025</span>) note, ‘behaviours and actions that benefit publishing are rewarded, whether or not they benefit science—and in some cases even if they are to its detriment’. This dynamic can distort evaluation and filtering processes by aligning them with what is profitable for publishers, with the top five companies controlling more than 61% of the publishing market in 2022 (Crotty <span>2023</span>). Importantly, these pressures are not confined to commercial publishers alone; even non-commercial and society-led publishers face strong incentives to optimise publishing operations in order to sustain their existence and development within this system (Pattinson and Currie <span>2025</span>).</p><p>Besides the systematic reasons making the evaluating and filtering process fallible, editors and reviewers are not immune to limitations, subjectivity, or bias (Rubin et al. <span>2023</span>; Smith <span>2006</span>; Srivastava et al. 
<span>2024</span>). No matter how rigorous the guidelines, they are still human, with inherent blind spots and intellectual constraints. A study or theory that challenges the prevailing paradigm may be dismissed by those who are deeply invested in maintaining the status quo (Macdonald <span>2016</span>). Editors may unconsciously favor work that aligns with their expertise and worldview while viewing unfamiliar or unconventional ideas with skepticism. Additionally, if a manuscript criticizes the work of influential figures on the journal's editorial board or addresses politically sensitive topics, it may be rejected not due to a lack of merit, but to avoid controversy.</p><p>As a result, valuable research may be rejected—not due to major flaws, but because journals must manage limited resources, uphold their brand and prestige, and, at times, avoid publishing works that do not align with existing knowledge frameworks or the expectations of ‘gatekeepers’. In short, countless rejections in history have been unjustifiable, including those manuscripts that would later on deserve a Nobel prize.</p><p>Nevertheless, for individual researchers, journal rejections are more than just filtering mechanisms—they often carry significant emotional and career consequences. Studies have shown that many academics perceive manuscript rejection as a personal failure, experiencing negative emotions such as shame, disillusionment, and self-doubt (Woolley and Barron <span>2009</span>). Repeated rejections can erode confidence, exacerbate impostor syndrome, reduce creativity and productivity, and even lead some to consider leaving academia (Day <span>2011</span>; Hoover and Lucas <span>2024</span>; Jaremka et al. <span>2020</span>). This human aspect underscores the responsibility of journals to handle rejections with care and transparency. A decision letter that lacks clear reasoning—or is unduly harsh in tone—can amplify confusion and resentment. 
Although the rejection process is intended to filter out specific units of information—the submitted paper—rather than evaluating the researcher's competence, knowledge, research direction, or approach, ambiguous rejection decisions create uncertainty about the reasons for non-acceptance. This uncertainty can challenge the author's self-esteem, professional identity, and career resilience (Horn <span>2016</span>; Walker <span>2019</span>).</p><p>Therefore, when manuscripts are rejected for editorial or logistical reasons, strategic or policy considerations, or biases without clear explanation, it places an unfair burden on authors by leading them to question the quality of their work rather than recognizing the underlying constraints (e.g., journal scope, availability of reviewers, editorial workload, logistical limitations) and preferences (e.g., perceived fit, novelty, trendy topics, author's reputation, citation potential, biases) of the journal.</p><p>To better understand the types of information that journals provide when making rejection decisions, we compiled and analyzed 304 rejection letters received by our team since 2022. These letters resulted from the submission of 65 manuscripts—including both research and perspective articles—to 241 different journals.</p><p>Among these, desk rejections (Type A) were the most prevalent, accounting for 87.5% (266 letters) of the total. For Type B and Type C rejections, editors generally base their decisions on both their assessments and reviewers' evaluations, providing clear and specific reasons for rejection. In contrast, Type A rejection letters lacked clarity, often offering vague or generalized explanations.</p><p>Among the 266 desk-rejection letters, a large proportion cited generic reasons: 40.60% (108 letters) simply stated that the manuscript did not meet the journal's criteria, while 18.8% (50 letters) mentioned the strict evaluation process and low acceptance rate of the journal as the reason for rejection. 
Such tautological explanations function less as genuine reasons than as reassurances that rejection is a common outcome, offering little to no useful insight for authors. Some journals provided more specific feedback, such as the manuscript is out of scope (99 letters, accounting for 37.22%) or lacking novelty/significance (55 letters, accounting for 20.68%), yet even in these cases, the reasoning remained ambiguous—41.41% of letters citing scope mismatch failed to specify why the manuscript was out of scope, and 47.27% of letters rejecting for lack of novelty/significance did not clarify what aspects were insufficient.</p><p>In contrast to the high percentage of vague rejection letters attributing more or less the rejection decision to researchers' papers, only a small fraction of letters attributed rejections to journal-side limitations—just 2.63% cited a lack of suitable reviewers, 0.75% mentioned a high submission backlog, and only 0.38% indicated that the journal lacked the relevant expertise to assess the manuscript.</p><p>When selecting journals for submission, we primarily relied on keyword matches between our papers and the journal's aims and scope, along with recommendations from Scimago for journals in the same field. While it is acknowledged that some submissions may fall outside a journal's scope or have certain weaknesses, the claim that over 97% of rejections were solely due to authors' shortcomings or the journal's rigorous evaluation standards appears unconvincing.</p><p>Although these figures cannot lead to definitive conclusions, they suggest that journals tend to position themselves as the standard of quality, implicitly framing rejected research as inherently unqualified. This tendency inherently shifts the burden of rejection and its negative consequences onto authors. 
Given that the publishing model and editors are also subject to limitations, subjectivity, and biases, it is worth questioning whether the current rejection mechanism is functioning properly and fairly when it imposes an undue burden on authors—the individuals who are the main producers of knowledge—and considers this a normal ‘healthy’ process (Macdonald <span>2016</span>). Moreover, when promising papers are rejected and never resubmitted, valuable insights are lost to the scientific record. Should the authors also be held accountable for this loss of knowledge and the wasted resources resulting from such neglect? (Vuong <span>2018</span>).</p><p>Given the challenges discussed, one key recommendation is to foster a culture of co-production of knowledge in the publishing system. In this co-production culture, editors should see themselves as facilitators of knowledge generation and the dissemination process—collaborating with the authors to advance humanity's understanding of the world—rather than gatekeepers of science that try to impose ‘prestigious’ standards on researchers. As facilitators, the roles of editors should be to increase the probability of storing and disseminating reliable and useful knowledge, support authors to refine and polish newly generated insights, and ensure that knowledge is allocated to the right people—those who can recognise and maximise its value and usefulness.</p><p>A key prerequisite for fostering a culture of co-production in scientific publishing is embracing intellectual humility in the evaluation and decision-making process (Vuong and Nguyen <span>2024b</span>). Intellectual humility requires editors not only to approach each manuscript with openness—recognising its potential merit even if it challenges their prior beliefs or expertise—but also to be honest about their own limitations. 
Transparently communicating these limitations to authors (e.g., difficulty securing qualified reviewers, lack of relevant expertise, high submission backlog) is a clear demonstration of humility and professional integrity.</p><p>Rejections are certainly not pleasant, but they can be made more transparent and constructive (Vuong <span>2020</span>; Vuong and Nguyen <span>2024b</span>). Such a rejection—one that explains the decision and offers guidance—can reduce the stigma and frustration discouraging researchers from pursuing new ideas and can be perceived as part of professional growth, helping researchers refine their work and navigate the publishing landscape more effectively. Thus, transparently communicating the journals' limitations in assessing scientific studies should be widely embraced and endorsed by the scientific community, as it reinforces the role of editors as true facilitators of knowledge production. By ensuring that promising scientific ideas are not prematurely dismissed and by alleviating the undue burden of rejection on authors, editors as facilitators can contribute to a more equitable and progressive scholarly ecosystem. Investigating the systematic templates and guidelines used for rejection letters across journals and publishers could provide valuable insights into current practices, thereby informing future efforts to enhance the transparency and informativeness of rejection letters.</p><p>To foster intellectual humility, editors and reviewers need to be trained to acquire the thinking capability similar to that of the Nature Quotient (NQ), a kind of intelligence that enables humans to perceive, process, and organize information about ecological interdependences and dynamic interactions among complex ecosystems (Vuong and Nguyen <span>2025</span>). 
With such capabilities, they would be more likely to see themselves as part of a scholarly publishing ecosystem that operates through the direct interactions of authors, editors, and reviewers, as well as the indirect involvement of governments, funders, institutions, and the public—rather than positioning themselves as superior authorities over authors. In fact, several proposed and implemented publishing paradigms and models reflect this ecosystemic vision of knowledge production, in which editors, reviewers, and authors co-produce knowledge. Examples include the science-led publishing paradigm and the publish–review–curate (PRC) model (Corker et al. <span>2024</span>; Pattinson and Currie <span>2025</span>). Within these paradigms, reviewer recommendations function as advisory inputs that assess the strengths and weaknesses of a paper, while editors provide expertise, guidance, and facilitation to coordinate the review process and curate knowledge.</p><p>Some scholars may argue that journals implementing such paradigms and models (such as <i>eLife</i>, MetaROR, and <i>Lifecycle Journal</i>) are less rigorous than conventional ones. Determining the precise effectiveness of these new paradigms and models will require time, experimentation, and validation. Nevertheless, their emergence creates new channels through which disruptive and breakthrough research can be communicated, reducing the risk of information loss inherent in conventional publishing models (Vuong and Nguyen <span>2024b</span>). This value is particularly critical today, as the risk of information loss of valuable knowledge is increasing due to the rapid growth of research outputs, while conventional publishing models are imposing additional layers of quality control (e.g., AI-based methods). 
It is perhaps not coincidental that the science-led publishing paradigm and the publish–review–curate model bear some resemblance to the editorial approach of <i>Annalen der Physik</i>—the journal in which Albert Einstein published his four groundbreaking papers in 1905—whose acceptance rate reached as high as 90%–95%. The editor, physicist Max Planck, once remarked that his editorial philosophy was ‘to shun much more the reproach of having suppressed strange opinions than that of having been too gentle in evaluating them’ (Spicer and Roulet <span>2014</span>). From the perspective of humanity as a whole, ensuring that even a single paper that has a widespread impact like those of Einstein is not forgotten or lost to obscurity would, in itself, more than justify the existence and value of a journal and its underlying model or paradigm.</p><p>In addition, journals should emphasise in editorial training that novelty should not be conflated with a lack of quality. Editors should be encouraged to distinguish between ‘this result is surprising or challenges expectations’ and ‘this result is invalid’. Additionally, they should regularly ask themselves, ‘Am I capable of assessing these unfamiliar results or ideas?’ Likewise, editors can actively seek diverse opinions, especially for papers that challenge mainstream thought. When rejecting a submission, editors can also take a more constructive approach by suggesting alternative venues where the work may be more appropriately received. Such practices help keep valuable research in circulation, increasing its chances of eventually finding a home and contributing to the broader scientific discourse.</p><p>In conclusion, while editors play a crucial role in reducing uncertainty and upholding quality in the knowledge production process, they are also subject to biases and limitations in expertise. 
However, based on our 304 recorded rejection letters, we found that over 97% of rejections were attributed to shortcomings on the part of the researchers. This pattern suggests that journals often position themselves as the standard of quality, implicitly framing rejected research as inherently unqualified. This practice disproportionately shifts the burden and emotional toll of rejection onto authors, discouraging them from pursuing bold, innovative ideas and, in some cases, even pushing them to leave academia. In addition, appealing to post-review rejection decisions has never seemed practical at all.</p><p>To address this issue, we advocate for a co-production culture within the publishing system—one that reconsiders editors not as gatekeepers but as facilitators of knowledge production. By institutionalising intellectual humility values into such a culture, journals can minimise the risk of dismissing valuable knowledge simply because it does not conform to existing paradigms. At the same time, they can help mitigate the disproportionate stress and pressure rejections impose on researchers, ultimately fostering a more equitable and dynamic scientific ecosystem.</p><p><b>M.-H.N. and Q.-H.V.:</b> conceptualization, writing – review and editing. <b>M.-H.N.:</b> formal analysis, investigation, resources, writing – original draft preparation. <b>Q.-H.V.:</b> supervision, project administration. 
All authors have read and agreed to the published version of the manuscript.</p><p>The authors have nothing to report.</p><p>The authors declare no conflicts of interest.</p>","PeriodicalId":51636,"journal":{"name":"Learned Publishing","volume":"38 4","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2025-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/leap.2027","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Learned Publishing","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/leap.2027","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
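As an aside, the entropy framing used in the opening sections (an unfiltered knowledge pool maximises entropy; screening reduces it) can be made concrete with Shannon's formula H = -Σ pᵢ log₂ pᵢ. The sketch below is a toy illustration with made-up proportions, not an analysis drawn from the article: it treats a reader's draw from the literature as a binary "reliable vs flawed" outcome and shows that a 50/50 mixture maximises entropy at 1 bit, while a filtered pool carries less uncertainty.

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Toy model: a reader draws one paper and asks "is it reliable?".
# Unfiltered pool: reliable and flawed work mixed 50/50 -> maximal entropy.
unfiltered = entropy([0.5, 0.5])   # 1.0 bit, the maximum for two outcomes
# Hypothetical filtered pool where screening raises the reliable share to 90%:
filtered = entropy([0.9, 0.1])     # ~0.47 bits

print(f"unfiltered: {unfiltered:.3f} bits, filtered: {filtered:.3f} bits")
```

The numbers (50/50, 90/10) are assumptions chosen only to show the direction of the effect; GITT's actual claim is qualitative.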
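As an arithmetic check on the desk-rejection statistics reported above, the short script below recomputes each percentage from the underlying letter counts. The three journal-side counts (7, 2, and 1 letters) are back-calculated from the reported percentages of the 266 desk rejections rather than stated explicitly in the article, so they are our inference:

```python
# Desk-rejection statistics from the analysis of 304 rejection letters.
total_letters = 304
desk = 266  # Type A (desk) rejections

# Letter counts per cited reason. Categories overlap because a single
# letter can cite more than one reason, so percentages sum past 100%.
reasons = {
    "did not meet the journal's criteria": 108,
    "strict evaluation / low acceptance rate": 50,
    "out of scope": 99,
    "lacking novelty or significance": 55,
    "no suitable reviewers (journal-side)": 7,    # inferred from 2.63%
    "high submission backlog (journal-side)": 2,  # inferred from 0.75%
    "journal lacked expertise (journal-side)": 1, # inferred from 0.38%
}

print(f"desk rejections: {desk}/{total_letters} = {desk / total_letters:.1%}")
for reason, n in reasons.items():
    print(f"  {reason}: {n}/{desk} = {n / desk:.2%}")
```

Running this reproduces the figures in the text: 87.5% desk rejections, and 40.60%, 18.80%, 37.22%, 20.68%, 2.63%, 0.75%, and 0.38% for the respective reasons.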
引用次数: 0
Abstract
Academic journals are often seen as key gatekeepers in the dissemination of scientific knowledge, with editors and reviewers playing a central role in evaluating the quality of submissions, distributing professional rewards, and shaping future research (Siler et al. 2015). Through the editorial process and peer review, journals determine which research is published and which is rejected. This responsibility demands that rejection decisions be made fairly, transparently, and in the best interest of scientific progress. However, when a paper is rejected, the focus is almost always on the shortcomings of the research itself rather than on the limitations within the journal. Given that the rejection process can significantly impact authors' mental health and career, this article examines the responsibility of journals in rejection decisions stemming from their own limitations by drawing on our 304 recorded rejection letters since 2022. Based on the Granular Interaction Thinking Theory (GITT) perspective on the rejection mechanism (Vuong and Nguyen 2024a), we also provide insights into the issue and its broader implications.
Granular interaction thinking, a theory inspired by quantum mechanics and information theory (Hertog 2023; Rovelli 2018; Shannon 1948), views knowledge production as a dynamic, probabilistic, multi-stage process that requires contributions from many individuals (Vuong and Nguyen 2024b). In this view, each scientific work can be seen as a ‘quantum’ of information that is produced through the interactions between new observations, theoretical formulations, and useful knowledge accumulated in previous states of knowledge production. Without any prioritisation or filtering mechanism, if every submitted paper were published, the entropy of the knowledge system would be maximal—useful and flawed information would be mixed indistinguishably, making it very hard for researchers and the public to identify reliable and valuable knowledge. In such a scenario, the probability of identifying reliable and valuable scientific works for subsequent knowledge production would be highly uncertain (Vuong and Nguyen 2024b).
Journals help mitigate this problem by acting as information quality filters, though their effectiveness is probabilistic rather than deterministic. By subjecting manuscripts to editorial screening and peer review, journals increase the likelihood that credible, relevant, and high-quality research enters the circulation of scientific literature. In GITT's terms, the editorial screening and peer review processes help reduce entropy in the knowledge pool, allowing subsequent researchers (the next ‘state’ of knowledge production) to find and build upon reliable and useful scientific works more easily. From this perspective, journals carry the responsibility of being ‘gatekeepers’ of knowledge quality, striving to transmit valuable information from the current state of science (State 1) to the next (State 2) with minimal noise (Vuong and Nguyen 2024b).
However, these evaluation and filtering processes are not infallible (Siler et al. 2015). Rejections are not always based solely on a paper's quality; editorial and logistical constraints, strategic and policy considerations, ethical and political factors, and the capabilities and subjectivity of editors and reviewers also influence them. Editorial and logistical constraints, such as a shortage of available reviewers, high submission backlogs, and editors' lack of expertise, can limit a journal's ability to effectively process, evaluate, and disseminate knowledge to the right audience, leading to rejection. For strategic and policy decisions or preferences, some journals prioritize papers they expect to generate high citation counts, potentially sidelining rigorous but less ‘trendy’ research, let alone the controversial or hard-to-interpret. Additionally, there also exist biases toward well-known researchers or institutions, creating barriers for early-career and developing-country researchers seeking to publish their work (Kulal et al. 2025; Teplitskiy et al. 2022).
Such strategic and policy decisions, along with inherent biases, stem from the commercialization of science into market mechanisms, in which the knowledge production process is dominantly shaped by a publishing-led model of science. In this system, as Pattinson and Currie (2025) note, ‘behaviours and actions that benefit publishing are rewarded, whether or not they benefit science—and in some cases even if they are to its detriment’. This dynamic can distort evaluation and filtering processes by aligning them with what is profitable for publishers, with the top five companies controlling more than 61% of the publishing market in 2022 (Crotty 2023). Importantly, these pressures are not confined to commercial publishers alone; even non-commercial and society-led publishers face strong incentives to optimise publishing operations in order to sustain their existence and development within this system (Pattinson and Currie 2025).
Besides the systematic reasons making the evaluating and filtering process fallible, editors and reviewers are not immune to limitations, subjectivity, or bias (Rubin et al. 2023; Smith 2006; Srivastava et al. 2024). No matter how rigorous the guidelines, they are still human, with inherent blind spots and intellectual constraints. A study or theory that challenges the prevailing paradigm may be dismissed by those who are deeply invested in maintaining the status quo (Macdonald 2016). Editors may unconsciously favor work that aligns with their expertise and worldview while viewing unfamiliar or unconventional ideas with skepticism. Additionally, if a manuscript criticizes the work of influential figures on the journal's editorial board or addresses politically sensitive topics, it may be rejected not due to a lack of merit, but to avoid controversy.
As a result, valuable research may be rejected—not due to major flaws, but because journals must manage limited resources, uphold their brand and prestige, and, at times, avoid publishing works that do not align with existing knowledge frameworks or the expectations of ‘gatekeepers’. In short, countless rejections in history have been unjustifiable, including those manuscripts that would later on deserve a Nobel prize.
Nevertheless, for individual researchers, journal rejections are more than just filtering mechanisms—they often carry significant emotional and career consequences. Studies have shown that many academics perceive manuscript rejection as a personal failure, experiencing negative emotions such as shame, disillusionment, and self-doubt (Woolley and Barron 2009). Repeated rejections can erode confidence, exacerbate impostor syndrome, reduce creativity and productivity, and even lead some to consider leaving academia (Day 2011; Hoover and Lucas 2024; Jaremka et al. 2020). This human aspect underscores the responsibility of journals to handle rejections with care and transparency. A decision letter that lacks clear reasoning—or is unduly harsh in tone—can amplify confusion and resentment. Although the rejection process is intended to filter out specific units of information—the submitted paper—rather than evaluating the researcher's competence, knowledge, research direction, or approach, ambiguous rejection decisions create uncertainty about the reasons for non-acceptance. This uncertainty can challenge the author's self-esteem, professional identity, and career resilience (Horn 2016; Walker 2019).
Therefore, when manuscripts are rejected for editorial or logistical reasons, strategic or policy considerations, or biases without clear explanation, it places an unfair burden on authors by leading them to question the quality of their work rather than recognizing the underlying constraints (e.g., journal scope, availability of reviewers, editorial workload, logistical limitations) and preferences (e.g., perceived fit, novelty, trendy topics, author's reputation, citation potential, biases) of the journal.
To better understand the types of information that journals provide when making rejection decisions, we compiled and analyzed 304 rejection letters received by our team since 2022. These letters resulted from the submission of 65 manuscripts—including both research and perspective articles—to 241 different journals.
Among these, desk rejections (Type A) were the most prevalent, accounting for 87.5% (266 letters) of the total. For Type B and Type C rejections, editors generally base their decisions on both their assessments and reviewers' evaluations, providing clear and specific reasons for rejection. In contrast, Type A rejection letters lacked clarity, often offering vague or generalized explanations.
Among the 266 desk-rejection letters, a large proportion cited generic reasons: 40.60% (108 letters) simply stated that the manuscript did not meet the journal's criteria, while 18.8% (50 letters) mentioned the strict evaluation process and low acceptance rate of the journal as the reason for rejection. Such tautological explanations function less as genuine reasons than as reassurances that rejection is a common outcome, offering little to no useful insight for authors. Some journals provided more specific feedback, such as the manuscript is out of scope (99 letters, accounting for 37.22%) or lacking novelty/significance (55 letters, accounting for 20.68%), yet even in these cases, the reasoning remained ambiguous—41.41% of letters citing scope mismatch failed to specify why the manuscript was out of scope, and 47.27% of letters rejecting for lack of novelty/significance did not clarify what aspects were insufficient.
In contrast to the high percentage of vague rejection letters attributing more or less the rejection decision to researchers' papers, only a small fraction of letters attributed rejections to journal-side limitations—just 2.63% cited a lack of suitable reviewers, 0.75% mentioned a high submission backlog, and only 0.38% indicated that the journal lacked the relevant expertise to assess the manuscript.
When selecting journals for submission, we primarily relied on keyword matches between our papers and the journal's aims and scope, along with recommendations from Scimago for journals in the same field. While it is acknowledged that some submissions may fall outside a journal's scope or have certain weaknesses, the claim that over 97% of rejections were solely due to authors' shortcomings or the journal's rigorous evaluation standards appears unconvincing.
Although these figures cannot lead to definitive conclusions, they suggest that journals tend to position themselves as the standard of quality, implicitly framing rejected research as inherently unqualified. This tendency inherently shifts the burden of rejection and its negative consequences onto authors. Given that the publishing model and editors are also subject to limitations, subjectivity, and biases, it is worth questioning whether the current rejection mechanism is functioning properly and fairly when it imposes an undue burden on authors—the individuals who are the main producers of knowledge—and considers this a normal ‘healthy’ process (Macdonald 2016). Moreover, when promising papers are rejected and never resubmitted, valuable insights are lost to the scientific record. Should the authors also be held accountable for this loss of knowledge and the wasted resources resulting from such neglect? (Vuong 2018).
Given the challenges discussed, one key recommendation is to foster a culture of knowledge co-production in the publishing system. In this co-production culture, editors should see themselves as facilitators of knowledge generation and dissemination, collaborating with authors to advance humanity's understanding of the world, rather than as gatekeepers of science who try to impose 'prestigious' standards on researchers. As facilitators, editors' roles should be to increase the probability of storing and disseminating reliable and useful knowledge, to support authors in refining and polishing newly generated insights, and to ensure that knowledge reaches the right people: those who can recognise and maximise its value and usefulness.
A key prerequisite for fostering a culture of co-production in scientific publishing is embracing intellectual humility in the evaluation and decision-making process (Vuong and Nguyen 2024b). Intellectual humility requires editors not only to approach each manuscript with openness—recognising its potential merit even if it challenges their prior beliefs or expertise—but also to be honest about their own limitations. Transparently communicating these limitations to authors (e.g., difficulty securing qualified reviewers, lack of relevant expertise, high submission backlog) is a clear demonstration of humility and professional integrity.
Rejections are certainly not pleasant, but they can be made more transparent and constructive (Vuong 2020; Vuong and Nguyen 2024b). Such a rejection—one that explains the decision and offers guidance—can reduce the stigma and frustration discouraging researchers from pursuing new ideas and can be perceived as part of professional growth, helping researchers refine their work and navigate the publishing landscape more effectively. Thus, transparently communicating the journals' limitations in assessing scientific studies should be widely embraced and endorsed by the scientific community, as it reinforces the role of editors as true facilitators of knowledge production. By ensuring that promising scientific ideas are not prematurely dismissed and by alleviating the undue burden of rejection on authors, editors as facilitators can contribute to a more equitable and progressive scholarly ecosystem. Investigating the systematic templates and guidelines used for rejection letters across journals and publishers could provide valuable insights into current practices, thereby informing future efforts to enhance the transparency and informativeness of rejection letters.
To foster intellectual humility, editors and reviewers need to be trained to acquire a thinking capability similar to the Nature Quotient (NQ), a kind of intelligence that enables humans to perceive, process, and organise information about ecological interdependences and dynamic interactions among complex ecosystems (Vuong and Nguyen 2025). With such capabilities, they would be more likely to see themselves as part of a scholarly publishing ecosystem that operates through the direct interactions of authors, editors, and reviewers, as well as the indirect involvement of governments, funders, institutions, and the public, rather than positioning themselves as superior authorities over authors. In fact, several proposed and implemented publishing paradigms and models reflect this ecosystemic vision of knowledge production, in which editors, reviewers, and authors co-produce knowledge. Examples include the science-led publishing paradigm and the publish–review–curate (PRC) model (Corker et al. 2024; Pattinson and Currie 2025). Within these paradigms, reviewer recommendations function as advisory inputs that assess the strengths and weaknesses of a paper, while editors provide expertise, guidance, and facilitation to coordinate the review process and curate knowledge.
Some scholars may argue that journals implementing such paradigms and models (such as eLife, MetaROR, and Lifecycle Journal) are less rigorous than conventional ones. Determining the precise effectiveness of these new paradigms and models will require time, experimentation, and validation. Nevertheless, their emergence creates new channels through which disruptive and breakthrough research can be communicated, reducing the risk of information loss inherent in conventional publishing models (Vuong and Nguyen 2024b). This value is particularly critical today, as the risk of valuable knowledge being lost is increasing with the rapid growth of research outputs, while conventional publishing models impose additional layers of quality control (e.g., AI-based methods). It is perhaps not coincidental that the science-led publishing paradigm and the publish–review–curate model bear some resemblance to the editorial approach of Annalen der Physik, the journal in which Albert Einstein published his four groundbreaking papers in 1905, whose acceptance rate reached as high as 90%–95%. The editor, physicist Max Planck, once remarked that his editorial philosophy was 'to shun much more the reproach of having suppressed strange opinions than that of having been too gentle in evaluating them' (Spicer and Roulet 2014). From the perspective of humanity as a whole, ensuring that even a single paper with an impact as widespread as Einstein's is not forgotten or lost to obscurity would, in itself, more than justify the existence and value of a journal and its underlying model or paradigm.
In addition, journals should emphasise in editorial training that novelty should not be conflated with a lack of quality. Editors should be encouraged to distinguish between ‘this result is surprising or challenges expectations’ and ‘this result is invalid’. Additionally, they should regularly ask themselves, ‘Am I capable of assessing these unfamiliar results or ideas?’ Likewise, editors can actively seek diverse opinions, especially for papers that challenge mainstream thought. When rejecting a submission, editors can also take a more constructive approach by suggesting alternative venues where the work may be more appropriately received. Such practices help keep valuable research in circulation, increasing its chances of eventually finding a home and contributing to the broader scientific discourse.
In conclusion, while editors play a crucial role in reducing uncertainty and upholding quality in the knowledge production process, they are also subject to biases and limitations in expertise. Based on our 304 recorded rejection letters, we found that over 97% of rejections were attributed to shortcomings on the part of the researchers. This pattern suggests that journals often position themselves as the standard of quality, implicitly framing rejected research as inherently unqualified. This practice disproportionately shifts the burden and emotional toll of rejection onto authors, discouraging them from pursuing bold, innovative ideas and, in some cases, even pushing them to leave academia. In addition, appealing post-review rejection decisions has rarely proved practical.
To address this issue, we advocate for a co-production culture within the publishing system, one that recasts editors not as gatekeepers but as facilitators of knowledge production. By institutionalising the value of intellectual humility in such a culture, journals can minimise the risk of dismissing valuable knowledge simply because it does not conform to existing paradigms. At the same time, they can help mitigate the disproportionate stress that rejections impose on researchers, ultimately fostering a more equitable and dynamic scientific ecosystem.
M.-H.N. and Q.-H.V.: conceptualization, writing – review and editing. M.-H.N.: formal analysis, investigation, resources, writing – original draft preparation. Q.-H.V.: supervision, project administration. All authors have read and agreed to the published version of the manuscript.
The authors have nothing to report.
The authors declare no conflicts of interest.