Editorial Feature

How is Science Rebuilding Trust in Published Research?

A wave of unease now surrounds the credibility of published science, as retractions, editorial resignations, and disputes over peer review expose structural weaknesses. A journal system once regarded as the foundation of reliable knowledge is under increasing scrutiny.1


Image Credit: KorArkaR/Shutterstock.com

Despite decades of discussion around reproducibility, many of the same problems, such as selective reporting, publication bias, and overworked peer reviewers, persist. The question is no longer whether there is a problem, but how to rebuild confidence in the published record.

The Latest Controversies

The past few years have seen a worrying surge in scandals in academic publishing. In a recent example, Frontiers retracted over 100 articles after uncovering evidence of “systematic manipulation” in the peer review process. 

Around the same time, several editors at Neuroscience Letters resigned over what they described as “publisher interference” in editorial decisions. Nature Communications and Cell faced criticism for publishing studies later shown to be statistically flawed or irreproducible.2,3

Even flagship publishers are not immune. In 2024, Elsevier faced backlash when mass retractions across several of its journals pointed to coordinated paper mills: networks that produce fake or plagiarized research for profit.

Meanwhile, the STM Integrity Hub and Retraction Watch continued to document a steady climb in retractions, from a few hundred per year in the early 2000s to over 10,000 annually today.4

These incidents have weakened public faith and intensified internal debate about whether the modern publication system is serving science or distorting it.


Why Problems Persist: Structural Incentives Behind Unreliability

The reproducibility crisis is not new. The Open Science Collaboration’s 2015 investigation, which found that only 39% of psychology findings replicated, exposed deep issues that had been known for years. Yet systemic change has been slow.

The persistence of unreliable research is driven by three interconnected forces: incentives, workload, and culture.5

In academia, careers are often built on publication counts and the prestige that accompanies journal publication. The more papers a researcher produces and the more high-impact journals they appear in, the better their chances of funding and promotion. This often pressures scientists to favour quantity over quality, sometimes rushing experiments and selectively reporting positive results.1

On top of this, journals frequently rely on unpaid reviewers who are often overburdened, underrecognized, and given tight deadlines, leading to uneven scrutiny.

With an exponential rise in submissions, estimated at over three million papers annually, journals sometimes accept work without rigorous checks.

The rise of special issues and guest-edited collections, some of which are coordinated by external vendors, has only amplified these risks.1

On top of these factors, even when problems are identified, change remains slow. Researchers and publishers stand to gain little from transparency; instead, they risk reputational damage or a loss of prestige. And despite being a natural part of scientific self-correction, retractions remain stigmatized.

The Role of Journals, Publishers, and Institutions

A woman looking through a stack of papers. Image Credit: nampix/Shutterstock.com

While individual researchers bear responsibility, the structural enablers of unreliable science lie deeper within the academic ecosystem.

Journals play a major role as arbiters of credibility; however, they are often incentivized by volume and visibility rather than reliability, since publishing is, after all, a business. Many operate on an “author-pays” model, where revenue grows in proportion to the number of accepted papers. Others chase sensational or counterintuitive findings that drive media attention and citations, even if those results prove fragile.6

Publishers also face the tension between commercial interests and editorial integrity. Large conglomerates, such as Elsevier, Springer Nature, and Wiley, manage thousands of journals with varying quality controls. Scandals involving “paper mills” and fake peer reviewers suggest that oversight mechanisms have not kept pace with operational scale.6

Academic institutions also share responsibility. Hiring and tenure committees still rely heavily on metrics like impact factor or h-index, rewarding behaviours that inflate publication counts, rather than rewarding reliability.

Promising Reforms: Rethinking How We Publish and Review

Despite the challenges, several promising reforms have emerged across disciplines to strengthen the credibility of published research.

Traditional peer review is often opaque, with anonymous reviewers and confidential reports. Advocates argue that transparency may deter misconduct and improve accountability. Open peer review, in which reviews and reviewer identities are published alongside the article, is gaining traction in journals like eLife and F1000Research.7

A hybrid approach is also emerging: journals such as Nature Communications now allow authors to opt in to publish peer-review histories. This “review record” enables readers to evaluate how a paper evolved through critique.7

One of the most impactful innovations has been the Registered Report model. Here, journals review and provisionally accept studies before data collection, based solely on the soundness of the research question and methodology. This reduces publication bias by ensuring that null or negative results are valued equally with positive ones. Over 300 journals now offer this format, including PLOS Biology and Cortex.8

The rise of preprints offers another route to improved credibility: drafts are made public before formal peer review, opening them up to scrutiny from a wider audience.

Platforms such as arXiv, bioRxiv, and medRxiv allow researchers worldwide to comment, critique, and replicate results openly. Some journals and funding agencies even encourage preprint commentary as a legitimate part of scientific discourse.8

Post-publication review platforms offer another opportunity to increase the reliability of the literature. PubPeer, for example, has exposed major errors and misconduct that formal peer review missed. This decentralized oversight can be messy, but it reflects the collaborative and self-correcting spirit of science.8

A growing number of journals now require authors to share data, code, and detailed methodology. The move toward reproducible workflows, supported by platforms such as the Open Science Framework (OSF), helps others validate results independently.8

Some funding agencies, including the National Institutes of Health (NIH), have even introduced data management and sharing mandates to promote transparency. Statistical reforms such as preregistered analysis plans and effect size reporting also aim to reduce questionable research practices and selective inference.8

In addition to these varied routes to stronger research credibility, technological initiatives are also reshaping integrity monitoring. The STM Integrity Hub, launched in 2023, provides cross-publisher tools for detecting paper mills, image manipulation, and duplicate submissions.8


Building a More Trustworthy Scientific Record

While journals and publishers are central to reform, the scientific community as a whole must act collectively.

Researchers need to embrace transparency and humility, treating reproducibility not as an administrative burden but as an ethical obligation.

Institutions must shift incentives away from publication metrics and toward quality, collaboration, and impact. Initiatives like the Declaration on Research Assessment (DORA) encourage universities to evaluate researchers based on the substance of their contributions, rather than journal prestige.9

Funders can also help to drive change through policy. The Wellcome Trust, the European Research Council, and the NIH have begun requiring open data and preregistration for funded studies. Expanding such mandates can help to normalize good practice across disciplines.9

Conclusion: A Return to Credibility

Science has always evolved through challenge, and today’s credibility crisis is part of that same process of renewal.

Across labs and publishing houses, researchers, editors, and funders are working to restore faith in how knowledge is shared by making openness, rigour, and accountability the norm rather than the exception. The future of publishing will depend on collaboration across the scientific community, ensuring that trust is earned through transparency and that discovery remains anchored in integrity.

References and Further Readings

  1. Calnan, M.; Kirchin, S.; Roberts, D. L.; Wass, M. N.; Michaelis, M., Understanding and Tackling the Reproducibility Crisis–Why We Need to Study Scientists’ Trust in Data. Pharmacological research 2024, 199, 107043.
  2. Bowie, K., Frontiers Retracts over 100 Articles after Uncovering “Peer Review Manipulation Network”. British Medical Journal Publishing Group: 2025.
  3. Price, M. Journal Editors’ Mass Resignation Marks ‘Sad Day for Paleoanthropology’. https://www.science.org/content/article/journal-editors-mass-resignation-marks-sad-day-paleoanthropology.
  4. STM, STM Integrity Hub in Action: A Chronicle. 2025. https://stm-assoc.org/stm-integrity-hub-in-action-a-chronicle/
  5. Swiatkowski, W.; Dompnier, B., Replicability Crisis in Social Psychology: Looking at the Past to Find New Pathways for the Future. International Review of Social Psychology 2017, 30, 111-124.
  6. Guedes, J. J. M., Navigating the Madness of Academic Publishing. 2024.
  7. Henriquez, T., Open Peer Review, Pros and Cons from the Perspective of an Early Career Researcher. American Society for Microbiology 2023; Vol. 14, e01948-23.
  8. Curry, S. et al., Ending Publication Bias: A Values-Based Approach to Surface Null and Negative Results. PLoS Biology 2025, 23, e3003368.
  9. Hatch, A.; Curry, S., Changing How We Evaluate Research Is Difficult, but Not Impossible. Elife 2020, 9, e58654.


Written by

Atif Suhail

Atif is a Ph.D. scholar at the Indian Institute of Technology Roorkee, India. He currently works on halide perovskite nanocrystals for optoelectronic devices, photovoltaics, and energy storage applications. His interests include writing scientific articles in the fields of nanotechnology and materials science, as well as reading journal papers and magazines related to perovskite materials and nanotechnology. His aim is to give every reader an understanding of perovskite nanomaterials for optoelectronics, photovoltaics, and energy storage applications.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Suhail, Atif. (2025, November 11). How is Science Rebuilding Trust in Published Research?. AZoNano. Retrieved on November 11, 2025 from https://www.azonano.com/article.aspx?ArticleID=6969.

