JMIR Publications Blog

Navigating the Intersection of AI and Peer Review: A Guide for Ethical Integration

Written by Tiffany I. Leung, MD, MPH, FACP, FAMIA, FEFIM | Sep 27, 2024 1:30:00 PM

The integration of artificial intelligence (AI) into the peer review process is a rapidly evolving landscape, offering the promise of streamlined efficiency and increased objectivity. However, this exciting development also raises several ethical considerations that reviewers should thoughtfully address and navigate. Most importantly, a reviewer considering the use of generative AI to assist with a peer review should first check the journal's policy on whether this is permissible and, if so, how.

AI's Potential in Peer Review

In general, some of the potential applications of AI in peer review are:

  • Increased efficiency: AI can automate tasks such as plagiarism detection, language correction or translation, and manuscript formatting checks. This frees everyone involved – authors, peer reviewers, editors – to apply the “human touch” to the more complex considerations and tasks that support a manuscript’s review toward potential acceptance and publication.
  • Increased objectivity: AI can potentially assist with certain administrative tasks, such as identifying potential conflicts of interest, flagging related review integrity concerns, and detecting the potential influence of bias on how review content is communicated.
  • Enhanced feedback: AI tools can potentially provide authors with detailed and constructive initial feedback, helping them improve a manuscript before it is evaluated by human reviewers. Authors should always check journal policies on the use of generative AI in developing a manuscript.

Ethical Considerations

  • Adherence to journal policies: It's imperative that reviewers familiarize themselves with and strictly adhere to the journal's specific policies concerning the use of AI tools in the review process, especially given the widespread availability of generative AI.
  • Bias mitigation: AI algorithms, while powerful, can inherit biases present in the data they're trained on. Reviewers must remain vigilant in identifying and mitigating such biases to ensure fair evaluations; accountability of content will always lie with the reviewer in this situation.
  • Transparency and disclosure: Where the use of AI-generated feedback is permitted at all, it should be transparently disclosed to the level of detail the journal's policy requires.

Summary

The integration of AI into peer review holds great promise, but it's essential that everyone involved in scientific publishing – from authors to reviewers to editors and publishers – navigate this new landscape responsibly. AI should complement or augment, not replace, human expertise. All involved bear individual responsibility for adhering to journal policies, mitigating potential biases, maintaining accountability, and prioritizing transparency and disclosure.

Remember: The future of peer review is a collaborative effort. Explore this editorial for insights on JMIR Publications' policy and perspective on current challenges.
