How is AI used in peer review? Insights from the global reviewer community

19 September 2025

Author: Laura Feetham-Walker, Reviewer Engagement Manager at IOP Publishing

A third of researchers now openly acknowledge using AI tools to support the peer review process. Yet, when those same researchers take on the role of authors, most say they’d feel uncomfortable if AI were used to assess their own work. That’s one of the key findings from our latest survey on peer review in the age of generative AI, completed by 348 researchers.

This contradiction speaks volumes about where we are today: walking the line between using AI to boost efficiency and protecting scientific integrity.

At IOP Publishing, we currently do not permit the use of generative AI to write peer review reports, either fully or in part. At the same time, we acknowledge the potential of AI to support and improve aspects of the peer review process. Given the pace of change in this field, we wanted to take a closer look at how researchers in the physical sciences currently view the use of AI in peer review, and whether there are ways in which we can use AI in peer review in a responsible way.

32% of respondents said they had used generative AI in peer review in some form, with the largest subgroup of that 32% (21% of respondents) using it purely to improve flow and grammar. 13% said they use AI tools to digest or summarise an article under review, which raises serious concerns, particularly around confidentiality and copyright. Only 2% of respondents said they used AI to create a review on their behalf. Although only a very small number of reviewers admitted to using AI to create reviews for them, the majority (57%) would be unhappy if AI were used to write a review of their own manuscript, and 42% would be unhappy if it were used to augment a report. This highlights a clear tension between how AI is being used and how comfortable researchers feel about its role in evaluating their own work.

AI use is increasing

The fact that one in three researchers reported using AI in some aspect of the peer review process reflects a significantly higher adoption rate than previous industry estimates, such as 12% in Wiley’s 2024 global survey and 7–17% in AI conference submissions analysed by Liang et al. This could be a discipline-specific pattern, as our survey focused solely on researchers in the physical sciences, or it could reflect a genuine increase in use over time.

But it’s not just the overall increase in AI usage that’s notable; it’s how researchers are using it. Over half (53%) said they use AI in more than one way in peer review. Editing for grammar and flow was the most common use. More concerning is the proportion (13%) of reviewers who say they upload entire manuscripts into AI chatbots to summarise or digest them. Reviewers are usually required to keep any manuscript they are reviewing strictly confidential, to protect the author’s rights.

AI acceptance

The survey shows that while many reviewers are comfortable using AI when reviewing others’ work, they’re far less accepting of it being used to review their own. 

Only 19% disagreed with the statement, “I would be unhappy if peer reviewers used AI to write their peer review reports on a manuscript that I had co-authored.” This disconnect between AI use as a reviewer and discomfort as an author highlights a significant trust gap in the peer review process.

Why this disconnect? Part of it may stem from a misunderstanding of what AI is actually capable of. Free-text comments suggest that some respondents believe AI can conduct logical and technical analysis. For example: “I do not use AI for peer review. Only reading, checking reference content and reproducibility. Then analysing methods and results.” However, consumer-accessible generative AI chatbots are not capable of this (yet); current large language models are not built for these tasks. When publishers set policies about the use of generative AI in peer review, they should be crystal clear about the shortcomings of these tools and why they are seeking to limit their use in writing peer review reports. Otherwise, we risk encouraging misplaced confidence in AI’s abilities or, conversely, exaggerated fears.

Attitudes are diverging, not settling

Another key finding from our survey is that while AI adoption is rising, views on its future impact are becoming more polarised, not less. Neutrality appears to be giving way to stronger opinions, suggesting that researchers are increasingly aware, but divided, about AI’s role. 41% of respondents predicted that generative AI will have a positive impact on peer review, significantly higher than the 29% we saw in our 2024 State of Peer Review survey.

These divisions in opinion are further shaped by gender and career stage. Women expressed more negative views of AI in peer review, with 41% believing that generative AI would have a negative or very negative impact—compared to 35% of men. This aligns with broader research showing that women tend to engage less with AI tools and express greater concern about their ethical implications.

In contrast, junior researchers were more optimistic than their more senior colleagues. PhD students emerged as the most frequent users of AI tools in academic work, and 48% of junior researchers believed AI would have a positive impact on peer review, compared to just 34% of senior researchers. This may reflect a generational shift, but it could also point to a lack of experience with the peer review process and its critical role in safeguarding the scientific record.

These findings matter. If AI tools are developed without recognising these demographic divides, they risk deepening existing inequities in peer review.

Ethics and transparency are critical

Given that AI tools and their outputs are rapidly changing, it remains a challenge to recognise the specific hallmarks of AI-generated reviews. Many respondents stated that AI-generated peer review reports use generic language and lack the depth of subject knowledge that an expert reviewer can provide. It would be very interesting to see a comparison of AI-generated reviews and those written by human experts to assess their relative quality and to help authors and editors better differentiate between the two.

What should publishers do next?

Many academic publishers have introduced increasingly nuanced policies that reflect the delicate balance between efficiency, innovation, and research integrity. Most policies prohibit uploading all or part of a submitted manuscript to consumer-facing generative AI tools, or using such tools to analyse or evaluate a submission, because of the associated privacy, confidentiality, and integrity concerns. On the other hand, many do permit the use of AI to improve the clarity of reviews, provided the reviewer discloses this use and accepts full accountability for the content of the review.

Peer review is the backbone of scholarly publishing, and like any essential part of research communication, there’s always room to make it more efficient and effective. We recognise the potential of AI to do this. As the adoption of AI continues at pace, we need to make sure that we keep human judgement and trust at the heart of the process. Only when we are able to integrate AI in peer review in a transparent, ethical and inclusive way will this technology enhance rather than undermine the integrity of peer review. 


These views are the authors’ own and do not necessarily reflect the views of UKSG.

This UKSG Editorial is taken from the community newsletter UKSG eNews, published every two weeks exclusively for UKSG members. The newsletter provides up-to-date news of current issues and developments within the global knowledge community.

To enjoy UKSG eNews and other member benefits become a UKSG member.