AI Advent 2025 – Day 17: AI and peer review
Day 17 of 25 – AI and peer review
AI is increasingly appearing in the peer-review ecosystem, from assisting reviewers with summarisation to helping editors triage submissions. In 2025, the challenge is not whether AI will be used in peer review, but how to use it without undermining trust, fairness, and scholarly judgement.
Today's AI insight
AI can support peer review by handling scale and repetition, but it cannot replace the epistemic judgement that review requires. Models are good at spotting patterns: missing citations, unclear structure, statistical inconsistencies. But they lack the disciplinary context needed to assess novelty, significance, or theoretical contribution.
Used carefully, AI tools can act as reviewer aids, not reviewers: flagging potential issues, summarising long manuscripts, or highlighting sections that warrant closer human attention.
Why this matters
Peer review is a cornerstone of scientific credibility. Fully automating it risks reinforcing existing biases, penalising unconventional work, and creating opaque decision processes that authors cannot challenge.
At the same time, reviewer overload is real. Submission volumes continue to rise, and delays undermine both careers and the pace of science. AI can help reduce cognitive load without displacing human responsibility, but only if roles and limits are explicit.
A simple example
An editor might use AI to:
- Generate a concise summary of a long submission
- Flag potential ethical concerns, missing data availability statements, or unclear methodology
- Identify overlapping text with prior publications
Human reviewers then evaluate scientific merit, assess the strength of evidence, and make the final judgement. The AI's role is assistive and auditable, not decisive.
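A minimal sketch of what such an assistive step could look like, assuming a hypothetical summarise_with_llm hook for whatever model you use; the keyword checks are illustrative heuristics, not part of any specific editorial tool:

```python
# Illustrative sketch of an assistive triage step (assumptions, not a real editorial system).
# `summarise_with_llm` is a placeholder hook; the keyword checks below only flag items
# for a human editor to verify.

import re

STATEMENT_PATTERNS = {
    "data availability": r"data availab|available upon request",
    "ethics approval": r"ethics approval|institutional review board|\bIRB\b",
    "code availability": r"code is available|github\.com|zenodo",
}

def summarise_with_llm(text: str) -> str:
    """Placeholder: call your preferred model here and return a short summary."""
    return "(summary generated by the model would go here)"

def triage_report(manuscript: str) -> dict:
    """Build an assistive report: a summary plus flags a human editor reviews."""
    flags = {
        name: bool(re.search(pattern, manuscript, re.IGNORECASE))
        for name, pattern in STATEMENT_PATTERNS.items()
    }
    return {
        "summary": summarise_with_llm(manuscript),
        "statements_found": flags,  # a missing statement is a prompt to look, not a verdict
        "decision": None,           # always left to human reviewers and editors
    }

if __name__ == "__main__":
    example = "We present a new method. Data are available upon request."
    print(triage_report(example))
```

The design point is that the output is a set of prompts for attention: nothing in the report encodes accept or reject.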
Try this today
- If you review papers, experiment with AI to summarise or structure your own notes, but write the judgement yourself.
- If you edit or manage review workflows, define clear boundaries for AI use and communicate them transparently to authors and reviewers.
- Document where AI tools were used, so decisions remain explainable and defensible (a minimal record sketch follows below).
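One lightweight way to keep that record, assuming a simple JSON-lines log stored alongside the editorial file; the field names and manuscript ID are illustrative, not a standard:

```python
# Illustrative audit record for AI use in a review workflow (field names are assumptions).
import json
from datetime import datetime, timezone

def log_ai_use(path: str, manuscript_id: str, tool: str, purpose: str, reviewed_by: str) -> None:
    """Append one JSON line per AI-assisted step so the decision trail stays auditable."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "manuscript_id": manuscript_id,
        "tool": tool,                # e.g. the model or service used
        "purpose": purpose,          # e.g. "summary", "similarity check"
        "reviewed_by": reviewed_by,  # the human who checked and owned the output
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use("ai_use_log.jsonl", "MS-2025-0417", "summarisation assistant", "manuscript summary", "handling editor")
```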
Reflection
Peer review depends on accountability and reasoned disagreement, not just pattern recognition. In 2025, the most responsible use of AI in peer review strengthens human judgement rather than replacing it, helping the system scale while preserving the values that make science trustworthy.