AI in Peer Review: Fairer or Faster?

A Personal Reflection by Nancy Nyambura on the Conference Discussion Session.

“AI in Peer Review: Promise, Pitfalls, and Practical Pathways” was an interactive session at the Open Science Conference held in Hamburg, Germany, led by Dr. Johanna Havemann and moderated by Nancy Nyambura, both from Access 2 Perspectives, together with Tim Errington of the Center for Open Science.

The session explored how artificial intelligence is transforming the peer review process in scientific publishing, from showcasing emerging AI tools to examining their ethical and practical implications. Through a dynamic exchange of ideas and audience participation, including an engaging “hot seat” segment where attendees were invited on stage to share their views and challenge those of their peers, the discussion unpacked how AI can be responsibly integrated with human expertise to enhance fairness, transparency, and efficiency in peer review.

During this spirited session, I found myself both fascinated and curious about a topic I had heard of but not taken the initiative to explore further. The discussion room was filled with a blend of optimism and uncertainty about AI, a fitting reflection of where the academic community currently stands on this issue.

AI has become the most discussed collaborator in academia, yet it remains on trial, not only for what it can do, but for what it should do.

The Promise of AI

From my perspective, one of the most tangible benefits of integrating AI into peer review is its ability to reduce the reviewer’s workload. Reviewers often face the challenge of balancing multiple commitments; automation can ease this burden by flagging technical inconsistencies, summarizing long manuscripts, or checking references. AI also has the potential to identify and reduce bias in peer review, though we should not forget that AI can inherit bias from its training data. Bias against authors who are women, people of color, or members of marginalized communities has been reported; a case in point is the recent story shared by Shima Moein.

Gender bias in peer review is a topic mentioned in one of my comfort shows, The Big Bang Theory. In one episode, Sheldon, Leonard, and Howard discuss ways to encourage more women and girls to enter STEM fields. Sheldon suggests that one way to counter bias in the peer review process is to submit papers under gender-neutral names, for example S. Smith instead of Samantha Smith, referencing the real-world practice of female professionals using initials to avoid gender prejudice. The question then becomes: how can AI be designed in ways that truly reduce rather than reproduce our own biases? But I digress.

AI-generated summaries, for instance, can help reviewers grasp the essence of a paper in minutes, allowing them to focus their attention where it truly matters, on the originality, logic, and impact of the work. It is not about replacement, but enhancement.

There is also a quiet but important advantage: AI tools can sometimes detect patterns or gaps that humans might overlook. Used thoughtfully, this could support more balanced and equitable evaluations, especially in under-resourced editorial teams.

Where the Boundaries Must Hold

Still, the question remains: are humans fully capable of judging another human’s work, and if not, should machines help us do it?

The consensus, at least for now, is that AI should never make final decisions in the peer review process. Publishers already use AI extensively, but in a responsible, assistive capacity. These systems can help identify potential reviewers, suggest improvements, or check bibliographic accuracy, but they stop short of deciding on acceptance or rejection.

This distinction matters. The peer review process is as much cognitive as it is ethical. Critical thinking, empathy, and accountability, the invisible human elements, cannot be automated. This was a key message echoed by Dr. Jo Havemann during her presentation on the topic.

Responsible Experimentation

I believe the path forward lies in experimentation guided by clear standards. Each tool must be assessed not just for its technical performance, but for how it aligns with scholarly values: transparency, fairness, and integrity.

In practical terms, we should start by defining which aspects of research quality we want AI to evaluate, and where human judgment must remain central. That boundary-setting exercise is, in itself, an act of scholarly responsibility.

A Cautious Optimism

AI, in its current form, is far from mature enough to take over editorial processes. Its social, political, and ethical implications are too complex to ignore. Yet, in the right contexts, especially for small or under-resourced editorial teams, AI can help create space for more thoughtful human engagement.

Ultimately, peer review will always need a human conscience. But if used responsibly, AI could help us redistribute our effort, freeing scholars to think deeper, question better, and perhaps even review with more empathy.

This discussion has deepened my curiosity to continue exploring how AI can be harnessed ethically and effectively across the academic landscape. I invite fellow researchers, reviewers, publishers, and developers to join this ongoing dialogue, because shaping the responsible use of AI in scholarly publishing requires all of us. The path forward calls for collaboration, openness, and shared accountability. If we get it right, AI could become not just a tool for efficiency, but a catalyst for greater fairness and inclusion in research and publishing.

You can watch the episode referenced above here, and maybe have a laugh in the process 😇
