Reviewing the Reviewer: Elevating Peer Review Quality through LLM-Guided Feedback

arXiv:2602.10118v1 Announce Type: new
Abstract: Peer review is central to scientific quality, yet reliance on simple heuristics (so-called lazy thinking) has lowered standards. Prior work treats lazy thinking detection as a single-label task, but review segments may exhibit multiple issues, including broader clarity or specificity problems. Turning detection into actionable improvements requires guideline-aware feedback, which is currently missing. We introduce an LLM-driven framework that decomposes reviews into argumentative segments, identifies issues via a neurosymbolic module combining LLM features with traditional classifiers, and generates targeted feedback using issue-specific templates refined by a genetic algorithm. Experiments show our method outperforms zero-shot LLM baselines and improves review quality by up to 92.4%. We also release LazyReviewPlus, a dataset of 1,309 sentences labeled for lazy thinking and specificity.
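The three-stage pipeline described above (segment the review, detect issues per segment, emit template-based feedback) can be sketched roughly as follows. This is a minimal illustrative mock-up, not the authors' implementation: all names (`segment_review`, `detect_issues`, `FEEDBACK_TEMPLATES`) are assumptions, the LLM-based segmenter is replaced by a naive sentence split, and the neurosymbolic classifier is stubbed with keyword heuristics.

```python
import re

# Stage 1: decompose a review into argumentative segments.
# (A naive sentence split stands in for the paper's LLM-based segmenter.)
def segment_review(review: str) -> list[str]:
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", review) if s.strip()]

# Stage 2: multi-label issue detection. The paper combines LLM features with
# traditional classifiers; this stub uses keyword heuristics as a placeholder.
ISSUE_KEYWORDS = {
    "lazy_thinking": ["not novel", "too simple", "incremental"],
    "specificity": ["unclear", "vague", "lacks detail"],
}

def detect_issues(segment: str) -> list[str]:
    text = segment.lower()
    return [issue for issue, kws in ISSUE_KEYWORDS.items()
            if any(kw in text for kw in kws)]

# Stage 3: issue-specific feedback templates. The paper refines these with a
# genetic algorithm; here they are fixed illustrative strings.
FEEDBACK_TEMPLATES = {
    "lazy_thinking": "Heuristic judgment: justify why '{seg}' applies to this paper.",
    "specificity": "Vague claim: point to concrete sections or examples for '{seg}'.",
}

def review_feedback(review: str) -> list[str]:
    """Run the full pipeline and return one feedback string per detected issue."""
    feedback = []
    for seg in segment_review(review):
        for issue in detect_issues(seg):
            feedback.append(FEEDBACK_TEMPLATES[issue].format(seg=seg))
    return feedback
```

Note that a single segment can trigger multiple templates, reflecting the paper's point that lazy thinking detection is a multi-label problem rather than a single-label one.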
