Large Language Models for Departmental Expert Review Quality Scores

arXiv:2601.18945v1 Announce Type: new
Abstract: Presumably, peer reviewers and Large Language Models (LLMs) do very different things when asked to assess research. Still, recent evidence has shown that LLMs have a moderate ability to predict quality scores of published academic journal articles. One untested potential application of LLMs is internal departmental review, which may be used to support appointment and promotion decisions or to select outputs for national assessments. This study assesses for the first time the extent to which (1) LLM quality scores align with internal departmental quality ratings and (2) LLM reports differ from expert reports. On a private dataset of 58 published journal articles from the School of Information at the University of Sheffield, together with internal departmental quality ratings and reports, ChatGPT-4o, ChatGPT-4o mini, and Gemini 2.0 Flash scores correlate positively and moderately with the internal departmental ratings, whether the input is just the title/abstract or the full text. Whilst departmental reviews tended to be more specific and to show field-level knowledge, ChatGPT reports tended to be standardised, more general, and repetitive, and to include unsolicited suggestions for improvement. The results therefore (a) confirm the ability of LLMs to guess the quality scores of published academic research moderately well, (b) confirm that this ability is a guess rather than an evaluation (because it can be made from the title/abstract alone), (c) extend this ability to internal departmental expert review, and (d) show that LLM reports are less insightful than human expert reports for published academic journal articles.
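As a hedged illustration only (the abstract does not specify the correlation measure or data layout), the sketch below shows one common way such an analysis might be run: comparing LLM-generated quality scores against departmental expert ratings with a Spearman rank correlation, which suits ordinal quality scales. All data values and variable names here are hypothetical, not taken from the paper.

```python
# Minimal sketch, not the paper's actual pipeline: correlate hypothetical
# LLM quality scores with departmental expert ratings using Spearman's rho,
# a standard choice when scores are ordinal (e.g. a 1-4 quality scale).
from scipy.stats import spearmanr

# Hypothetical example data; the real study used 58 articles with
# private departmental ratings that are not reproduced here.
departmental_ratings = [3, 4, 2, 3, 4, 1, 3, 2, 4, 3]
llm_scores = [3, 3, 2, 4, 4, 2, 3, 2, 3, 3]

rho, p_value = spearmanr(departmental_ratings, llm_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```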
