From Brittle to Robust: Improving LLM Annotations for SE Optimization
arXiv:2603.22474v1 Announce Type: new
Abstract: Software analytics often builds from labeled data. Labeling can be slow, error-prone, and expensive. When human expertise is scarce, SE researchers sometimes ask large language models (LLMs) for the missing labels. While this has been successful in some domains, recent results show that LLM-based labeling has blind spots. Specifically, their labeling is not effective for higher-dimensional multi-objective problems. To address this gap, we propose a novel LLM prompting strategy called SynthCore. When one opinion fails, SynthCore combines multiple separate opinions generated by LLMs (with no knowledge of each other's answers) into an ensemble of few-shot learners. Simpler than other strategies (e.g., chain-of-thought or multi-agent debate), SynthCore aggregates results from multiple single-prompt sessions (with no crossover between them). SynthCore has been tested on 49 SE multi-objective optimization tasks, handling tasks as diverse as software project management, Makefile configuration, and hyperparameter optimization. SynthCore's ensemble found optimizations that are better than state-of-the-art alternative approaches (Gaussian Process Models, Tree of Parzen Estimators, and active learners in both exploration and exploitation modes). Importantly, these optimizations were made using data labeled by LLMs, without any human opinions. From these experiments, we conclude that ensembles of few-shot learners can successfully annotate high-dimensional multi-objective tasks. Further, we speculate that other successful few-shot prompting results could be quickly and easily enhanced using SynthCore's ensemble approach. To support open science, all our data and scripts are available at https://github.com/lohithsowmiyan/lazy-llm/tree/clusters.
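The core idea described above (independent single-prompt sessions whose labels are aggregated, with no crossover between sessions) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the simulated noisy labeler are hypothetical stand-ins for real LLM calls.

```python
from collections import Counter

def one_session(example, session_id):
    """Hypothetical stand-in for one isolated single-prompt LLM session.

    In practice each session would prompt an LLM with its own few-shot
    examples and no knowledge of the other sessions' answers. Here we
    simulate a noisy binary labeler that errs in every third session.
    """
    if session_id % 3 == 0:          # simulated labeling error
        return 1 - example["true"]
    return example["true"]

def ensemble_label(example, n_sessions=9):
    """Aggregate the independent session votes by simple majority."""
    votes = [one_session(example, s) for s in range(n_sessions)]
    return Counter(votes).most_common(1)[0][0]

# Even though a third of the sessions mislabel, the majority vote recovers
# the intended label.
print(ensemble_label({"true": 1}))   # -> 1
```

The design choice this illustrates is the ensemble's robustness: no single session's opinion is trusted, so occasional labeling errors are outvoted rather than propagated.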