Bias and Uncertainty in LLM-as-a-Judge Estimation

stat.ML updates on arXiv.org
James Fiedler

arXiv:2605.06939v1 Announce Type: cross Abstract: LLM-as-a-Judge (LaaJ) evaluation has become a standard tool for assessing base model performance. However, the naive estimator, i.e., the raw judge outputs, is systematically biased. Recent work has proposed estimators to correct this bias, but their reliability depends critically on judge quality and, for model comparisons, on calibration stability. Sharing calibration across compared models is practically attractive but can introduce severe bias, including cases where the comparison estimate points in the wrong direction with high apparent confidence. We study these failure modes through analytical results, simulations over judge quality ($J$) and cross-model calibration instability ($\Delta J$), and a real-data MMLU-Pro case study exhibiting sign reversal. We propose $J$ and $\Delta J$ as diagnostics for when corrected estimates, especially shared-calibration comparisons, are likely to be unreliable, and provide reporting guidance for LaaJ evaluation.
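
The sketch below illustrates the shared-calibration failure mode the abstract describes, assuming a Rogan-Gladen-style sensitivity/specificity correction as the bias-corrected estimator; the paper's actual estimators, parameter values, and the exact definitions of $J$ and $\Delta J$ may differ, and all numbers here are illustrative.

```python
def observed_rate(p, sens, spec):
    """Expected judge pass rate when the model's true accuracy is p and the
    judge has the given sensitivity/specificity on that model's outputs."""
    return sens * p + (1.0 - spec) * (1.0 - p)

def corrected_estimate(q, sens, spec):
    """Rogan-Gladen-style correction of an observed pass rate q using an
    assumed judge calibration (sens, spec)."""
    return (q + spec - 1.0) / (sens + spec - 1.0)

# True accuracies: model B is genuinely better than model A.
p_A, p_B = 0.70, 0.72

# Judge calibration measured on model A's outputs (Youden-style quality ~0.65)...
sens_A, spec_A = 0.85, 0.80
# ...but the judge behaves differently on model B's outputs (a small Delta J).
sens_B, spec_B = 0.80, 0.78

# Expected raw judge pass rates (the naive estimator).
q_A = observed_rate(p_A, sens_A, spec_A)   # ~0.655
q_B = observed_rate(p_B, sens_B, spec_B)   # ~0.638

# Shared-calibration correction: model A's calibration applied to both models.
hat_A = corrected_estimate(q_A, sens_A, spec_A)   # ~0.700 (recovers the truth)
hat_B = corrected_estimate(q_B, sens_A, spec_A)   # ~0.673 (biased)

print(f"true gap      (B - A): {p_B - p_A:+.3f}")   # +0.020
print(f"naive gap     (B - A): {q_B - q_A:+.3f}")   # -0.017
print(f"corrected gap (B - A): {hat_B - hat_A:+.3f}")  # -0.027
# Both the naive and the shared-calibration corrected comparisons report
# model A as better even though model B is truly better: a sign reversal of
# the kind the abstract attributes to cross-model calibration instability.
```

In this toy setting the correction is exact for model A (its own calibration is used) but inherits model A's calibration for model B, so a modest cross-model shift in judge behavior is enough to flip the sign of the estimated comparison.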