
ResponseRank: Data-Efficient Reward Modeling Through Preference Strength Learning

MCML Authors

Abstract

Binary choices, as often used for reinforcement learning from human feedback (RLHF), convey only the direction of a preference. A person may choose apples over oranges and bananas over grapes, but which preference is stronger? Strength is crucial for decision-making under uncertainty and generalization of preference models, but hard to measure reliably. Metadata such as response times and inter-annotator agreement can serve as proxies for strength, but are often noisy and confounded. We propose ResponseRank to address the challenge of learning from noisy strength signals. Our method uses relative differences in these signals to rank responses to pairwise comparisons by their inferred preference strength. Signals are only considered locally within carefully constructed strata, controlling for systematic variation. This enables robust learning of utility differences consistent with strength-derived rankings, all while making minimal assumptions. Our contributions are threefold: (1) ResponseRank, a novel method that robustly learns preference strength by leveraging locally valid relative strength signals; (2) empirical evidence of improved sample efficiency and robustness across diverse tasks: synthetic preference learning (with simulated response times), language modeling (with annotator agreement), and RL control tasks (with simulated episode returns); and (3) the Pearson Distance Correlation (PDC), a novel metric that isolates cardinal utility learning from ordinal accuracy.
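The abstract's starting point, that binary labels convey only the direction of a preference, can be seen in the standard Bradley-Terry reward-modeling objective used in RLHF. Below is a minimal sketch of that generic background objective, not of ResponseRank itself; the function name `bt_nll` and the example utilities are our own illustration:

```python
import math

def bt_nll(u_chosen: float, u_rejected: float) -> float:
    """Negative log-likelihood of a binary preference under the
    Bradley-Terry model: P(chosen > rejected) = sigmoid(u_chosen - u_rejected).
    Only the utility difference enters the loss, and a binary label
    carries no information about how strong the preference was."""
    return math.log1p(math.exp(-(u_chosen - u_rejected)))

# A barely-preferred pair and an indifferent pair both yield a single
# binary label; the objective cannot distinguish preference strength
# beyond what the learned utility gap expresses.
loss_mild = bt_nll(0.6, 0.5)   # small utility gap, loss just below log(2)
loss_tied = bt_nll(0.0, 0.0)   # zero gap, loss equals log(2)
```

Because the loss depends only on the utility difference, auxiliary strength signals (response times, annotator agreement) are needed to recover cardinal structure, which is the gap ResponseRank targets.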

inproceedings KMK+25


NeurIPS 2025

39th Conference on Neural Information Processing Systems. San Diego, CA, USA, Nov 30-Dec 07, 2025. To be published.
A* Conference

Authors

T. Kaufmann • Y. Metz • D. A. Keim • E. Hüllermeier

Links

URL

Research Area

A3 | Computational Models

BibTeXKey: KMK+25
