Learning from Pairwise Comparisons Under Preference Reversals

Published in the Workshop on the Many Facets of Preference-Based Learning, ICML 2023

“We consider the problem of learning to rank from pairwise comparisons in the presence of contextual preference reversals. Preference reversal is a phenomenon well studied in the social psychology and cognitive science literature, where users reverse their preference over a pair of alternatives when a carefully chosen third alternative is added to the list from which they must choose. Despite its relevance, this effect has been largely ignored in standard representation learning models for pairwise comparisons. In this work, we propose a flexible pairwise comparison model capable of capturing the preference reversal effect, and we show that the model is rich enough to express the intransitive preference relations that arise from reversals. We develop a coupled, interpretable neural-network-based algorithm that learns item embeddings from pairwise comparisons. The network is interpretable because one part learns the standard transitive, score-based Bradley-Terry-Luce (BTL) model while the other part explicitly learns the preference reversal effect. Experiments on synthetic datasets demonstrate the efficacy of the proposed network against a standard spectral-ranking-based algorithm and a standard deep network, both in prediction accuracy on a held-out dataset and in the ability to capture intransitive relationships.”
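Since the abstract describes a coupled architecture in which one branch learns transitive BTL scores and the other explicitly models a context-dependent reversal term, the following is a minimal sketch of how such a model might be wired up. This is an illustrative assumption, not the authors' implementation: the class name CoupledPreferenceModel, the embedding dimension, and the small MLP used for the reversal branch are all hypothetical choices.

```python
# Illustrative sketch (not the paper's exact architecture): one branch produces
# transitive BTL scores, a second branch adds a context-dependent correction
# that can reverse a pairwise preference.
import torch
import torch.nn as nn

class CoupledPreferenceModel(nn.Module):
    def __init__(self, num_items, embed_dim=16):
        super().__init__()
        # Learned item embeddings shared by both branches.
        self.embed = nn.Embedding(num_items, embed_dim)
        # Branch 1: transitive BTL utility s_i for each item.
        self.score = nn.Linear(embed_dim, 1)
        # Branch 2: reversal term depending on the pair (i, j) and the
        # context item k presented alongside them (hypothetical MLP).
        self.reversal = nn.Sequential(
            nn.Linear(3 * embed_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, i, j, k):
        ei, ej, ek = self.embed(i), self.embed(j), self.embed(k)
        # BTL part: score difference s_i - s_j.
        btl_logit = self.score(ei) - self.score(ej)
        # Reversal part, antisymmetrized in (i, j) so swapping the pair flips its sign.
        rev_logit = (self.reversal(torch.cat([ei, ej, ek], dim=-1))
                     - self.reversal(torch.cat([ej, ei, ek], dim=-1)))
        # P(i preferred over j given context item k).
        return torch.sigmoid(btl_logit + rev_logit)
```

Under this parameterization, the context item k influences the choice only through the reversal branch, so zeroing that branch recovers the plain BTL model; training could minimize binary cross-entropy between the predicted probabilities and observed pairwise choices.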