Distinguishing Feature Model for Learning From Pairwise Comparisons

Published in the Workshop on Many Facets of Preference-Based Learning, ICML 2023
Elisha Parhi, Arun Rajkumar

We consider the problem of learning to predict outcomes of unseen pairwise comparisons over a set of items when only a small set of pairwise comparisons is available. When the underlying preferences are intransitive in nature, which is a common occurrence in real-world data sets, this becomes a challenging problem both in terms of modeling and learning. Towards this, we introduce a flexible and natural parametric model for pairwise comparisons that we call the Distinguishing Feature (DF) model. Under this model, the items have unknown but fixed embeddings, and the pairwise comparison between a pair of items depends probabilistically on the feature in the embedding that best distinguishes the items. The proposed DF model generalizes the popular transitive Bradley-Terry-Luce model and, with embedding dimension as low as d = 3, can capture arbitrarily long cyclic dependencies. Furthermore, we explicitly show the type of preference relations that cannot be modelled under the DF model for d = 3. On the algorithmic side, we propose a Siamese-style neural network architecture that can learn to predict well under the DF model while at the same time being interpretable, in the sense that the learnt embeddings can be extracted directly from the learnt model. Our experimental results show that the model is either comparable to or outperforms standard baselines on both synthetic and real-world data sets.
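
To make the model concrete, below is a minimal sketch of how a single comparison could be sampled under a DF-style rule. The abstract does not spell out the exact probabilistic form, so the specifics here are assumptions: the distinguishing feature is taken to be the coordinate where the two embeddings differ the most in absolute value, and the outcome follows a logistic (BTL-style) function of that single coordinate difference. The three toy embeddings are hypothetical and only illustrate how such a rule can produce an intransitive (cyclic) favoured relation with d = 3.

    import numpy as np

    def df_compare(x_i, x_j, rng=None):
        """Sample the outcome of comparing items i and j under a sketch of a
        DF-style rule (assumptions noted in the text above):
          - the distinguishing feature is the coordinate where the two
            embeddings differ the most in absolute value;
          - item i beats item j with logistic probability of that single
            coordinate difference (BTL-like on one feature)."""
        rng = np.random.default_rng() if rng is None else rng
        diff = np.asarray(x_i, dtype=float) - np.asarray(x_j, dtype=float)
        k = int(np.argmax(np.abs(diff)))           # most distinguishing feature
        p_i_wins = 1.0 / (1.0 + np.exp(-diff[k]))  # logistic link on that feature only
        return int(rng.random() < p_i_wins)        # 1 if i beats j, 0 otherwise

    # Hypothetical d = 3 embeddings arranged rock-paper-scissors style:
    # under the rule above, b is favoured over a, c over b, and a over c.
    a = np.array([0.0, 1.0, 2.0])
    b = np.array([2.0, 0.0, 1.0])
    c = np.array([1.0, 2.0, 0.0])

Checking the pairs by hand: a - b = [-2, 1, 1], so the distinguishing feature is the first coordinate and b is favoured; b - c = [1, -2, 1] favours c; c - a = [1, 1, -2] favours a, giving a cycle that no transitive BTL-style model can represent.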
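
The Siamese-style learner can likewise be sketched as follows. This is not the paper's exact architecture; it only illustrates the general shape: a single shared embedding table is applied to both items (so the learnt embeddings are directly readable from the model, matching the interpretability claim), and a soft selection over the most distinguishing coordinate produces the comparison probability. The temperature tau and the soft-argmax head are illustrative assumptions.

    import torch
    import torch.nn as nn

    class SiameseDFSketch(nn.Module):
        """Illustrative Siamese-style network for pairwise comparisons.

        Both items share one embedding table (the "Siamese" part), and the
        head softly selects the coordinate where the two embeddings differ
        the most; a logistic link on that (soft-)selected difference gives
        P(i beats j). The head is an assumption, not the paper's exact choice.
        """
        def __init__(self, n_items: int, dim: int = 3, tau: float = 0.1):
            super().__init__()
            self.embed = nn.Embedding(n_items, dim)  # shared, directly interpretable embeddings
            self.tau = tau                           # temperature of the soft feature selection

        def forward(self, i: torch.Tensor, j: torch.Tensor) -> torch.Tensor:
            diff = self.embed(i) - self.embed(j)              # same weights for both items
            w = torch.softmax(diff.abs() / self.tau, dim=-1)  # soft pick of the most distinguishing feature
            return torch.sigmoid((w * diff).sum(dim=-1))      # P(item i beats item j)

    # Training on observed outcomes y in {0, 1} would use binary cross-entropy, e.g.
    # loss = nn.functional.binary_cross_entropy(model(i_idx, j_idx), y.float())

Because the embedding table is the only item-specific component, the learnt embeddings can be read off the trained model directly, which is the interpretability property the abstract refers to.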