Group Fairness by Probabilistic Modeling with Latent Fair Decisions

YooJung Choi, Meihua Dang, and Guy Van den Broeck.
In Proceedings of the 35th AAAI Conference on Artificial Intelligence, 2021

PDF  BibTeX  Slides  Code


We learn fair classifiers from data containing biased labels by explicitly modeling a hidden, unbiased label and learning a probability distribution with fairness guarantees encoded as independencies.


Machine learning systems are increasingly being used to make impactful decisions such as loan applications and criminal justice risk assessments, and as such, ensuring fairness of these systems is critical. This is often challenging because the labels in the data are biased. This paper studies learning fair probability distributions from biased data by explicitly modeling a latent variable that represents a hidden, unbiased label. In particular, we aim to achieve demographic parity by enforcing certain independencies in the learned model. We also show that group fairness guarantees are meaningful only if the distribution used to provide those guarantees indeed captures the real-world data. In order to closely model the data distribution, we employ probabilistic circuits, an expressive and tractable probabilistic model, and propose an algorithm to learn them from incomplete data. We show on real-world datasets that our approach is not only a better model of how the data was generated than existing methods, but also achieves competitive accuracy. Moreover, we evaluate our approach on a synthetic dataset in which observed labels indeed come from fair labels but with added bias, and demonstrate that the fair labels are successfully retrieved.
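As a minimal illustration of the demographic parity notion mentioned above (not the paper's code, and not using probabilistic circuits): demographic parity holds when the decision variable D is independent of the sensitive attribute S, i.e. P(D=1 | S=s) is the same for every group s. The helper name and the toy joint distribution below are hypothetical, chosen only to make the independence condition concrete.

```python
# Hypothetical sketch: demographic parity as an independence constraint.
# A distribution satisfies demographic parity when the decision D is
# independent of the sensitive attribute S, so P(D=1|S=0) == P(D=1|S=1).

def demographic_parity_gap(joint):
    """joint maps (s, d) pairs to probabilities over binary S and D;
    returns |P(D=1 | S=1) - P(D=1 | S=0)|, which is 0 under parity."""
    p_s = {s: joint[(s, 0)] + joint[(s, 1)] for s in (0, 1)}
    p_d1_given_s = {s: joint[(s, 1)] / p_s[s] for s in (0, 1)}
    return abs(p_d1_given_s[1] - p_d1_given_s[0])

# Toy distribution where D is independent of S: P(D=1|S=s) = 0.6 for both
# groups, so the gap is (numerically) zero.
fair = {(0, 0): 0.28, (0, 1): 0.42, (1, 0): 0.12, (1, 1): 0.18}
print(demographic_parity_gap(fair))

# Toy biased distribution: P(D=1|S=0) = 0.5 but P(D=1|S=1) = 0.8.
biased = {(0, 0): 0.35, (0, 1): 0.35, (1, 0): 0.06, (1, 1): 0.24}
print(demographic_parity_gap(biased))
```

In the paper's setting this independence is enforced between the sensitive attribute and the latent fair decision variable, and is encoded structurally in the learned probabilistic circuit rather than checked after the fact as in this sketch.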


@inproceedings{choi2021group,
	author    = {Choi, YooJung and Dang, Meihua and Van den Broeck, Guy},
	title     = {Group Fairness by Probabilistic Modeling with Latent Fair Decisions},
	booktitle = {Proceedings of the 35th AAAI Conference on Artificial Intelligence},
	month     = {Feb},
	year      = {2021},
}

Preliminary version appeared in the NeurIPS 2020 Workshop on Algorithmic Fairness through the Lens of Causality and Interpretability (AFCI).