Fair Image Generation from Pre-trained Models by Probabilistic Modeling

Mahdi Ahmadi, John Leland, Agneet Chatterjee, and YooJung Choi.
In the Safe Generative AI Workshop at NeurIPS, 2024.

Abstract

The ability of generative models to produce high-fidelity images has been transformative for artificial intelligence. Yet, while the generated images are of high quality, they tend to mirror biases present in the datasets the models are trained on. While there has been an influx of work tackling fair ML broadly, existing work on fair image generation typically relies on modifying the model architecture or fine-tuning an existing generative model, both of which require costly retraining. In this paper, we use a family of tractable probabilistic models called probabilistic circuits (PCs), which can be attached to a pre-trained generative model to produce fair images without retraining it. We show that, for a given pre-trained generative model, our method requires only a small fair reference dataset to train the PC, removing the need to collect a large (fair) dataset to retrain the generative model. Our experimental results show that the proposed method strikes a balance between training cost and the fairness and quality of the generated images.
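
To make the idea concrete, the sketch below illustrates the general flavor of post-hoc fair generation, not the paper's actual method: a simple categorical model over a single sensitive attribute is fit on a small fair reference set and used to reweight samples from a biased pre-trained generator via rejection sampling. In the paper this modeling role is played by a probabilistic circuit, which tractably handles many attributes; all names here (pretrained_generator, fair_reference) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pre-trained generator: it returns an
# "image" together with a sensitive attribute (0/1), and its attribute
# distribution is deliberately biased.
def pretrained_generator():
    attr = rng.choice([0, 1], p=[0.9, 0.1])   # biased source distribution
    image = rng.standard_normal((64, 64))      # placeholder image
    return image, attr

# A small fair reference dataset of attribute labels (assumed balanced).
fair_reference = np.array([0, 1] * 50)

# Fit a categorical distribution over the attribute from the reference
# set; a PC would play this role for richer attribute spaces.
target_p = np.bincount(fair_reference, minlength=2) / len(fair_reference)

# Estimate the generator's own attribute distribution empirically.
source_attrs = np.array([pretrained_generator()[1] for _ in range(5000)])
source_p = np.bincount(source_attrs, minlength=2) / len(source_attrs)

# Rejection sampling: accept each sample with probability proportional
# to target_p[attr] / source_p[attr], so accepted samples follow target_p.
ratio = target_p / source_p
accept_prob = ratio / ratio.max()

def fair_sample():
    while True:
        image, attr = pretrained_generator()
        if rng.random() < accept_prob[attr]:
            return image, attr

samples = [fair_sample()[1] for _ in range(2000)]
print("attribute frequencies after reweighting:",
      np.bincount(samples, minlength=2) / len(samples))

Note that rejection sampling leaves individual images untouched, so image quality is inherited from the pre-trained generator; only the attribute distribution of the accepted set changes, and only the small reference set is needed to fit the reweighting model.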

Citation

@inproceedings{AhmadiSGAI24,
  author    = {Ahmadi, Mahdi and Leland, John and Chatterjee, Agneet and Choi, YooJung},
  title     = {Fair Image Generation from Pre-trained Models by Probabilistic Modeling},
  booktitle = {Safe Generative AI Workshop},
  month     = {dec},
  year      = {2024},
}