What to Expect of Classifiers? Reasoning about Logistic Regression with Missing Features

Pasha Khosravi, Yitao Liang, YooJung Choi, and Guy Van den Broeck.
In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI), 2019.

PDF  BibTeX  Slides  Code

Abstract

While discriminative classifiers often yield strong predictive performance, missing feature values at prediction time can still be a challenge. Classifiers may not behave as expected under certain ways of substituting the missing values, since they inherently make assumptions about the data distribution they were trained on. In this paper, we propose a novel framework that classifies examples with missing features by computing the expected prediction with respect to a feature distribution. Moreover, we use geometric programming to learn a naive Bayes distribution that embeds a given logistic regression classifier and allows its expected predictions to be computed efficiently. Empirical evaluations show that our model achieves the same performance as logistic regression when all features are observed, and outperforms standard imputation techniques when features go missing at prediction time. Furthermore, we demonstrate that our method can be used to generate “sufficient explanations” of logistic regression classifications, by removing features that do not affect the classification.
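To make the idea of an expected prediction concrete, the following is a minimal sketch (not the paper's method): it averages a logistic regression classifier's output over all completions of the missing binary features, weighting each completion by a fully factorized feature distribution. The paper instead uses a naive Bayes distribution learned via geometric programming, which avoids this exponential enumeration; the function names and the independence assumption here are illustrative only.

```python
import itertools
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def expected_prediction(weights, bias, observed, missing, marginals):
    """Expected sigmoid output of a logistic regression classifier,
    averaging over completions of the missing binary features.

    weights   : dict {feature index: weight}
    observed  : dict {feature index: 0/1} of known feature values
    missing   : list of feature indices whose values are unknown
    marginals : dict {feature index: P(feature = 1)}; a fully factorized
                feature distribution (an illustrative simplification --
                the paper uses naive Bayes and tractable computation).
    """
    total = 0.0
    # Enumerate every 0/1 completion of the missing features.
    for values in itertools.product([0, 1], repeat=len(missing)):
        x = dict(observed)
        prob = 1.0
        for idx, v in zip(missing, values):
            x[idx] = v
            # Probability of this completion under the factorized distribution.
            prob *= marginals[idx] if v == 1 else 1.0 - marginals[idx]
        z = bias + sum(weights[i] * x[i] for i in x)
        total += prob * sigmoid(z)
    return total
```

With no features missing, this reduces to the ordinary logistic regression prediction; with missing features, it returns the prediction's expectation under the assumed feature distribution rather than a prediction on a single imputed completion.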

Citation

@inproceedings{KhosraviIJCAI19,
  author    = {Khosravi, Pasha and Liang, Yitao and Choi, YooJung and Van den Broeck, Guy},
  title     = {What to Expect of Classifiers? Reasoning about Logistic Regression with Missing Features},
  booktitle = {Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI)},
  month     = {aug},
  year      = {2019},
}

Preliminary version appeared in the ICML 2019 Workshop on Tractable Probabilistic Modeling (TPM).