Investigating the Contribution of Privileged Information in Knowledge Transfer LUPI by Explainable Machine Learning
Proceedings of the Twelfth Symposium on Conformal
and Probabilistic Prediction with Applications, PMLR 204:470-484, 2023.
Abstract
Learning Under Privileged Information (LUPI) is a
framework that exploits information that is
available during training only, i.e., the privileged
information (PI), to improve the classification of
objects for which this information is not
available. Knowledge transfer LUPI (KT-LUPI) extends
the framework by inferring PI for the test objects
through separate predictive models. Although the
effectiveness of the framework has been thoroughly
demonstrated, existing investigations have offered
only limited insight into which parts of the
transferred PI contribute to the improved
performance. A better understanding of this could
not only lead to computational savings but
potentially also to novel strategies for exploiting
PI. We approach the problem by exploring the use of
explainable machine learning, specifically the
state-of-the-art technique SHAP, to analyze the
contribution of the transferred privileged
information. We present results from experiments
with five classification and three regression
datasets, in which we compare the Shapley values of
the PI computed in two settings: one in which the
PI is assumed to be available during both training
and testing, representing an ideal scenario, and a
second in which the PI is available during training
only but is transferred to the test objects through
KT-LUPI. The results indicate that explainable
machine learning indeed has potential as a tool for
gaining insight into the effectiveness of KT-LUPI.
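The paper computes SHAP values with the SHAP library; as a minimal standard-library sketch of the underlying idea, the snippet below computes exact Shapley values for a toy linear model by enumerating coalitions, and contrasts the attribution of a PI feature when its true value is available (the ideal setting) versus when it is replaced by an estimate from a transfer model (the KT-LUPI setting). The model, weights, and the estimated PI value are all hypothetical illustrations, not taken from the paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.
    Features outside a coalition are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Shapley weight |S|! (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear model: two ordinary features plus one PI feature (index 2).
w = [1.0, 0.5, 2.0]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))
baseline = [0.0, 0.0, 0.0]

x_ideal = [1.0, 2.0, 3.0]   # true PI value available at test time
x_kt    = [1.0, 2.0, 2.6]   # PI replaced by a transfer-model estimate

phi_ideal = shapley_values(f, x_ideal, baseline)
phi_kt    = shapley_values(f, x_kt, baseline)
```

Comparing `phi_ideal[2]` with `phi_kt[2]` mirrors, in miniature, the paper's comparison of the PI's Shapley values across the two settings: a PI feature whose attribution survives the transfer is one that KT-LUPI can exploit. For a linear model these exact values coincide with what SHAP's explainers would report.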