Split knowledge transfer in learning under privileged information framework
Proceedings of the Eighth Symposium on Conformal and Probabilistic Prediction and Applications, PMLR 105:43-52, 2019.
Abstract
Learning Under Privileged Information (LUPI)
enables the inclusion of additional (privileged) information when training machine learning models;
this information is not available when making predictions.
The methodology has been successfully applied to a diverse set of problems from various fields.
SVM+, the first realization of the LUPI paradigm, showed fast convergence but did not scale well.
To address the scalability issue, knowledge transfer approaches were proposed
to estimate privileged information from standard features in order to construct improved decision rules.
Most available knowledge transfer methods use regression techniques
and rely on the same data both for learning the transfer function and for approximating the privileged features.
Inspired by the cross-validation approach,
we propose to partition the training data into $K$ folds, using each fold for learning a transfer function
and the remaining folds for approximating the privileged features; we refer to this as split knowledge transfer.
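The fold-wise procedure described above can be sketched as follows. This is an illustrative reading of the abstract, not the paper's implementation: the choice of ridge regression as the transfer function, the averaging of overlapping out-of-fold estimates, and the function name `split_knowledge_transfer` are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def split_knowledge_transfer(X, X_star, n_splits=5, seed=0):
    """Estimate privileged features from standard features via K folds.

    For each fold, a transfer function (here a ridge regression, an
    illustrative choice) is learned on that fold and used to approximate
    the privileged features of the samples in the remaining folds.
    Each sample receives K-1 out-of-fold estimates, which are averaged.

    X      : (n, d) array of standard features
    X_star : (n, p) array of privileged features (training time only)
    """
    n = X.shape[0]
    estimates = np.zeros((n, X_star.shape[1]))
    counts = np.zeros(n)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    # KFold yields (complement, fold); we fit on the fold itself.
    for rest_idx, fold_idx in kf.split(X):
        transfer = Ridge().fit(X[fold_idx], X_star[fold_idx])
        estimates[rest_idx] += transfer.predict(X[rest_idx])
        counts[rest_idx] += 1
    return estimates / counts[:, None]
```

The approximated privileged features returned here would then be used in place of the true privileged features when constructing the improved decision rule, so that no sample's privileged estimate comes from a transfer function trained on that same sample.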
We evaluate the method using four different experimental setups comprising one synthetic and three real datasets.
The results indicate that our approach yields improved accuracy
compared to LUPI with standard knowledge transfer.