Split knowledge transfer in learning under privileged information framework

Niharika Gauraha, Fabian Söderdahl, Ola Spjuth
Proceedings of the Eighth Symposium on Conformal and Probabilistic Prediction and Applications, PMLR 105:43-52, 2019.

Abstract

Learning Under Privileged Information (LUPI) enables machine learning models to be trained with additional (privileged) information that is available at training time but not when making predictions. The methodology has been successfully applied to a diverse set of problems from various fields. SVM+ was the first realization of the LUPI paradigm; it showed fast convergence but did not scale well. To address the scalability issue, knowledge transfer approaches were proposed that estimate privileged information from standard features in order to construct improved decision rules. Most available knowledge transfer methods use regression techniques and use the same data both for approximating the privileged features and for learning the transfer function. Inspired by the cross-validation approach, we propose to partition the training data into $K$ folds and use each fold for learning a transfer function and the remaining folds for approximations of privileged features; we refer to this as split knowledge transfer. We evaluate the method using four different experimental setups comprising one synthetic and three real datasets. The results indicate that our approach leads to improved accuracy compared with LUPI using standard knowledge transfer.
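The fold-wise procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a ridge regression as the transfer function, assumes the $K-1$ out-of-fold approximations of each sample's privileged features are averaged, and uses a plain SVM on the concatenation of standard and approximated privileged features as a simplified stand-in for the improved decision rule.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
from sklearn.svm import SVC

def split_knowledge_transfer(X, X_star, y, K=5, seed=0):
    """Sketch of split knowledge transfer (assumptions noted above).

    X      : standard features, shape (n, d)
    X_star : privileged features, shape (n, d_star), training-time only
    y      : labels, shape (n,)
    """
    n = X.shape[0]
    preds = np.zeros_like(X_star, dtype=float)
    counts = np.zeros(n)
    kf = KFold(n_splits=K, shuffle=True, random_state=seed)
    # KFold yields (rest, fold); per the abstract, the transfer function
    # is learned on one fold and applied to the remaining folds.
    for rest_idx, fold_idx in kf.split(X):
        transfer = Ridge(alpha=1.0).fit(X[fold_idx], X_star[fold_idx])
        preds[rest_idx] += transfer.predict(X[rest_idx])
        counts[rest_idx] += 1
    # Each sample is approximated by the K-1 transfer functions whose
    # learning fold did not contain it; average those approximations.
    X_star_hat = preds / counts[:, None]
    # Simplified decision rule: SVM on standard + approximated features.
    clf = SVC(kernel="rbf").fit(np.hstack([X, X_star_hat]), y)
    return clf, X_star_hat
```

At prediction time the privileged features are unavailable, so the same transfer-function machinery (here, the averaged ridge models) would be used to approximate them from the standard features before applying the classifier.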

Cite this Paper


BibTeX
@InProceedings{pmlr-v105-gauraha19a,
  title     = {Split knowledge transfer in learning under privileged information framework},
  author    = {Gauraha, Niharika and S\"oderdahl, Fabian and Spjuth, Ola},
  booktitle = {Proceedings of the Eighth Symposium on Conformal and Probabilistic Prediction and Applications},
  pages     = {43--52},
  year      = {2019},
  editor    = {Gammerman, Alex and Vovk, Vladimir and Luo, Zhiyuan and Smirnov, Evgueni},
  volume    = {105},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Sep},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v105/gauraha19a/gauraha19a.pdf},
  url       = {https://proceedings.mlr.press/v105/gauraha19a.html},
  abstract  = {Learning Under Privileged Information (LUPI) enables the inclusion of additional (privileged) information when training machine learning models, data that is not available when making predictions. The methodology has been successfully applied to a diverse set of problems from various fields. SVM+ was the first realization of the LUPI paradigm which showed fast convergence but did not scale well. To address the scalability issue, knowledge transfer approaches were proposed to estimate privileged information from standard features in order to construct improved decision rules. Most available knowledge transfer methods use regression techniques and the same data for approximating the privileged features as for learning the transfer function. Inspired by the cross-validation approach, we propose to partition the training data into $K$ folds and use each fold for learning a transfer function and the remaining folds for approximations of privileged features—we refer to this as split knowledge transfer. We evaluate the method using four different experimental setups comprising one synthetic and three real datasets. The results indicate that our approach leads to improved accuracy as compared to LUPI with standard knowledge transfer.}
}
Endnote
%0 Conference Paper
%T Split knowledge transfer in learning under privileged information framework
%A Niharika Gauraha
%A Fabian Söderdahl
%A Ola Spjuth
%B Proceedings of the Eighth Symposium on Conformal and Probabilistic Prediction and Applications
%C Proceedings of Machine Learning Research
%D 2019
%E Alex Gammerman
%E Vladimir Vovk
%E Zhiyuan Luo
%E Evgueni Smirnov
%F pmlr-v105-gauraha19a
%I PMLR
%P 43--52
%U https://proceedings.mlr.press/v105/gauraha19a.html
%V 105
%X Learning Under Privileged Information (LUPI) enables the inclusion of additional (privileged) information when training machine learning models, data that is not available when making predictions. The methodology has been successfully applied to a diverse set of problems from various fields. SVM+ was the first realization of the LUPI paradigm which showed fast convergence but did not scale well. To address the scalability issue, knowledge transfer approaches were proposed to estimate privileged information from standard features in order to construct improved decision rules. Most available knowledge transfer methods use regression techniques and the same data for approximating the privileged features as for learning the transfer function. Inspired by the cross-validation approach, we propose to partition the training data into $K$ folds and use each fold for learning a transfer function and the remaining folds for approximations of privileged features—we refer to this as split knowledge transfer. We evaluate the method using four different experimental setups comprising one synthetic and three real datasets. The results indicate that our approach leads to improved accuracy as compared to LUPI with standard knowledge transfer.
APA
Gauraha, N., Söderdahl, F. & Spjuth, O. (2019). Split knowledge transfer in learning under privileged information framework. Proceedings of the Eighth Symposium on Conformal and Probabilistic Prediction and Applications, in Proceedings of Machine Learning Research 105:43-52. Available from https://proceedings.mlr.press/v105/gauraha19a.html.
