Ensembling improves stability and power of feature selection for deep learning models

Prashnna K. Gyawali, Xiaoxia Liu, James Zou, Zihuai He
Proceedings of the 17th Machine Learning in Computational Biology meeting, PMLR 200:33-45, 2022.

Abstract

With the growing adoption of deep learning models in different real-world domains, including computational biology, it is often necessary to understand which data features are essential for the model's decision. Despite extensive recent efforts to define feature importance metrics for deep learning models, we identified that the inherent stochasticity in the design and training of deep learning models makes commonly used feature importance scores unstable, resulting in varied explanations or selections of features across different runs of the model. We demonstrate how the signal strength of features and the correlation among features directly contribute to this instability. To address this instability, we explore ensembling the feature importance scores of models across different epochs and find that this simple approach can substantially mitigate the issue. As a concrete setting, we consider knockoff inference, which allows feature selection with statistical guarantees. We discover considerable variability in the features selected at different epochs of deep learning training, and the best selection of features does not necessarily occur at the lowest validation loss, the conventional criterion for choosing the best model. We therefore present a framework that combines the feature importance of trained models across different hyperparameter settings and epochs: instead of selecting features from one best model, we perform an ensemble of feature importance scores from numerous good models. Across a range of experiments on simulated and real-world datasets from the biological domain, we demonstrate that the proposed framework consistently improves the power of feature selection.
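The core idea in the abstract, averaging per-epoch feature importance scores instead of taking them from a single "best" model, can be illustrated with a minimal sketch. This is not the paper's implementation: here a plain logistic regression trained by SGD stands in for the deep model, and the absolute weight magnitudes stand in for the knockoff-based importance scores; the data, seed, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 10 features; only the first 3 carry signal.
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = 2.0
y = (X @ beta + rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression trained by SGD; record an importance score
# (here, |weight|) for every feature at the end of every epoch.
w = np.zeros(p)
lr = 0.1
epoch_importances = []
for epoch in range(50):
    for i in rng.permutation(n):
        grad = (sigmoid(X[i] @ w) - y[i]) * X[i]
        w -= lr * grad
    epoch_importances.append(np.abs(w.copy()))

# Ensembling step: average importance over epochs rather than
# trusting the single epoch with the lowest validation loss.
ensembled = np.mean(epoch_importances, axis=0)
top3 = set(np.argsort(ensembled)[-3:])
print(sorted(top3))
```

Because the score for each feature is averaged over many training states, run-to-run fluctuations in any single epoch's weights are smoothed out, which is the stabilizing effect the paper reports for its (much richer) deep learning and knockoff setting.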

Cite this Paper


BibTeX
@InProceedings{pmlr-v200-gyawali22a,
  title = {Ensembling improves stability and power of feature selection for deep learning models},
  author = {Gyawali, Prashnna K. and Liu, Xiaoxia and Zou, James and He, Zihuai},
  booktitle = {Proceedings of the 17th Machine Learning in Computational Biology meeting},
  pages = {33--45},
  year = {2022},
  editor = {Knowles, David A and Mostafavi, Sara and Lee, Su-In},
  volume = {200},
  series = {Proceedings of Machine Learning Research},
  month = {21--22 Nov},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v200/gyawali22a/gyawali22a.pdf},
  url = {https://proceedings.mlr.press/v200/gyawali22a.html},
  abstract = {With the growing adoption of deep learning models in different real-world domains, including computational biology, it is often necessary to understand which data features are essential for the model’s decision. Despite extensive recent efforts to define different feature importance metrics for deep learning models, we identified that inherent stochasticity in the design and training of deep learning models makes commonly used feature importance scores unstable. This results in varied explanations or selections of different features across different runs of the model. We demonstrate how the signal strength of features and correlation among features directly contribute to this instability. To address this instability, we explore the ensembling of feature importance scores of models across different epochs and find that this simple approach can substantially address this issue. For example, we consider knockoff inference as they allow feature selection with statistical guarantees. We discover considerable variability in selected features in different epochs of deep learning training, and the best selection of features doesn’t necessarily occur at the lowest validation loss, the conventional approach to determine the best model. As such, we present a framework to combine the feature importance of trained models across different hyperparameter settings and epochs, and instead of selecting features from one best model, we perform an ensemble of feature importance scores from numerous good models. Across the range of experiments in simulated and various real-world datasets from the biological domain, we demonstrate that the proposed framework consistently improves the power of feature selection.}
}
Endnote
%0 Conference Paper
%T Ensembling improves stability and power of feature selection for deep learning models
%A Prashnna K. Gyawali
%A Xiaoxia Liu
%A James Zou
%A Zihuai He
%B Proceedings of the 17th Machine Learning in Computational Biology meeting
%C Proceedings of Machine Learning Research
%D 2022
%E David A Knowles
%E Sara Mostafavi
%E Su-In Lee
%F pmlr-v200-gyawali22a
%I PMLR
%P 33--45
%U https://proceedings.mlr.press/v200/gyawali22a.html
%V 200
%X With the growing adoption of deep learning models in different real-world domains, including computational biology, it is often necessary to understand which data features are essential for the model’s decision. Despite extensive recent efforts to define different feature importance metrics for deep learning models, we identified that inherent stochasticity in the design and training of deep learning models makes commonly used feature importance scores unstable. This results in varied explanations or selections of different features across different runs of the model. We demonstrate how the signal strength of features and correlation among features directly contribute to this instability. To address this instability, we explore the ensembling of feature importance scores of models across different epochs and find that this simple approach can substantially address this issue. For example, we consider knockoff inference as they allow feature selection with statistical guarantees. We discover considerable variability in selected features in different epochs of deep learning training, and the best selection of features doesn’t necessarily occur at the lowest validation loss, the conventional approach to determine the best model. As such, we present a framework to combine the feature importance of trained models across different hyperparameter settings and epochs, and instead of selecting features from one best model, we perform an ensemble of feature importance scores from numerous good models. Across the range of experiments in simulated and various real-world datasets from the biological domain, we demonstrate that the proposed framework consistently improves the power of feature selection.
APA
Gyawali, P.K., Liu, X., Zou, J. & He, Z. (2022). Ensembling improves stability and power of feature selection for deep learning models. Proceedings of the 17th Machine Learning in Computational Biology meeting, in Proceedings of Machine Learning Research 200:33-45. Available from https://proceedings.mlr.press/v200/gyawali22a.html.
