How to Train Your Multi-Exit Model? Analyzing the Impact of Training Strategies

Piotr Kubaty, Bartosz Wójcik, Bartłomiej Tomasz Krzepkowski, Monika Michaluk, Tomasz Trzcinski, Jary Pomponi, Kamil Adamczewski
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:31821-31840, 2025.

Abstract

Early exits enable the network’s forward pass to terminate early by attaching trainable internal classifiers to the backbone network. Existing early-exit methods typically adopt either a joint training approach, where the backbone and exit heads are trained simultaneously, or a disjoint approach, where the heads are trained separately. However, the implications of this choice are often overlooked, with studies typically adopting one approach without adequate justification. This choice influences training dynamics, and its impact remains largely unexplored. In this paper, we introduce a set of metrics to analyze early-exit training dynamics and guide the choice of training strategy. We demonstrate that the conventionally used joint and disjoint regimes yield suboptimal performance. To address these limitations, we propose a mixed training strategy: the backbone is trained first, followed by the training of the entire multi-exit network. Through comprehensive evaluations of training strategies across various architectures, datasets, and early-exit methods, we present the strengths and weaknesses of each early-exit training strategy. In particular, we show consistent improvements in performance and efficiency using the proposed mixed strategy.
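To make the three regimes concrete, below is a minimal PyTorch sketch of a multi-exit network trained under the proposed mixed strategy, with the joint and disjoint regimes noted for comparison. The toy backbone, exit heads, hyperparameters, and random data are illustrative assumptions for this summary, not the authors' implementation.

# Minimal sketch of the training regimes discussed in the abstract
# (joint, disjoint, and the proposed mixed strategy). The tiny backbone,
# exit heads, and random data are placeholders, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    def __init__(self, width=32, num_classes=10, num_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(width, width), nn.ReLU()) for _ in range(num_blocks)]
        )
        # one internal classifier (early exit) attached after each block
        self.exits = nn.ModuleList(
            [nn.Linear(width, num_classes) for _ in range(num_blocks)]
        )

    def forward(self, x):
        logits = []
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            logits.append(exit_head(x))
        return logits  # the last entry acts as the final (backbone) classifier

def train(params, epochs, model, data, use_exits):
    opt = torch.optim.SGD(params, lr=0.1)
    for _ in range(epochs):
        for x, y in data:
            logits = model(x)
            if use_exits:
                # multi-exit objective: sum of losses over all heads
                loss = sum(F.cross_entropy(l, y) for l in logits)
            else:
                # backbone-only objective: final classifier alone
                loss = F.cross_entropy(logits[-1], y)
            opt.zero_grad()
            loss.backward()
            opt.step()

# toy data: 8 random mini-batches of 16 examples each
data = [(torch.randn(16, 32), torch.randint(0, 10, (16,))) for _ in range(8)]
model = MultiExitNet()

# Mixed strategy (as proposed): first train the backbone alone,
# then train the entire multi-exit network end to end.
train(model.parameters(), epochs=2, model=model, data=data, use_exits=False)
train(model.parameters(), epochs=2, model=model, data=data, use_exits=True)

# For comparison: the joint regime is a single phase with use_exits=True over all
# parameters; the disjoint regime trains the backbone first, then freezes it and
# optimizes only model.exits.parameters() with use_exits=True.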

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-kubaty25a,
  title     = {How to Train Your Multi-Exit Model? {A}nalyzing the Impact of Training Strategies},
  author    = {Kubaty, Piotr and W\'{o}jcik, Bartosz and Krzepkowski, Bart{\l}omiej Tomasz and Michaluk, Monika and Trzcinski, Tomasz and Pomponi, Jary and Adamczewski, Kamil},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {31821--31840},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/kubaty25a/kubaty25a.pdf},
  url       = {https://proceedings.mlr.press/v267/kubaty25a.html},
  abstract  = {Early exits enable the network’s forward pass to terminate early by attaching trainable internal classifiers to the backbone network. Existing early-exit methods typically adopt either a joint training approach, where the backbone and exit heads are trained simultaneously, or a disjoint approach, where the heads are trained separately. However, the implications of this choice are often overlooked, with studies typically adopting one approach without adequate justification. This choice influences training dynamics and its impact remains largely unexplored. In this paper, we introduce a set of metrics to analyze early-exit training dynamics and guide the choice of training strategy. We demonstrate that conventionally used joint and disjoint regimes yield suboptimal performance. To address these limitations, we propose a mixed training strategy: the backbone is trained first, followed by the training of the entire multi-exit network. Through comprehensive evaluations of training strategies across various architectures, datasets, and early-exit methods we present strengths and weaknesses of the early exit training strategies. In particular, we show consistent improvements in performance and efficiency using the proposed mixed strategy.}
}
Endnote
%0 Conference Paper
%T How to Train Your Multi-Exit Model? Analyzing the Impact of Training Strategies
%A Piotr Kubaty
%A Bartosz Wójcik
%A Bartłomiej Tomasz Krzepkowski
%A Monika Michaluk
%A Tomasz Trzcinski
%A Jary Pomponi
%A Kamil Adamczewski
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-kubaty25a
%I PMLR
%P 31821--31840
%U https://proceedings.mlr.press/v267/kubaty25a.html
%V 267
%X Early exits enable the network’s forward pass to terminate early by attaching trainable internal classifiers to the backbone network. Existing early-exit methods typically adopt either a joint training approach, where the backbone and exit heads are trained simultaneously, or a disjoint approach, where the heads are trained separately. However, the implications of this choice are often overlooked, with studies typically adopting one approach without adequate justification. This choice influences training dynamics and its impact remains largely unexplored. In this paper, we introduce a set of metrics to analyze early-exit training dynamics and guide the choice of training strategy. We demonstrate that conventionally used joint and disjoint regimes yield suboptimal performance. To address these limitations, we propose a mixed training strategy: the backbone is trained first, followed by the training of the entire multi-exit network. Through comprehensive evaluations of training strategies across various architectures, datasets, and early-exit methods we present strengths and weaknesses of the early exit training strategies. In particular, we show consistent improvements in performance and efficiency using the proposed mixed strategy.
APA
Kubaty, P., Wójcik, B., Krzepkowski, B.T., Michaluk, M., Trzcinski, T., Pomponi, J. & Adamczewski, K. (2025). How to Train Your Multi-Exit Model? Analyzing the Impact of Training Strategies. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:31821-31840. Available from https://proceedings.mlr.press/v267/kubaty25a.html.
