Benchmarking Automatic Speech Recognition Models for African Languages

Alvin Nahabwe, Sulaiman Kagumire, Denis Musinguzi, Bruno Beijuka, Jonah Kyagaba, Peter Nabende, Andrew Katumba, Joyce Nakatumba-Nabende
DLI 2025 Research Track, PMLR 302:1-19, 2026.

Abstract

Automatic speech recognition (ASR) for African languages remains constrained by limited labeled data and the lack of systematic guidance on model selection, data scaling, and decoding strategies. Large pre-trained systems such as Whisper, XLS-R, MMS, and W2v-BERT have expanded access to ASR technology, but their comparative behavior in African low-resource contexts has not been studied in a unified and systematic way. In this work, we benchmark four state-of-the-art ASR models across 13 African languages, fine-tuning them on progressively larger subsets of transcribed data ranging from 1 to 400 hours. Beyond reporting error rates, we provide new insights into why models behave differently under varying conditions. We show that MMS and W2v-BERT are more data efficient in very low-resource regimes, XLS-R scales more effectively as additional data becomes available, and Whisper demonstrates advantages in mid-resource conditions. We also analyze where external language model decoding yields improvements and identify cases where it plateaus or introduces additional errors, depending on the alignment between acoustic and text resources. By highlighting the interaction between pre-training coverage, model architecture, dataset domain, and resource availability, this study offers practical insights into the design of ASR systems for underrepresented languages.

Keywords: Automatic Speech Recognition, African Languages, Low-Resource ASR, Pre-trained Models, Language Modeling.
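The abstract's comparisons rest on word error rate (WER): the word-level edit distance between a reference transcript and a hypothesis, normalized by the reference length. A minimal self-contained sketch of the standard metric (not the authors' evaluation code, which the paper does not reproduce here):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count.

    Insertions, deletions, and substitutions each cost 1.
    """
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table: d[i][j] = edit distance between
    # the first i reference words and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub_cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,            # deletion
                d[i][j - 1] + 1,            # insertion
                d[i - 1][j - 1] + sub_cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


print(wer("the cat sat", "the cat sat"))  # → 0.0
```

Libraries such as `jiwer` provide the same metric with additional text normalization; for agglutinative African languages, character error rate (CER) is often reported alongside WER because single-word errors can span several morphemes.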

Cite this Paper


BibTeX
@InProceedings{pmlr-v302-nahabwe26a,
  title     = {Benchmarking Automatic Speech Recognition Models for African Languages},
  author    = {Nahabwe, Alvin and Kagumire, Sulaiman and Musinguzi, Denis and Beijuka, Bruno and Kyagaba, Jonah and Nabende, Peter and Katumba, Andrew and Nakatumba-Nabende, Joyce},
  booktitle = {DLI 2025 Research Track},
  pages     = {1--19},
  year      = {2026},
  editor    = {Haddad, Hatem and Kahira, Albert Njoroge and Bourhim, Sofia and Olatunji, Iyiola Emmanuel and Makhafola, Lesego and Mwase, Christine},
  volume    = {302},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--22 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v302/main/assets/nahabwe26a/nahabwe26a.pdf},
  url       = {https://proceedings.mlr.press/v302/nahabwe26a.html},
  abstract  = {Automatic speech recognition (ASR) for African languages remains constrained by limited labeled data and the lack of systematic guidance on model selection, data scaling, and decoding strategies. Large pre-trained systems such as Whisper, XLS-R, MMS, and W2v-BERT have expanded access to ASR technology, but their comparative behavior in African low-resource contexts has not been studied in a unified and systematic way. In this work, we benchmark four state-of-the-art ASR models across 13 African languages, fine-tuning them on progressively larger subsets of transcribed data ranging from 1 to 400 hours. Beyond reporting error rates, we provide new insights into why models behave differently under varying conditions. We show that MMS and W2v-BERT are more data efficient in very low-resource regimes, XLS-R scales more effectively as additional data becomes available, and Whisper demonstrates advantages in mid-resource conditions. We also analyze where external language model decoding yields improvements and identify cases where it plateaus or introduces additional errors, depending on the alignment between acoustic and text resources. By highlighting the interaction between pre-training coverage, model architecture, dataset domain, and resource availability, this study offers practical insights into the design of ASR systems for underrepresented languages. Keywords: Automatic Speech Recognition, African Languages, Low-Resource ASR, Pre-trained Models, Language Modeling.}
}
Endnote
%0 Conference Paper
%T Benchmarking Automatic Speech Recognition Models for African Languages
%A Alvin Nahabwe
%A Sulaiman Kagumire
%A Denis Musinguzi
%A Bruno Beijuka
%A Jonah Kyagaba
%A Peter Nabende
%A Andrew Katumba
%A Joyce Nakatumba-Nabende
%B DLI 2025 Research Track
%C Proceedings of Machine Learning Research
%D 2026
%E Hatem Haddad
%E Albert Njoroge Kahira
%E Sofia Bourhim
%E Iyiola Emmanuel Olatunji
%E Lesego Makhafola
%E Christine Mwase
%F pmlr-v302-nahabwe26a
%I PMLR
%P 1--19
%U https://proceedings.mlr.press/v302/nahabwe26a.html
%V 302
%X Automatic speech recognition (ASR) for African languages remains constrained by limited labeled data and the lack of systematic guidance on model selection, data scaling, and decoding strategies. Large pre-trained systems such as Whisper, XLS-R, MMS, and W2v-BERT have expanded access to ASR technology, but their comparative behavior in African low-resource contexts has not been studied in a unified and systematic way. In this work, we benchmark four state-of-the-art ASR models across 13 African languages, fine-tuning them on progressively larger subsets of transcribed data ranging from 1 to 400 hours. Beyond reporting error rates, we provide new insights into why models behave differently under varying conditions. We show that MMS and W2v-BERT are more data efficient in very low-resource regimes, XLS-R scales more effectively as additional data becomes available, and Whisper demonstrates advantages in mid-resource conditions. We also analyze where external language model decoding yields improvements and identify cases where it plateaus or introduces additional errors, depending on the alignment between acoustic and text resources. By highlighting the interaction between pre-training coverage, model architecture, dataset domain, and resource availability, this study offers practical insights into the design of ASR systems for underrepresented languages. Keywords: Automatic Speech Recognition, African Languages, Low-Resource ASR, Pre-trained Models, Language Modeling.
APA
Nahabwe, A., Kagumire, S., Musinguzi, D., Beijuka, B., Kyagaba, J., Nabende, P., Katumba, A. & Nakatumba-Nabende, J. (2026). Benchmarking Automatic Speech Recognition Models for African Languages. DLI 2025 Research Track, in Proceedings of Machine Learning Research 302:1-19. Available from https://proceedings.mlr.press/v302/nahabwe26a.html.
