Towards End-to-End Training of Automatic Speech Recognition for Nigerian Pidgin

Amina Rufai, Afolabi Abeeb, Esther Oduntan, Tayo Arulogun, Oluwabukola Adegboro, Daniel Ajisafe
DLI 2025 Research Track, PMLR 302:1-10, 2026.

Abstract

The prevalence of automatic speech recognition (ASR) systems in spoken language applications has increased significantly in recent years. Notably, many African languages lack sufficient linguistic resources to support the robustness of these systems. This paper focuses on the development of an end-to-end speech recognition system customised for Nigerian Pidgin English. We investigated and evaluated different pretrained state-of-the-art architectures on a new dataset. Our empirical results demonstrate a notable performance of the variant Wav2Vec2 XLSR-53 on our dataset, achieving a word error rate (WER) of 29.6% on the test set, surpassing other architectures such as NeMo QuartzNet and Wav2Vec2.0 Base-100H in quantitative assessments. Additionally, we demonstrate that a pretrained state-of-the-art model does not work well out-of-the-box. We performed zero-shot evaluation using XLSR-English as the baseline, chosen for its similarity to Nigerian Pidgin. This yielded a higher WER of 73.7%. By adapting this architecture to nuances represented in our dataset, we reduce error by 59.84%. Our dataset comprises 4,277 recorded utterances from native speakers, partitioned into training, validation, and test sets. This study underscores the potential for improving ASR systems for under-resourced languages like Nigerian Pidgin English, contributing to greater inclusion in speech technology applications. We publicly release our unique parallel dataset (speech-to-text) on Nigerian Pidgin, as well as the model weights on Hugging Face. Our project and code are made available to foster future research from the community. Keywords: Automatic Speech Recognition, ASR, Nigerian Pidgin English, End-to-End.
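The 59.84% figure in the abstract is the *relative* reduction from the zero-shot WER (73.7%) to the fine-tuned WER (29.6%), and WER itself is the word-level edit distance divided by the reference length. A minimal sketch of both computations (the function below is an illustrative stand-alone implementation, not code from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# Relative WER reduction reported in the abstract:
zero_shot, finetuned = 0.737, 0.296
reduction = (zero_shot - finetuned) / zero_shot
print(f"{reduction:.2%}")  # → 59.84%
```

In practice, toolkits such as `jiwer` provide the same metric; the point here is only that the 59.84% figure follows directly from the two reported WERs.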

Cite this Paper


BibTeX
@InProceedings{pmlr-v302-rufai26a,
  title     = {Towards End-to-End Training of Automatic Speech Recognition for Nigerian Pidgin},
  author    = {Rufai, Amina and Abeeb, Afolabi and Oduntan, Esther and Arulogun, Tayo and Adegboro, Oluwabukola and Ajisafe, Daniel},
  booktitle = {DLI 2025 Research Track},
  pages     = {1--10},
  year      = {2026},
  editor    = {Haddad, Hatem and Kahira, Albert Njoroge and Bourhim, Sofia and Olatunji, Iyiola Emmanuel and Makhafola, Lesego and Mwase, Christine},
  volume    = {302},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--22 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v302/main/assets/rufai26a/rufai26a.pdf},
  url       = {https://proceedings.mlr.press/v302/rufai26a.html},
  abstract  = {The prevalence of automatic speech recognition (ASR) systems in spoken language applications has increased significantly in recent years. Notably, many African languages lack sufficient linguistic resources to support the robustness of these systems. This paper focuses on the development of an end-to-end speech recognition system customised for Nigerian Pidgin English. We investigated and evaluated different pretrained state-of-the-art architectures on a new dataset. Our empirical results demonstrate a notable performance of the variant Wav2Vec2 XLSR-53 on our dataset, achieving a word error rate (WER) of 29.6% on the test set, surpassing other architectures such as NeMo QuartzNet and Wav2Vec2.0 Base-100H in quantitative assessments. Additionally, we demonstrate that a pretrained state-of-the-art model does not work well out-of-the-box. We performed zero-shot evaluation using XLSR-English as the baseline, chosen for its similarity to Nigerian Pidgin. This yielded a higher WER of 73.7%. By adapting this architecture to nuances represented in our dataset, we reduce error by 59.84%. Our dataset comprises 4,277 recorded utterances from native speakers, partitioned into training, validation, and test sets. This study underscores the potential for improving ASR systems for under-resourced languages like Nigerian Pidgin English, contributing to greater inclusion in speech technology applications. We publicly release our unique parallel dataset (speech-to-text) on Nigerian Pidgin, as well as the model weights on Hugging Face. Our project and code are made available to foster future research from the community. Keywords: Automatic Speech Recognition, ASR, Nigerian Pidgin English, End-to-End.}
}
Endnote
%0 Conference Paper
%T Towards End-to-End Training of Automatic Speech Recognition for Nigerian Pidgin
%A Amina Rufai
%A Afolabi Abeeb
%A Esther Oduntan
%A Tayo Arulogun
%A Oluwabukola Adegboro
%A Daniel Ajisafe
%B DLI 2025 Research Track
%C Proceedings of Machine Learning Research
%D 2026
%E Hatem Haddad
%E Albert Njoroge Kahira
%E Sofia Bourhim
%E Iyiola Emmanuel Olatunji
%E Lesego Makhafola
%E Christine Mwase
%F pmlr-v302-rufai26a
%I PMLR
%P 1--10
%U https://proceedings.mlr.press/v302/rufai26a.html
%V 302
%X The prevalence of automatic speech recognition (ASR) systems in spoken language applications has increased significantly in recent years. Notably, many African languages lack sufficient linguistic resources to support the robustness of these systems. This paper focuses on the development of an end-to-end speech recognition system customised for Nigerian Pidgin English. We investigated and evaluated different pretrained state-of-the-art architectures on a new dataset. Our empirical results demonstrate a notable performance of the variant Wav2Vec2 XLSR-53 on our dataset, achieving a word error rate (WER) of 29.6% on the test set, surpassing other architectures such as NeMo QuartzNet and Wav2Vec2.0 Base-100H in quantitative assessments. Additionally, we demonstrate that a pretrained state-of-the-art model does not work well out-of-the-box. We performed zero-shot evaluation using XLSR-English as the baseline, chosen for its similarity to Nigerian Pidgin. This yielded a higher WER of 73.7%. By adapting this architecture to nuances represented in our dataset, we reduce error by 59.84%. Our dataset comprises 4,277 recorded utterances from native speakers, partitioned into training, validation, and test sets. This study underscores the potential for improving ASR systems for under-resourced languages like Nigerian Pidgin English, contributing to greater inclusion in speech technology applications. We publicly release our unique parallel dataset (speech-to-text) on Nigerian Pidgin, as well as the model weights on Hugging Face. Our project and code are made available to foster future research from the community. Keywords: Automatic Speech Recognition, ASR, Nigerian Pidgin English, End-to-End.
APA
Rufai, A., Abeeb, A., Oduntan, E., Arulogun, T., Adegboro, O. & Ajisafe, D. (2026). Towards End-to-End Training of Automatic Speech Recognition for Nigerian Pidgin. DLI 2025 Research Track, in Proceedings of Machine Learning Research 302:1-10. Available from https://proceedings.mlr.press/v302/rufai26a.html.
