AlleNoise - large-scale text classification benchmark dataset with real-world label noise

Alicja Rączkowska, Aleksandra Osowska-Kurczab, Jacek Szczerbiński, Kalina Jasinska-Kobus, Klaudia Nazarko
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:5113-5121, 2025.

Abstract

Label noise remains a challenge for training robust classification models. Most methods for mitigating label noise have been benchmarked primarily on datasets with synthetic noise. While the need for datasets with realistic noise distributions has been partially addressed by web-scraped benchmarks such as WebVision and Clothing1M, those benchmarks are restricted to the computer vision domain. With the growing importance of Transformer-based models, it is crucial to establish text classification benchmarks for learning with noisy labels. In this paper, we present AlleNoise, a new curated text classification dataset with real-world instance-dependent label noise, containing over 500,000 examples across approximately 5,600 classes, complemented with a meaningful, hierarchical taxonomy of categories. The noise distribution comes from actual users of a major e-commerce marketplace, so it realistically reflects the semantics of human mistakes. In addition to the noisy labels, we provide human-verified clean labels, which give deeper insight into the noise distribution than the web-scraped datasets typically used in the field. We demonstrate that a representative selection of established methods for learning with noisy labels is inadequate to handle such real-world noise. In addition, we show evidence that these algorithms do not alleviate excessive memorization. As such, with AlleNoise, we set a high bar for the development of label noise methods that can handle real-world label noise in text classification tasks. The code and dataset are available for download at https://github.com/allegro/AlleNoise.
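
Since the dataset pairs a noisy (user-assigned) label with a human-verified clean label for each example, a natural first step after downloading it is to measure the overall noise rate. The sketch below is a minimal, hypothetical illustration of that check using pandas; the file name (allenoise.csv) and column names (text, noisy_label, true_label) are assumptions for illustration only, so consult the AlleNoise repository for the actual file layout and schema.

    import pandas as pd

    # Hypothetical file and column names -- check the AlleNoise repository
    # (https://github.com/allegro/AlleNoise) for the actual schema.
    df = pd.read_csv("allenoise.csv")

    # Fraction of examples whose user-assigned (noisy) category differs
    # from the human-verified (clean) category.
    noise_rate = (df["noisy_label"] != df["true_label"]).mean()

    print(f"examples: {len(df)}, classes: {df['true_label'].nunique()}, "
          f"noise rate: {noise_rate:.2%}")

The same comparison, grouped by category, would also reveal which parts of the taxonomy attract the most user mistakes, which is the kind of analysis the clean labels are meant to enable.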

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-raczkowska25a,
  title     = {AlleNoise - large-scale text classification benchmark dataset with real-world label noise},
  author    = {R{\k{a}}czkowska, Alicja and Osowska-Kurczab, Aleksandra and Szczerbi{\'n}ski, Jacek and Jasinska-Kobus, Kalina and Nazarko, Klaudia},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {5113--5121},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/raczkowska25a/raczkowska25a.pdf},
  url       = {https://proceedings.mlr.press/v258/raczkowska25a.html},
  abstract  = {Label noise remains a challenge for training robust classification models. Most methods for mitigating label noise have been benchmarked using primarily datasets with synthetic noise. While the need for datasets with realistic noise distribution has partially been addressed by web-scraped benchmarks such as WebVision and Clothing1M, those benchmarks are restricted to the computer vision domain. With the growing importance of Transformer-based models, it is crucial to establish text classification benchmarks for learning with noisy labels. In this paper, we present AlleNoise, a new curated text classification dataset with real-world instance-dependent label noise, containing over 500,000 examples across approximately 5600 classes, complemented with a meaningful, hierarchical taxonomy of categories. The noise distribution comes from actual users of a major e-commerce marketplace, so it realistically reflects the semantics of human mistakes. In addition to the noisy labels, we provide human-verified clean labels, which help to get a deeper insight into the noise distribution, unlike web-scraped datasets typically used in the field. We demonstrate that a representative selection of established methods for learning with noisy labels is inadequate to handle such real-world noise. In addition, we show evidence that these algorithms do not alleviate excessive memorization. As such, with AlleNoise, we set a high bar for the development of label noise methods that can handle real-world label noise in text classification tasks. The code and dataset are available for download at \url{https://github.com/allegro/AlleNoise.}}
}
Endnote
%0 Conference Paper
%T AlleNoise - large-scale text classification benchmark dataset with real-world label noise
%A Alicja Rączkowska
%A Aleksandra Osowska-Kurczab
%A Jacek Szczerbiński
%A Kalina Jasinska-Kobus
%A Klaudia Nazarko
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-raczkowska25a
%I PMLR
%P 5113--5121
%U https://proceedings.mlr.press/v258/raczkowska25a.html
%V 258
%X Label noise remains a challenge for training robust classification models. Most methods for mitigating label noise have been benchmarked using primarily datasets with synthetic noise. While the need for datasets with realistic noise distribution has partially been addressed by web-scraped benchmarks such as WebVision and Clothing1M, those benchmarks are restricted to the computer vision domain. With the growing importance of Transformer-based models, it is crucial to establish text classification benchmarks for learning with noisy labels. In this paper, we present AlleNoise, a new curated text classification dataset with real-world instance-dependent label noise, containing over 500,000 examples across approximately 5600 classes, complemented with a meaningful, hierarchical taxonomy of categories. The noise distribution comes from actual users of a major e-commerce marketplace, so it realistically reflects the semantics of human mistakes. In addition to the noisy labels, we provide human-verified clean labels, which help to get a deeper insight into the noise distribution, unlike web-scraped datasets typically used in the field. We demonstrate that a representative selection of established methods for learning with noisy labels is inadequate to handle such real-world noise. In addition, we show evidence that these algorithms do not alleviate excessive memorization. As such, with AlleNoise, we set a high bar for the development of label noise methods that can handle real-world label noise in text classification tasks. The code and dataset are available for download at \url{https://github.com/allegro/AlleNoise.}
APA
Rączkowska, A., Osowska-Kurczab, A., Szczerbiński, J., Jasinska-Kobus, K. & Nazarko, K. (2025). AlleNoise - large-scale text classification benchmark dataset with real-world label noise. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:5113-5121. Available from https://proceedings.mlr.press/v258/raczkowska25a.html.
