AI Certification and Assessment Catalogues: Practical Use and Challenges in the Context of the European AI Act

Gregor Autischer, Kerstin Waxnegger, Dominik Kowald
Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR 294:492-498, 2025.

Abstract

Certifying artificial intelligence (AI) systems remains a complex task, particularly as AI development has moved beyond traditional software paradigms. We investigate the certification of AI systems, focusing on the practical application and limitations of existing certification catalogues, by attempting to certify a publicly available AI system. We aim to evaluate how well current approaches work for effectively certifying an AI system, and how AI systems that might not be actively maintained or initially intended for certification can be selected and used for a sample certification process. Our methodology involves leveraging the Fraunhofer AI Assessment Catalogue as a comprehensive tool to systematically assess an AI model’s compliance with certification standards, focusing on reliability and fairness. We find that while the catalogue effectively structures the evaluation process, it can also be cumbersome and time-consuming to use. We observe the limitations of an AI system that no longer has an active development team and highlight the importance of complete system documentation. Finally, we identify some limitations of the certification catalogues used and propose ideas on how to streamline the certification process.

Cite this Paper

BibTeX
@InProceedings{pmlr-v294-autischer25a,
  title     = {AI Certification and Assessment Catalogues: Practical Use and Challenges in the Context of the European AI Act},
  author    = {Autischer, Gregor and Waxnegger, Kerstin and Kowald, Dominik},
  booktitle = {Proceedings of Fourth European Workshop on Algorithmic Fairness},
  pages     = {492--498},
  year      = {2025},
  editor    = {Weerts, Hilde and Pechenizkiy, Mykola and Allhutter, Doris and Corrêa, Ana Maria and Grote, Thomas and Liem, Cynthia},
  volume    = {294},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--02 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v294/main/assets/autischer25a/autischer25a.pdf},
  url       = {https://proceedings.mlr.press/v294/autischer25a.html},
  abstract  = {Certifying artificial intelligence (AI) systems remains a complex task, particularly as AI development has moved beyond traditional software paradigms. We investigate the certification of AI systems, focusing on the practical application and limitations of existing certification catalogues, by attempting to certify a publicly available AI system. We aim to evaluate how well current approaches work for effectively certifying an AI system, and how AI systems that might not be actively maintained or initially intended for certification can be selected and used for a sample certification process. Our methodology involves leveraging the Fraunhofer AI Assessment Catalogue as a comprehensive tool to systematically assess an AI model’s compliance with certification standards, focusing on reliability and fairness. We find that while the catalogue effectively structures the evaluation process, it can also be cumbersome and time-consuming to use. We observe the limitations of an AI system that no longer has an active development team and highlight the importance of complete system documentation. Finally, we identify some limitations of the certification catalogues used and propose ideas on how to streamline the certification process.}
}
Endnote
%0 Conference Paper
%T AI Certification and Assessment Catalogues: Practical Use and Challenges in the Context of the European AI Act
%A Gregor Autischer
%A Kerstin Waxnegger
%A Dominik Kowald
%B Proceedings of Fourth European Workshop on Algorithmic Fairness
%C Proceedings of Machine Learning Research
%D 2025
%E Hilde Weerts
%E Mykola Pechenizkiy
%E Doris Allhutter
%E Ana Maria Corrêa
%E Thomas Grote
%E Cynthia Liem
%F pmlr-v294-autischer25a
%I PMLR
%P 492--498
%U https://proceedings.mlr.press/v294/autischer25a.html
%V 294
%X Certifying artificial intelligence (AI) systems remains a complex task, particularly as AI development has moved beyond traditional software paradigms. We investigate the certification of AI systems, focusing on the practical application and limitations of existing certification catalogues, by attempting to certify a publicly available AI system. We aim to evaluate how well current approaches work for effectively certifying an AI system, and how AI systems that might not be actively maintained or initially intended for certification can be selected and used for a sample certification process. Our methodology involves leveraging the Fraunhofer AI Assessment Catalogue as a comprehensive tool to systematically assess an AI model’s compliance with certification standards, focusing on reliability and fairness. We find that while the catalogue effectively structures the evaluation process, it can also be cumbersome and time-consuming to use. We observe the limitations of an AI system that no longer has an active development team and highlight the importance of complete system documentation. Finally, we identify some limitations of the certification catalogues used and propose ideas on how to streamline the certification process.
APA
Autischer, G., Waxnegger, K., & Kowald, D. (2025). AI Certification and Assessment Catalogues: Practical Use and Challenges in the Context of the European AI Act. Proceedings of Fourth European Workshop on Algorithmic Fairness, in Proceedings of Machine Learning Research 294:492-498. Available from https://proceedings.mlr.press/v294/autischer25a.html.