AI Certification and Assessment Catalogues: Practical Use and Challenges in the Context of the European AI Act
Proceedings of the Fourth European Workshop on Algorithmic Fairness, PMLR 294:492-498, 2025.
Abstract
Certifying artificial intelligence (AI) systems remains a complex task, particularly as AI development has moved beyond traditional software paradigms. We investigate the certification of AI systems, focusing on the practical application and limitations of existing certification catalogues, by attempting to certify a publicly available AI system. We aim to evaluate how well current approaches work to effectively certify an AI system, and how AI systems that are no longer actively maintained or were not initially intended for certification can be selected and used in a sample certification process. Our methodology leverages the Fraunhofer AI Assessment Catalogue as a comprehensive tool to systematically assess an AI model’s compliance with certification standards, focusing on reliability and fairness. We find that while the catalogue effectively structures the evaluation process, it can also be cumbersome and time-consuming to use. We observe the limitations of an AI system that no longer has an active development team and highlight the importance of complete system documentation. Finally, we identify some limitations of the certification catalogues used and propose ideas for streamlining the certification process.