Interpretable data-driven model predictive control of building energy systems using SHAP
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:222-234, 2024.
Abstract
Advanced building energy system controls, such as model predictive control, rely on accurate system models. To reduce the modelling effort in the building sector, data-driven models are becoming increasingly popular in research. Despite their promising performance, data-driven models are considered black boxes. This black-box nature is an obstacle to widespread application, as it is difficult for building operators to understand how predictions are made. Methods from the field of Explainable Artificial Intelligence are being developed to improve the interpretability of black-box models. This work combines the popular Explainable Artificial Intelligence method Shapley Additive Explanations (SHAP) with data-driven model predictive control to increase the interpretability of artificial neural networks used as process models during model creation. Using a standardised residential building energy system for controller testing, an in-depth analysis of how the models make their predictions is carried out. In addition, the influence of different model setups on the control performance is evaluated. The results show that the differences in control performance can be explained by analysing the underlying models with SHAP. SHAP shows how the values of a feature affect the prediction and reveals weaknesses in the model. Furthermore, the features can be ranked according to their influence on the prediction, which is utilised for feature selection.
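To illustrate the kind of analysis the abstract describes, the sketch below shows how SHAP values can rank the input features of a neural-network process model, which can then inform feature selection. This is not the authors' code: the feature names, the synthetic data, and the use of scikit-learn with SHAP's model-agnostic KernelExplainer are illustrative assumptions.

```python
# Hypothetical sketch: rank the inputs of a data-driven process model
# by global SHAP importance. Feature names and data are made up.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
features = ["ambient_temp", "solar_irradiance", "occupancy", "heat_pump_power"]
X = rng.normal(size=(500, len(features)))
# Synthetic target: indoor temperature change dominated by two features.
y = 0.8 * X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=500)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

# Model-agnostic KernelExplainer; a small background sample keeps it tractable.
explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:100])

# Global importance: mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

In such a ranking, features with small mean absolute SHAP values contribute little to the prediction and are candidates for removal, which is one way the feature selection mentioned in the abstract could be realised.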