Measuring Interpretability of Neural Policies of Robots with Disentangled Representation

Tsun-Hsuan Wang, Wei Xiao, Tim Seyde, Ramin Hasani, Daniela Rus
Proceedings of The 7th Conference on Robot Learning, PMLR 229:602-641, 2023.

Abstract

The advancement of robots, particularly those functioning in complex human-centric environments, relies on control solutions that are driven by machine learning. Understanding how learning-based controllers make decisions is crucial since robots are mostly safety-critical systems. This urges a formal and quantitative understanding of the explanatory factors in the interpretability of robot learning. In this paper, we study the interpretability of compact neural policies through the lens of disentangled representation. We leverage decision trees to obtain factors of variation [1] for disentanglement in robot learning; these encapsulate skills, behaviors, or strategies toward solving tasks. To assess how well networks uncover the underlying task dynamics, we introduce interpretability metrics that measure the disentanglement of learned neural dynamics from the perspectives of decision concentration, mutual information, and modularity. We showcase the effectiveness of the connection between interpretability and disentanglement consistently across extensive experimental analyses.
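
The precise metric definitions are given in the paper itself; as a rough, non-authoritative illustration of the mutual-information perspective mentioned above, the sketch below computes a MIG-style gap between individual neuron activations and decision-tree-derived factor labels. All names (discretize, mig_score) and the synthetic data are hypothetical and are not the paper's implementation.

# Minimal sketch (assumption): relate neuron responses to tree-derived factors
# via a Mutual Information Gap. Not the paper's actual metric or API.
import numpy as np
from sklearn.metrics import mutual_info_score

def discretize(x, bins=20):
    # Bin continuous neuron activations so a discrete MI estimate can be used.
    edges = np.histogram(x, bins=bins)[1]
    return np.digitize(x, edges[:-1])

def mig_score(activations, factor_labels):
    # For each factor (e.g., a decision-tree branch label per time step), take the
    # gap between the two neurons sharing the most mutual information with it,
    # normalized by the factor's entropy, then average over factors.
    n_neurons = activations.shape[1]
    gaps = []
    for factor in factor_labels.T:
        mi = np.array([
            mutual_info_score(factor, discretize(activations[:, j]))
            for j in range(n_neurons)
        ])
        entropy = mutual_info_score(factor, factor)  # H(factor) in nats
        top = np.sort(mi)[::-1]
        gaps.append((top[0] - top[1]) / max(entropy, 1e-12))
    return float(np.mean(gaps))

# Toy usage with random data: T x N neuron responses, T x K factor labels.
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 8))
factors = rng.integers(0, 4, size=(1000, 2))
print(mig_score(acts, factors))

A higher score would indicate that each tree-derived factor is captured predominantly by a single neuron, which is one intuitive reading of "disentangled" neural dynamics.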

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-wang23c,
  title     = {Measuring Interpretability of Neural Policies of Robots with Disentangled Representation},
  author    = {Wang, Tsun-Hsuan and Xiao, Wei and Seyde, Tim and Hasani, Ramin and Rus, Daniela},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {602--641},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/wang23c/wang23c.pdf},
  url       = {https://proceedings.mlr.press/v229/wang23c.html},
  abstract  = {The advancement of robots, particularly those functioning in complex human-centric environments, relies on control solutions that are driven by machine learning. Understanding how learning-based controllers make decisions is crucial since robots are mostly safety-critical systems. This urges a formal and quantitative understanding of the explanatory factors in the interpretability of robot learning. In this paper, we aim to study interpretability of compact neural policies through the lens of disentangled representation. We leverage decision trees to obtain factors of variation [1] for disentanglement in robot learning; these encapsulate skills, behaviors, or strategies toward solving tasks. To assess how well networks uncover the underlying task dynamics, we introduce interpretability metrics that measure disentanglement of learned neural dynamics from a concentration of decisions, mutual information and modularity perspective. We showcase the effectiveness of the connection between interpretability and disentanglement consistently across extensive experimental analysis.}
}
Endnote
%0 Conference Paper
%T Measuring Interpretability of Neural Policies of Robots with Disentangled Representation
%A Tsun-Hsuan Wang
%A Wei Xiao
%A Tim Seyde
%A Ramin Hasani
%A Daniela Rus
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-wang23c
%I PMLR
%P 602--641
%U https://proceedings.mlr.press/v229/wang23c.html
%V 229
%X The advancement of robots, particularly those functioning in complex human-centric environments, relies on control solutions that are driven by machine learning. Understanding how learning-based controllers make decisions is crucial since robots are mostly safety-critical systems. This urges a formal and quantitative understanding of the explanatory factors in the interpretability of robot learning. In this paper, we aim to study interpretability of compact neural policies through the lens of disentangled representation. We leverage decision trees to obtain factors of variation [1] for disentanglement in robot learning; these encapsulate skills, behaviors, or strategies toward solving tasks. To assess how well networks uncover the underlying task dynamics, we introduce interpretability metrics that measure disentanglement of learned neural dynamics from a concentration of decisions, mutual information and modularity perspective. We showcase the effectiveness of the connection between interpretability and disentanglement consistently across extensive experimental analysis.
APA
Wang, T., Xiao, W., Seyde, T., Hasani, R. & Rus, D. (2023). Measuring Interpretability of Neural Policies of Robots with Disentangled Representation. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:602-641. Available from https://proceedings.mlr.press/v229/wang23c.html.
