Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals

Clément Bonet, Benoît Malézieux, Alain Rakotomamonjy, Lucas Drumetz, Thomas Moreau, Matthieu Kowalski, Nicolas Courty
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:2777-2805, 2023.

Abstract

When dealing with electro- or magnetoencephalography records, many supervised prediction tasks are solved by working with covariance matrices to summarize the signals. Learning with these matrices requires the use of Riemannian geometry to account for their structure. In this paper, we propose a new method to deal with distributions of covariance matrices, and demonstrate its computational efficiency on M/EEG multivariate time series. More specifically, we define a Sliced-Wasserstein distance between measures of symmetric positive definite matrices that comes with strong theoretical guarantees. Then, we take advantage of its properties and kernel methods to apply this discrepancy to brain-age prediction from MEG data, and compare it to state-of-the-art algorithms based on Riemannian geometry. Finally, we show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain Computer Interface applications.
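The abstract only sketches the construction, so here is a minimal, hedged Python illustration of the general idea: a sliced-Wasserstein discrepancy between two sets of covariance (SPD) matrices, computed through a log-Euclidean embedding in which each matrix is mapped to the space of symmetric matrices by the matrix logarithm, projected onto random unit-norm symmetric directions, and the closed-form one-dimensional Wasserstein distances are averaged. This is an illustrative stand-in, not the paper's exact projection scheme; the function names (log_euclidean_sliced_wasserstein, random_spd) and all parameter choices are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import logm


def log_euclidean_sliced_wasserstein(X, Y, n_projections=100, p=2, seed=0):
    """Sliced-Wasserstein discrepancy between two lists of SPD matrices,
    computed in a log-Euclidean embedding (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    d = X[0].shape[0]

    # Map each SPD matrix into the vector space of symmetric matrices
    # via the matrix logarithm, then flatten to a d*d vector.
    X_log = np.stack([logm(M).real.ravel() for M in X])
    Y_log = np.stack([logm(M).real.ravel() for M in Y])

    sw = 0.0
    for _ in range(n_projections):
        # Draw a random symmetric direction with unit Frobenius norm.
        A = rng.standard_normal((d, d))
        A = (A + A.T) / 2.0
        theta = (A / np.linalg.norm(A)).ravel()
        # Project both samples onto this direction (1D point clouds).
        x_proj = np.sort(X_log @ theta)
        y_proj = np.sort(Y_log @ theta)
        # Closed-form 1D Wasserstein-p between empirical measures with the
        # same number of uniformly weighted points: compare sorted values.
        sw += np.mean(np.abs(x_proj - y_proj) ** p)
    return (sw / n_projections) ** (1.0 / p)


def random_spd(n, d, rng):
    """Toy generator of well-conditioned random SPD matrices."""
    mats = []
    for _ in range(n):
        B = rng.standard_normal((d, d))
        mats.append(B @ B.T + d * np.eye(d))
    return mats


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = random_spd(50, 4, rng)   # e.g. covariance matrices from one recording
    Y = random_spd(50, 4, rng)   # e.g. covariance matrices from another
    print(log_euclidean_sliced_wasserstein(X, Y))
```

Whatever the specific projection scheme, such a sliced discrepancy only requires sorting one-dimensional projections, which is what makes it an attractive computational surrogate to the full Wasserstein distance mentioned in the abstract.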

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-bonet23a,
  title     = {Sliced-{W}asserstein on Symmetric Positive Definite Matrices for {M}/{EEG} Signals},
  author    = {Bonet, Cl\'{e}ment and Mal\'{e}zieux, Beno\^{\i}t and Rakotomamonjy, Alain and Drumetz, Lucas and Moreau, Thomas and Kowalski, Matthieu and Courty, Nicolas},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {2777--2805},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/bonet23a/bonet23a.pdf},
  url       = {https://proceedings.mlr.press/v202/bonet23a.html},
  abstract  = {When dealing with electro- or magnetoencephalography records, many supervised prediction tasks are solved by working with covariance matrices to summarize the signals. Learning with these matrices requires the use of Riemannian geometry to account for their structure. In this paper, we propose a new method to deal with distributions of covariance matrices, and demonstrate its computational efficiency on M/EEG multivariate time series. More specifically, we define a Sliced-Wasserstein distance between measures of symmetric positive definite matrices that comes with strong theoretical guarantees. Then, we take advantage of its properties and kernel methods to apply this discrepancy to brain-age prediction from MEG data, and compare it to state-of-the-art algorithms based on Riemannian geometry. Finally, we show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain Computer Interface applications.}
}
Endnote
%0 Conference Paper
%T Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals
%A Clément Bonet
%A Benoît Malézieux
%A Alain Rakotomamonjy
%A Lucas Drumetz
%A Thomas Moreau
%A Matthieu Kowalski
%A Nicolas Courty
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-bonet23a
%I PMLR
%P 2777--2805
%U https://proceedings.mlr.press/v202/bonet23a.html
%V 202
%X When dealing with electro- or magnetoencephalography records, many supervised prediction tasks are solved by working with covariance matrices to summarize the signals. Learning with these matrices requires the use of Riemannian geometry to account for their structure. In this paper, we propose a new method to deal with distributions of covariance matrices, and demonstrate its computational efficiency on M/EEG multivariate time series. More specifically, we define a Sliced-Wasserstein distance between measures of symmetric positive definite matrices that comes with strong theoretical guarantees. Then, we take advantage of its properties and kernel methods to apply this discrepancy to brain-age prediction from MEG data, and compare it to state-of-the-art algorithms based on Riemannian geometry. Finally, we show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain Computer Interface applications.
APA
Bonet, C., Malézieux, B., Rakotomamonjy, A., Drumetz, L., Moreau, T., Kowalski, M. & Courty, N. (2023). Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:2777-2805. Available from https://proceedings.mlr.press/v202/bonet23a.html.
