DUNIA: Pixel-Sized Embeddings via Cross-Modal Alignment for Earth Observation Applications

Ibrahim Fayad, Max Zimmer, Martin Schwartz, Fabian Gieseke, Philippe Ciais, Gabriel Belouze, Sarah Brood, Aurélien De Truchis, Alexandre D’Aspremont
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:16375-16406, 2025.

Abstract

Significant efforts have been directed towards adapting self-supervised multimodal learning for Earth observation applications. However, most current methods produce coarse patch-sized embeddings, limiting their effectiveness and integration with other modalities like LiDAR. To close this gap, we present DUNIA, an approach to learn pixel-sized embeddings through cross-modal alignment between images and full-waveform LiDAR data. As the model is trained in a contrastive manner, the embeddings can be directly leveraged in the context of a variety of environmental monitoring tasks in a zero-shot setting. In our experiments, we demonstrate the effectiveness of the embeddings for seven such tasks: canopy height mapping, fractional canopy cover, land cover mapping, tree species identification, plant area index, crop type classification, and per-pixel waveform-based vertical structure mapping. The results show that the embeddings, along with zero-shot classifiers, often outperform specialized supervised models, even in low-data regimes. In the fine-tuning setting, we show strong performances near or better than the state-of-the-art on five out of six tasks.
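The paper's exact training objective is not reproduced on this page. Purely as an illustration of the pixel-level cross-modal contrastive idea the abstract describes, the sketch below implements a standard symmetric InfoNCE loss over matched pixel/waveform embedding pairs in NumPy; the function name, shapes, and temperature are our own assumptions, not details from the paper:

```python
import numpy as np

def pixel_infonce(img_emb, lidar_emb, temperature=0.07):
    """Symmetric InfoNCE over N matched pixel/waveform embedding pairs.

    img_emb, lidar_emb: (N, D) arrays; row i of each modality forms a
    positive pair, and all other rows act as in-batch negatives.
    """
    # L2-normalise so the dot product is a cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    lid = lidar_emb / np.linalg.norm(lidar_emb, axis=1, keepdims=True)

    logits = img @ lid.T / temperature      # (N, N) similarity matrix
    idx = np.arange(len(logits))            # positives lie on the diagonal

    def xent(l):
        # Row-wise cross-entropy against the diagonal targets.
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[idx, idx].mean()

    # Symmetrise: image-to-LiDAR and LiDAR-to-image retrieval directions.
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy check: 4 "pixels" with 8-dim embeddings.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
loss_random = pixel_infonce(a, rng.normal(size=(4, 8)))
loss_aligned = pixel_infonce(a, a)   # perfectly aligned pairs
```

Because every pixel (rather than every patch) gets its own embedding, a loss of this shape can be evaluated densely, which is what enables the zero-shot retrieval-style use of the embeddings described above.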

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-fayad25a,
  title     = {{DUNIA}: Pixel-Sized Embeddings via Cross-Modal Alignment for Earth Observation Applications},
  author    = {Fayad, Ibrahim and Zimmer, Max and Schwartz, Martin and Gieseke, Fabian and Ciais, Philippe and Belouze, Gabriel and Brood, Sarah and De Truchis, Aur\'{e}lien and D'Aspremont, Alexandre},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {16375--16406},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/fayad25a/fayad25a.pdf},
  url       = {https://proceedings.mlr.press/v267/fayad25a.html},
  abstract  = {Significant efforts have been directed towards adapting self-supervised multimodal learning for Earth observation applications. However, most current methods produce coarse patch-sized embeddings, limiting their effectiveness and integration with other modalities like LiDAR. To close this gap, we present DUNIA, an approach to learn pixel-sized embeddings through cross-modal alignment between images and full-waveform LiDAR data. As the model is trained in a contrastive manner, the embeddings can be directly leveraged in the context of a variety of environmental monitoring tasks in a zero-shot setting. In our experiments, we demonstrate the effectiveness of the embeddings for seven such tasks: canopy height mapping, fractional canopy cover, land cover mapping, tree species identification, plant area index, crop type classification, and per-pixel waveform-based vertical structure mapping. The results show that the embeddings, along with zero-shot classifiers, often outperform specialized supervised models, even in low-data regimes. In the fine-tuning setting, we show strong performances near or better than the state-of-the-art on five out of six tasks.}
}
Endnote
%0 Conference Paper
%T DUNIA: Pixel-Sized Embeddings via Cross-Modal Alignment for Earth Observation Applications
%A Ibrahim Fayad
%A Max Zimmer
%A Martin Schwartz
%A Fabian Gieseke
%A Philippe Ciais
%A Gabriel Belouze
%A Sarah Brood
%A Aurélien De Truchis
%A Alexandre D’Aspremont
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-fayad25a
%I PMLR
%P 16375--16406
%U https://proceedings.mlr.press/v267/fayad25a.html
%V 267
%X Significant efforts have been directed towards adapting self-supervised multimodal learning for Earth observation applications. However, most current methods produce coarse patch-sized embeddings, limiting their effectiveness and integration with other modalities like LiDAR. To close this gap, we present DUNIA, an approach to learn pixel-sized embeddings through cross-modal alignment between images and full-waveform LiDAR data. As the model is trained in a contrastive manner, the embeddings can be directly leveraged in the context of a variety of environmental monitoring tasks in a zero-shot setting. In our experiments, we demonstrate the effectiveness of the embeddings for seven such tasks: canopy height mapping, fractional canopy cover, land cover mapping, tree species identification, plant area index, crop type classification, and per-pixel waveform-based vertical structure mapping. The results show that the embeddings, along with zero-shot classifiers, often outperform specialized supervised models, even in low-data regimes. In the fine-tuning setting, we show strong performances near or better than the state-of-the-art on five out of six tasks.
APA
Fayad, I., Zimmer, M., Schwartz, M., Gieseke, F., Ciais, P., Belouze, G., Brood, S., De Truchis, A., & D’Aspremont, A. (2025). DUNIA: Pixel-Sized Embeddings via Cross-Modal Alignment for Earth Observation Applications. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:16375-16406. Available from https://proceedings.mlr.press/v267/fayad25a.html.