Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for ML with Satellite Imagery

Arjun Rao, Esther Rolf
Proceedings of The TerraBytes ICML Workshop: Towards global datasets and models for Earth Observation, PMLR 292:166-188, 2025.

Abstract

A large variety of geospatial data layers is available around the world ranging from remotely-sensed raster data like satellite imagery, digital elevation models, predicted land cover maps, and human-annotated data, to data derived from environmental sensors such as air temperature or wind speed data. A large majority of machine learning models trained on satellite imagery (SatML), however, are designed primarily for \emph{optical} input modalities such as multi-spectral satellite imagery. To better understand the value of using other input modalities alongside optical imagery in supervised learning settings, we generate augmented versions of SatML benchmark tasks by appending additional geographic data layers to datasets spanning classification, regression, and segmentation. Using these augmented datasets, we find that fusing additional geographic inputs with optical imagery can significantly improve SatML model performance. Benefits are largest in settings where labeled data are limited and in geographic out-of-sample settings, suggesting that multi-modal inputs may be especially valuable for data-efficiency and out-of-sample performance of SatML models. Surprisingly, we find that hard-coded fusion strategies outperform learned variants, with interesting implications for future work.
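To make the input-level fusion described in the abstract concrete, here is a minimal illustrative sketch (not the authors' code): a "hard-coded" fusion strategy that appends a co-registered geographic raster, such as elevation, onto the channel axis of a multi-spectral image. The shapes and the helper name `append_layer` are hypothetical.

```python
# Illustrative sketch of input-level ("hard-coded") fusion: concatenating
# an additional geographic data layer onto a multi-spectral image's
# channel axis, as described in the abstract. Not the authors' code.
import numpy as np

def append_layer(optical: np.ndarray, layer: np.ndarray) -> np.ndarray:
    """Stack a single-band geographic layer (H, W) onto a
    multi-spectral image (H, W, C) along the channel axis."""
    return np.concatenate([optical, layer[..., np.newaxis]], axis=-1)

# Hypothetical shapes: a 64x64 patch with 4 spectral bands, plus a
# 64x64 elevation raster resampled to the same grid and extent.
optical = np.zeros((64, 64, 4), dtype=np.float32)
elevation = np.zeros((64, 64), dtype=np.float32)

fused = append_layer(optical, elevation)
print(fused.shape)  # (64, 64, 5)
```

A downstream model then consumes the fused array as a single 5-channel input, which is what makes this fusion "hard-coded" rather than learned.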

Cite this Paper


BibTeX
@InProceedings{pmlr-v292-rao25a,
  title     = {Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for {ML} with Satellite Imagery},
  author    = {Rao, Arjun and Rolf, Esther},
  booktitle = {Proceedings of The TerraBytes {ICML} Workshop: Towards global datasets and models for Earth Observation},
  pages     = {166--188},
  year      = {2025},
  editor    = {Audebert, Nicolas and Azizpour, Hossein and Barrière, Valentin and Castillo Navarro, Javiera and Czerkawski, Mikolaj and Fang, Heng and Francis, Alistair and Marsocci, Valerio and Nascetti, Andrea and Yadav, Ritu},
  volume    = {292},
  series    = {Proceedings of Machine Learning Research},
  month     = {19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v292/main/assets/rao25a/rao25a.pdf},
  url       = {https://proceedings.mlr.press/v292/rao25a.html},
  abstract  = {A large variety of geospatial data layers is available around the world ranging from remotely-sensed raster data like satellite imagery, digital elevation models, predicted land cover maps, and human-annotated data, to data derived from environmental sensors such as air temperature or wind speed data. A large majority of machine learning models trained on satellite imagery (SatML), however, are designed primarily for \emph{optical} input modalities such as multi-spectral satellite imagery. To better understand the value of using other input modalities alongside optical imagery in supervised learning settings, we generate augmented versions of SatML benchmark tasks by appending additional geographic data layers to datasets spanning classification, regression, and segmentation. Using these augmented datasets, we find that fusing additional geographic inputs with optical imagery can significantly improve SatML model performance. Benefits are largest in settings where labeled data are limited and in geographic out-of-sample settings, suggesting that multi-modal inputs may be especially valuable for data-efficiency and out-of-sample performance of SatML models. Surprisingly, we find that hard-coded fusion strategies outperform learned variants, with interesting implications for future work.}
}
Endnote
%0 Conference Paper
%T Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for ML with Satellite Imagery
%A Arjun Rao
%A Esther Rolf
%B Proceedings of The TerraBytes ICML Workshop: Towards global datasets and models for Earth Observation
%C Proceedings of Machine Learning Research
%D 2025
%E Nicolas Audebert
%E Hossein Azizpour
%E Valentin Barrière
%E Javiera Castillo Navarro
%E Mikolaj Czerkawski
%E Heng Fang
%E Alistair Francis
%E Valerio Marsocci
%E Andrea Nascetti
%E Ritu Yadav
%F pmlr-v292-rao25a
%I PMLR
%P 166--188
%U https://proceedings.mlr.press/v292/rao25a.html
%V 292
%X A large variety of geospatial data layers is available around the world ranging from remotely-sensed raster data like satellite imagery, digital elevation models, predicted land cover maps, and human-annotated data, to data derived from environmental sensors such as air temperature or wind speed data. A large majority of machine learning models trained on satellite imagery (SatML), however, are designed primarily for optical input modalities such as multi-spectral satellite imagery. To better understand the value of using other input modalities alongside optical imagery in supervised learning settings, we generate augmented versions of SatML benchmark tasks by appending additional geographic data layers to datasets spanning classification, regression, and segmentation. Using these augmented datasets, we find that fusing additional geographic inputs with optical imagery can significantly improve SatML model performance. Benefits are largest in settings where labeled data are limited and in geographic out-of-sample settings, suggesting that multi-modal inputs may be especially valuable for data-efficiency and out-of-sample performance of SatML models. Surprisingly, we find that hard-coded fusion strategies outperform learned variants, with interesting implications for future work.
APA
Rao, A. & Rolf, E. (2025). Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for ML with Satellite Imagery. Proceedings of The TerraBytes ICML Workshop: Towards global datasets and models for Earth Observation, in Proceedings of Machine Learning Research 292:166-188. Available from https://proceedings.mlr.press/v292/rao25a.html.