Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light

Eunah Jung, Nan Yang, Daniel Cremers
Proceedings of the Conference on Robot Learning, PMLR 100:651-660, 2020.

Abstract

We propose the concept of a multi-frame GAN (MFGAN) and demonstrate its potential as an image sequence enhancement for stereo visual odometry in low light conditions. We base our method on an invertible adversarial network to transfer the beneficial features of brightly illuminated scenes to the sequence in poor illumination without costly paired datasets. In order to preserve the coherent geometric cues for the translated sequence, we present a novel network architecture as well as a novel loss term combining temporal and stereo consistencies based on optical flow estimation. We demonstrate that the enhanced sequences improve the performance of state-of-the-art feature-based and direct stereo visual odometry methods on both synthetic and real datasets in challenging illumination. We also show that MFGAN outperforms other state-of-the-art image enhancement and style transfer methods by a large margin in terms of visual odometry.
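The abstract mentions a loss term that enforces temporal and stereo consistency on the translated sequence via optical flow. A minimal sketch of that flow-warped photometric consistency idea is below; it is not the authors' implementation, and `warp_by_flow`, `consistency_loss`, and the nearest-neighbor warping are illustrative simplifications (a real system would use bilinear sampling and learned flow, e.g. from a pretrained flow network).

```python
import numpy as np

def warp_by_flow(img, flow):
    """Backward-warp `img` by a dense flow field (nearest-neighbor sampling
    for simplicity). img: (H, W) array; flow: (H, W, 2) array of (dy, dx)
    offsets giving, for each target pixel, where to sample in `img`."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return img[src_y, src_x]

def consistency_loss(frame_a, frame_b, flow_ab):
    """L1 photometric consistency between frame_a and frame_b warped into
    frame_a's view. The same form can be applied temporally (consecutive
    frames) or across the stereo pair (left/right images)."""
    return float(np.mean(np.abs(frame_a - warp_by_flow(frame_b, flow_ab))))
```

With the correct flow between two views, the warped frame aligns with the reference and the loss drops toward zero; penalizing this term on the enhanced output is one way to keep geometric cues coherent across the sequence.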

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-jung20a,
  title     = {Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light},
  author    = {Jung, Eunah and Yang, Nan and Cremers, Daniel},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {651--660},
  year      = {2020},
  editor    = {Leslie Pack Kaelbling and Danica Kragic and Komei Sugiura},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/jung20a/jung20a.pdf},
  url       = {http://proceedings.mlr.press/v100/jung20a.html},
  abstract  = {We propose the concept of a multi-frame GAN (MFGAN) and demonstrate its potential as an image sequence enhancement for stereo visual odometry in low light conditions. We base our method on an invertible adversarial network to transfer the beneficial features of brightly illuminated scenes to the sequence in poor illumination without costly paired datasets. In order to preserve the coherent geometric cues for the translated sequence, we present a novel network architecture as well as a novel loss term combining temporal and stereo consistencies based on optical flow estimation. We demonstrate that the enhanced sequences improve the performance of state-of-the-art feature-based and direct stereo visual odometry methods on both synthetic and real datasets in challenging illumination. We also show that MFGAN outperforms other state-of-the-art image enhancement and style transfer methods by a large margin in terms of visual odometry.}
}
Endnote
%0 Conference Paper
%T Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light
%A Eunah Jung
%A Nan Yang
%A Daniel Cremers
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-jung20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 651--660
%U http://proceedings.mlr.press
%V 100
%W PMLR
%X We propose the concept of a multi-frame GAN (MFGAN) and demonstrate its potential as an image sequence enhancement for stereo visual odometry in low light conditions. We base our method on an invertible adversarial network to transfer the beneficial features of brightly illuminated scenes to the sequence in poor illumination without costly paired datasets. In order to preserve the coherent geometric cues for the translated sequence, we present a novel network architecture as well as a novel loss term combining temporal and stereo consistencies based on optical flow estimation. We demonstrate that the enhanced sequences improve the performance of state-of-the-art feature-based and direct stereo visual odometry methods on both synthetic and real datasets in challenging illumination. We also show that MFGAN outperforms other state-of-the-art image enhancement and style transfer methods by a large margin in terms of visual odometry.
APA
Jung, E., Yang, N. & Cremers, D. (2020). Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light. Proceedings of the Conference on Robot Learning, in PMLR 100:651-660.
