Video Polyp Segmentation using Implicit Networks

Aviad Dahan, Tal Shaharabany, Raja Giryes, Lior Wolf
Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, PMLR 250:326-337, 2024.

Abstract

Polyp segmentation in endoscopic videos is an essential task in medical image and video analysis, requiring pixel-level accuracy to identify and localize polyps within video sequences. Addressing this task unveils the intricate interplay of dynamic changes in the video and the complexities involved in tracking polyps across frames. Our research presents an innovative approach to these challenges that integrates, at test time, a pre-trained image (2D) model with a new form of implicit representation. By leveraging the temporal understanding provided by implicit networks and enhancing it with optical flow-based temporal losses, we significantly improve the precision and consistency of polyp segmentation across sequential frames. Our proposed framework demonstrates excellent performance across various medical benchmarks and datasets, setting a new standard in video polyp segmentation with high spatial and temporal consistency. Our code is publicly available at https://github.com/AviadDahan/VPS-implicit.
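
As a rough illustration of the optical flow-based temporal loss mentioned in the abstract (a minimal sketch, not the authors' released implementation, which is in the linked repository; the function names, tensor shapes, and the L1 penalty are assumptions), the idea is to warp the previous frame's predicted mask into the current frame with a precomputed flow field and penalize disagreement with the current prediction:

import torch
import torch.nn.functional as F


def warp_with_flow(mask_prev: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    # Warp a (B, 1, H, W) mask from frame t-1 into frame t using a (B, 2, H, W) flow field.
    _, _, h, w = mask_prev.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=mask_prev.device, dtype=torch.float32),
        torch.arange(w, device=mask_prev.device, dtype=torch.float32),
        indexing="ij",
    )
    # Displace pixel coordinates by the flow and normalize to [-1, 1] for grid_sample.
    grid_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1
    grid_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2), (x, y) order
    return F.grid_sample(mask_prev, grid, align_corners=True)


def temporal_consistency_loss(pred_t, pred_prev, flow_prev_to_t):
    # L1 disagreement between the current prediction and the flow-warped previous one
    # (illustrative choice of penalty; names here are hypothetical).
    warped_prev = warp_with_flow(pred_prev, flow_prev_to_t)
    return F.l1_loss(pred_t, warped_prev)

One natural use of such a term, consistent with the abstract, is to add it to the per-frame segmentation objective at test time so that the predicted masks stay coherent across consecutive frames.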

Cite this Paper

BibTeX
@InProceedings{pmlr-v250-dahan24c,
  title     = {Video Polyp Segmentation using Implicit Networks},
  author    = {Dahan, Aviad and Shaharabany, Tal and Giryes, Raja and Wolf, Lior},
  booktitle = {Proceedings of The 7th International Conference on Medical Imaging with Deep Learning},
  pages     = {326--337},
  year      = {2024},
  editor    = {Burgos, Ninon and Petitjean, Caroline and Vakalopoulou, Maria and Christodoulidis, Stergios and Coupe, Pierrick and Delingette, Hervé and Lartizien, Carole and Mateus, Diana},
  volume    = {250},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v250/main/assets/dahan24c/dahan24c.pdf},
  url       = {https://proceedings.mlr.press/v250/dahan24c.html},
  abstract  = {Polyp segmentation in endoscopic videos is an essential task in medical image and video analysis, requiring pixel-level accuracy to identify and localize polyps within video sequences. Addressing this task unveils the intricate interplay of dynamic changes in the video and the complexities involved in tracking polyps across frames. Our research presents an innovative approach to these challenges that integrates, at test time, a pre-trained image (2D) model with a new form of implicit representation. By leveraging the temporal understanding provided by implicit networks and enhancing it with optical flow-based temporal losses, we significantly improve the precision and consistency of polyp segmentation across sequential frames. Our proposed framework demonstrates excellent performance across various medical benchmarks and datasets, setting a new standard in video polyp segmentation with high spatial and temporal consistency. Our code is publicly available at https://github.com/AviadDahan/VPS-implicit.}
}
Endnote
%0 Conference Paper
%T Video Polyp Segmentation using Implicit Networks
%A Aviad Dahan
%A Tal Shaharabany
%A Raja Giryes
%A Lior Wolf
%B Proceedings of The 7th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ninon Burgos
%E Caroline Petitjean
%E Maria Vakalopoulou
%E Stergios Christodoulidis
%E Pierrick Coupe
%E Hervé Delingette
%E Carole Lartizien
%E Diana Mateus
%F pmlr-v250-dahan24c
%I PMLR
%P 326--337
%U https://proceedings.mlr.press/v250/dahan24c.html
%V 250
%X Polyp segmentation in endoscopic videos is an essential task in medical image and video analysis, requiring pixel-level accuracy to identify and localize polyps within video sequences. Addressing this task unveils the intricate interplay of dynamic changes in the video and the complexities involved in tracking polyps across frames. Our research presents an innovative approach to these challenges that integrates, at test time, a pre-trained image (2D) model with a new form of implicit representation. By leveraging the temporal understanding provided by implicit networks and enhancing it with optical flow-based temporal losses, we significantly improve the precision and consistency of polyp segmentation across sequential frames. Our proposed framework demonstrates excellent performance across various medical benchmarks and datasets, setting a new standard in video polyp segmentation with high spatial and temporal consistency. Our code is publicly available at https://github.com/AviadDahan/VPS-implicit.
APA
Dahan, A., Shaharabany, T., Giryes, R. & Wolf, L. (2024). Video Polyp Segmentation using Implicit Networks. Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 250:326-337. Available from https://proceedings.mlr.press/v250/dahan24c.html.
