Towards Unified Multi-granularity Text Detection with Interactive Attention

Xingyu Wan, Chengquan Zhang, Pengyuan Lyu, Sen Fan, Zihan Ni, Kun Yao, Errui Ding, Jingdong Wang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:50012-50025, 2024.

Abstract

Existing OCR engines or document image analysis systems typically rely on training separate models for text detection in varying scenarios and granularities, leading to significant computational complexity and resource demands. In this paper, we introduce "Detect Any Text" (DAT), an advanced paradigm that seamlessly unifies scene text detection, layout analysis, and document page detection into a cohesive, end-to-end model. This design enables DAT to efficiently manage text instances at different granularities, including word, line, paragraph, and page. A pivotal innovation in DAT is the across-granularity interactive attention module, which significantly enhances the representation learning of text instances at varying granularities by correlating structural information across different text queries. As a result, it enables the model to achieve mutually beneficial detection performance across multiple text granularities. Additionally, a prompt-based segmentation module refines detection outcomes for texts of arbitrary curvature and complex layouts, thereby improving DAT's accuracy and expanding its real-world applicability. Experimental results demonstrate that DAT achieves state-of-the-art performance across a variety of text-related benchmarks, including multi-oriented/arbitrarily-shaped scene text detection, document layout analysis, and page detection tasks.
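
To make the core idea concrete, below is a minimal PyTorch sketch of what an across-granularity interactive attention step could look like, assuming it amounts to self-attention over the concatenation of granularity-specific query sets. The class name, dimensions, and query counts are hypothetical illustrations; the paper's actual module design may differ.

import torch
import torch.nn as nn

class AcrossGranularityInteractiveAttention(nn.Module):
    """Hypothetical sketch: detection queries for different text
    granularities (word / line / paragraph / page) exchange structural
    information via self-attention over their concatenation."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, queries_per_granularity):
        # queries_per_granularity: list of (B, N_g, dim) tensors,
        # one entry per granularity level.
        lengths = [q.shape[1] for q in queries_per_granularity]
        x = torch.cat(queries_per_granularity, dim=1)  # (B, sum N_g, dim)
        out, _ = self.attn(x, x, x)                    # queries interact across levels
        x = self.norm(x + out)                         # residual connection + norm
        return list(torch.split(x, lengths, dim=1))    # back to per-granularity sets

# Example: four granularities with different (made-up) query counts.
module = AcrossGranularityInteractiveAttention()
queries = [torch.randn(2, n, 256) for n in (100, 50, 20, 1)]  # word/line/para/page
outputs = module(queries)

In this reading, the design choice is that all granularity levels share one attention space, so a page query can condition on the line and word queries it contains, and vice versa, which is one plausible mechanism for the mutually beneficial detection behavior the abstract describes.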

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-wan24i,
  title     = {Towards Unified Multi-granularity Text Detection with Interactive Attention},
  author    = {Wan, Xingyu and Zhang, Chengquan and Lyu, Pengyuan and Fan, Sen and Ni, Zihan and Yao, Kun and Ding, Errui and Wang, Jingdong},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {50012--50025},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/wan24i/wan24i.pdf},
  url       = {https://proceedings.mlr.press/v235/wan24i.html}
}
APA
Wan, X., Zhang, C., Lyu, P., Fan, S., Ni, Z., Yao, K., Ding, E., & Wang, J. (2024). Towards Unified Multi-granularity Text Detection with Interactive Attention. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:50012-50025. Available from https://proceedings.mlr.press/v235/wan24i.html.
