Deep Edge-Aware Filters

Li Xu, Jimmy Ren, Qiong Yan, Renjie Liao, Jiaya Jia
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1669-1678, 2015.

Abstract

There are many edge-aware filters varying in their construction forms and filtering properties. It seems impossible to uniformly represent and accelerate them in a single framework. We made the attempt to learn a big and important family of edge-aware operators from data. Our method is based on a deep convolutional neural network with a gradient domain training procedure, which gives rise to a powerful tool to approximate various filters without knowing the original models and implementation details. The only difference among these operators in our system becomes merely the learned parameters. Our system enables fast approximation for complex edge-aware filters and achieves up to 200x acceleration, regardless of their originally very different implementation. Fast speed can also be achieved when creating new effects using spatially varying filter or filter combination, bearing out the effectiveness of our deep edge-aware filters.
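The gradient-domain training procedure mentioned in the abstract can be sketched as follows: instead of regressing the filtered image directly, the network is trained to predict the *gradients* of the filtered image, which emphasizes edges during learning. This is a minimal illustrative sketch only; the helper names (`grad`, `make_training_pair`) and the stand-in `box_blur` filter are assumptions for demonstration and do not reproduce the paper's actual network or filters.

```python
import numpy as np

def grad(img):
    # Forward-difference image gradients, zero at the right/bottom border.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

def make_training_pair(img, edge_aware_filter):
    """Build one (input, target) pair for gradient-domain training:
    the network sees the input image but regresses the gradients of
    the filtered image rather than the filtered image itself."""
    filtered = edge_aware_filter(img)
    tgt_gx, tgt_gy = grad(filtered)
    return img, (tgt_gx, tgt_gy)

def box_blur(img, k=3):
    # Toy stand-in for an expensive edge-aware operator.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

img = np.random.rand(16, 16)
x, (gx, gy) = make_training_pair(img, box_blur)
```

At test time the paper's system would reconstruct the filtered image from the predicted gradients (e.g. by a Poisson-style integration step), which is omitted here for brevity.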

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-xub15,
  title     = {Deep Edge-Aware Filters},
  author    = {Xu, Li and Ren, Jimmy and Yan, Qiong and Liao, Renjie and Jia, Jiaya},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1669--1678},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/xub15.pdf},
  url       = {https://proceedings.mlr.press/v37/xub15.html},
  abstract  = {There are many edge-aware filters varying in their construction forms and filtering properties. It seems impossible to uniformly represent and accelerate them in a single framework. We made the attempt to learn a big and important family of edge-aware operators from data. Our method is based on a deep convolutional neural network with a gradient domain training procedure, which gives rise to a powerful tool to approximate various filters without knowing the original models and implementation details. The only difference among these operators in our system becomes merely the learned parameters. Our system enables fast approximation for complex edge-aware filters and achieves up to 200x acceleration, regardless of their originally very different implementation. Fast speed can also be achieved when creating new effects using spatially varying filter or filter combination, bearing out the effectiveness of our deep edge-aware filters.}
}
Endnote
%0 Conference Paper
%T Deep Edge-Aware Filters
%A Li Xu
%A Jimmy Ren
%A Qiong Yan
%A Renjie Liao
%A Jiaya Jia
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-xub15
%I PMLR
%P 1669--1678
%U https://proceedings.mlr.press/v37/xub15.html
%V 37
%X There are many edge-aware filters varying in their construction forms and filtering properties. It seems impossible to uniformly represent and accelerate them in a single framework. We made the attempt to learn a big and important family of edge-aware operators from data. Our method is based on a deep convolutional neural network with a gradient domain training procedure, which gives rise to a powerful tool to approximate various filters without knowing the original models and implementation details. The only difference among these operators in our system becomes merely the learned parameters. Our system enables fast approximation for complex edge-aware filters and achieves up to 200x acceleration, regardless of their originally very different implementation. Fast speed can also be achieved when creating new effects using spatially varying filter or filter combination, bearing out the effectiveness of our deep edge-aware filters.
RIS
TY  - CPAPER
TI  - Deep Edge-Aware Filters
AU  - Li Xu
AU  - Jimmy Ren
AU  - Qiong Yan
AU  - Renjie Liao
AU  - Jiaya Jia
BT  - Proceedings of the 32nd International Conference on Machine Learning
DA  - 2015/06/01
ED  - Francis Bach
ED  - David Blei
ID  - pmlr-v37-xub15
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 37
SP  - 1669
EP  - 1678
L1  - http://proceedings.mlr.press/v37/xub15.pdf
UR  - https://proceedings.mlr.press/v37/xub15.html
AB  - There are many edge-aware filters varying in their construction forms and filtering properties. It seems impossible to uniformly represent and accelerate them in a single framework. We made the attempt to learn a big and important family of edge-aware operators from data. Our method is based on a deep convolutional neural network with a gradient domain training procedure, which gives rise to a powerful tool to approximate various filters without knowing the original models and implementation details. The only difference among these operators in our system becomes merely the learned parameters. Our system enables fast approximation for complex edge-aware filters and achieves up to 200x acceleration, regardless of their originally very different implementation. Fast speed can also be achieved when creating new effects using spatially varying filter or filter combination, bearing out the effectiveness of our deep edge-aware filters.
ER  -
APA
Xu, L., Ren, J., Yan, Q., Liao, R. & Jia, J. (2015). Deep Edge-Aware Filters. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1669-1678. Available from https://proceedings.mlr.press/v37/xub15.html.