Recursive Sketches for Modular Deep Learning

Badih Ghazi, Rina Panigrahy, Joshua Wang
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2211-2220, 2019.

Abstract

We present a mechanism to compute a sketch (succinct summary) of how a complex modular deep network processes its inputs. The sketch summarizes essential information about the inputs and outputs of the network and can be used to quickly identify key components and summary statistics of the inputs. Furthermore, the sketch is recursive and can be unrolled to identify sub-components of these components and so forth, capturing a potentially complicated DAG structure. These sketches erase gracefully; even if we erase a fraction of the sketch at random, the remainder still retains the “high-weight” information present in the original sketch. The sketches can also be organized in a repository to implicitly form a “knowledge graph”; it is possible to quickly retrieve sketches in the repository that are related to a sketch of interest; arranged in this fashion, the sketches can also be used to learn emerging concepts by looking for new clusters in sketch space. Finally, in the scenario where we want to learn a ground truth deep network, we show that augmenting input/output pairs with these sketches can theoretically make it easier to do so.
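
To make the mechanism concrete, below is a minimal illustrative toy in Python/NumPy. It is not the paper's actual construction: here each module's sketch is simply a random projection of its own output concatenated with the sketches of its sub-modules, so sketches compose recursively along the network's DAG, and erasing a random fraction of coordinates still leaves the module's own output approximately decodable. All names and parameter choices (sketch_dim, make_proj, module_sketch, the 25% erasure rate) are assumptions made for illustration only.

import numpy as np

rng = np.random.default_rng(0)
sketch_dim = 256  # dimension of every sketch vector (hypothetical choice)

def make_proj(in_dim, out_dim=sketch_dim):
    # Random Gaussian projection, scaled so vector norms are roughly preserved.
    return rng.normal(scale=1.0 / np.sqrt(out_dim), size=(out_dim, in_dim))

def module_sketch(output_vec, child_sketches, proj):
    # Recursive rule: a module's sketch summarizes its own output together
    # with the sketches of the sub-modules that fed into it.
    return proj @ np.concatenate([output_vec] + child_sketches)

# Two leaf modules feeding one parent module (a tiny modular "network").
leaf_proj = make_proj(16)
s1 = module_sketch(rng.normal(size=16), [], leaf_proj)
s2 = module_sketch(rng.normal(size=16), [], leaf_proj)

parent_out = rng.normal(size=16)
parent_proj = make_proj(16 + 2 * sketch_dim)
s_parent = module_sketch(parent_out, [s1, s2], parent_proj)

# Graceful erasure: zero out 25% of the coordinates at random and check that
# the parent's own output is still approximately recoverable by a crude
# linear decoding (transpose of the relevant projection block).
erased = s_parent.copy()
erased[rng.random(sketch_dim) < 0.25] = 0.0
recovered = parent_proj[:, :16].T @ erased
print(np.corrcoef(recovered, parent_out)[0, 1])  # typically well above 0.5

Because the parent's sketch contains the children's sketches (through the projection), one could in the same spirit unroll it to recover information about sub-modules; the paper's construction additionally handles deduplication, attribute recovery, and theoretical guarantees that this toy does not attempt.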

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-ghazi19a,
  title     = {Recursive Sketches for Modular Deep Learning},
  author    = {Ghazi, Badih and Panigrahy, Rina and Wang, Joshua},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {2211--2220},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/ghazi19a/ghazi19a.pdf},
  url       = {https://proceedings.mlr.press/v97/ghazi19a.html},
  abstract  = {We present a mechanism to compute a sketch (succinct summary) of how a complex modular deep network processes its inputs. The sketch summarizes essential information about the inputs and outputs of the network and can be used to quickly identify key components and summary statistics of the inputs. Furthermore, the sketch is recursive and can be unrolled to identify sub-components of these components and so forth, capturing a potentially complicated DAG structure. These sketches erase gracefully; even if we erase a fraction of the sketch at random, the remainder still retains the “high-weight” information present in the original sketch. The sketches can also be organized in a repository to implicitly form a “knowledge graph”; it is possible to quickly retrieve sketches in the repository that are related to a sketch of interest; arranged in this fashion, the sketches can also be used to learn emerging concepts by looking for new clusters in sketch space. Finally, in the scenario where we want to learn a ground truth deep network, we show that augmenting input/output pairs with these sketches can theoretically make it easier to do so.}
}
Endnote
%0 Conference Paper
%T Recursive Sketches for Modular Deep Learning
%A Badih Ghazi
%A Rina Panigrahy
%A Joshua Wang
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-ghazi19a
%I PMLR
%P 2211--2220
%U https://proceedings.mlr.press/v97/ghazi19a.html
%V 97
%X We present a mechanism to compute a sketch (succinct summary) of how a complex modular deep network processes its inputs. The sketch summarizes essential information about the inputs and outputs of the network and can be used to quickly identify key components and summary statistics of the inputs. Furthermore, the sketch is recursive and can be unrolled to identify sub-components of these components and so forth, capturing a potentially complicated DAG structure. These sketches erase gracefully; even if we erase a fraction of the sketch at random, the remainder still retains the “high-weight” information present in the original sketch. The sketches can also be organized in a repository to implicitly form a “knowledge graph”; it is possible to quickly retrieve sketches in the repository that are related to a sketch of interest; arranged in this fashion, the sketches can also be used to learn emerging concepts by looking for new clusters in sketch space. Finally, in the scenario where we want to learn a ground truth deep network, we show that augmenting input/output pairs with these sketches can theoretically make it easier to do so.
APA
Ghazi, B., Panigrahy, R. & Wang, J. (2019). Recursive Sketches for Modular Deep Learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:2211-2220. Available from https://proceedings.mlr.press/v97/ghazi19a.html.
