Bayesian Program Learning by Decompiling Amortized Knowledge

Alessandro B. Palmarini, Christopher G. Lucas, Siddharth N
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:39042-39055, 2024.

Abstract

DreamCoder is an inductive program synthesis system that, whilst solving problems, learns to simplify search in an iterative wake-sleep procedure. The cost of search is amortized by training a neural search policy, reducing search breadth and effectively "compiling" useful information to compose program solutions across tasks. Additionally, a library of program components is learnt to compress and express discovered solutions in fewer components, reducing search depth. We present a novel approach for library learning that directly leverages the neural search policy, effectively "decompiling" its amortized knowledge to extract relevant program components. This provides stronger amortized inference: the amortized knowledge learnt to reduce search breadth is now also used to reduce search depth. We integrate our approach with DreamCoder and demonstrate faster domain proficiency with improved generalization on a range of domains, particularly when fewer example solutions are available.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-palmarini24a,
  title     = {{B}ayesian Program Learning by Decompiling Amortized Knowledge},
  author    = {Palmarini, Alessandro B. and Lucas, Christopher G. and N, Siddharth},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {39042--39055},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/palmarini24a/palmarini24a.pdf},
  url       = {https://proceedings.mlr.press/v235/palmarini24a.html},
  abstract  = {DreamCoder is an inductive program synthesis system that, whilst solving problems, learns to simplify search in an iterative wake-sleep procedure. The cost of search is amortized by training a neural search policy, reducing search breadth and effectively "compiling" useful information to compose program solutions across tasks. Additionally, a library of program components is learnt to compress and express discovered solutions in fewer components, reducing search depth. We present a novel approach for library learning that directly leverages the neural search policy, effectively "decompiling" its amortized knowledge to extract relevant program components. This provides stronger amortized inference: the amortized knowledge learnt to reduce search breadth is now also used to reduce search depth. We integrate our approach with DreamCoder and demonstrate faster domain proficiency with improved generalization on a range of domains, particularly when fewer example solutions are available.}
}
Endnote
%0 Conference Paper
%T Bayesian Program Learning by Decompiling Amortized Knowledge
%A Alessandro B. Palmarini
%A Christopher G. Lucas
%A Siddharth N
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-palmarini24a
%I PMLR
%P 39042--39055
%U https://proceedings.mlr.press/v235/palmarini24a.html
%V 235
%X DreamCoder is an inductive program synthesis system that, whilst solving problems, learns to simplify search in an iterative wake-sleep procedure. The cost of search is amortized by training a neural search policy, reducing search breadth and effectively "compiling" useful information to compose program solutions across tasks. Additionally, a library of program components is learnt to compress and express discovered solutions in fewer components, reducing search depth. We present a novel approach for library learning that directly leverages the neural search policy, effectively "decompiling" its amortized knowledge to extract relevant program components. This provides stronger amortized inference: the amortized knowledge learnt to reduce search breadth is now also used to reduce search depth. We integrate our approach with DreamCoder and demonstrate faster domain proficiency with improved generalization on a range of domains, particularly when fewer example solutions are available.
APA
Palmarini, A.B., Lucas, C.G. & N, S. (2024). Bayesian Program Learning by Decompiling Amortized Knowledge. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:39042-39055. Available from https://proceedings.mlr.press/v235/palmarini24a.html.