Improving Molecular Design by Stochastic Iterative Target Augmentation

Kevin Yang, Wengong Jin, Kyle Swanson, Regina Barzilay, Tommi Jaakkola
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10716-10726, 2020.

Abstract

Generative models in molecular design tend to be richly parameterized, data-hungry neural models, as they must create complex structured objects as outputs. Estimating such models from data may be challenging due to the lack of sufficient training data. In this paper, we propose a surprisingly effective self-training approach for iteratively creating additional molecular targets. We first pre-train the generative model together with a simple property predictor. The property predictor is then used as a likelihood model for filtering candidate structures from the generative model. Additional targets are iteratively produced and used in the course of stochastic EM iterations to maximize the log-likelihood that the candidate structures are accepted. A simple rejection (re-weighting) sampler suffices to draw posterior samples since the generative model is already reasonable after pre-training. We demonstrate significant gains over strong baselines for both unconditional and conditional molecular design. In particular, our approach outperforms the previous state-of-the-art in conditional molecular design by over 10% in absolute gain. Finally, we show that our approach is useful in other domains as well, such as program synthesis.
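The augmentation loop described in the abstract can be sketched in a few lines. This is a toy illustration, not the authors' implementation: `generate` and `property_score` are hypothetical stand-ins for the pre-trained generative model and property predictor, and the re-estimation step of the stochastic EM iteration is elided.

```python
import random

def generate(rng):
    # Hypothetical generator: stands in for a pre-trained molecular
    # generative model; emits a candidate as a float for illustration.
    return rng.random()

def property_score(candidate):
    # Hypothetical predictor: probability the candidate satisfies the
    # target property (here just the candidate value itself).
    return candidate

def augment_targets(dataset, rng, n_candidates=100, threshold=0.8):
    """One augmentation round: draw candidates from the generator and keep
    only those the property predictor accepts (simple rejection filtering)."""
    accepted = [c for c in (generate(rng) for _ in range(n_candidates))
                if property_score(c) >= threshold]
    return dataset + accepted

def iterative_target_augmentation(rounds=3, seed=0):
    """Stochastic-EM-style loop: filter new targets each round; in the real
    method the generator would then be retrained on the enlarged target set,
    so later rounds sample from an improved proposal distribution."""
    rng = random.Random(seed)
    dataset = []
    for _ in range(rounds):
        dataset = augment_targets(dataset, rng)
        # (Generator re-estimation on `dataset` would happen here.)
    return dataset

targets = iterative_target_augmentation()
```

Because the pre-trained generator is already a reasonable proposal distribution, this plain rejection step suffices to draw approximate posterior samples without a more elaborate sampler.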

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-yang20e,
  title = {Improving Molecular Design by Stochastic Iterative Target Augmentation},
  author = {Yang, Kevin and Jin, Wengong and Swanson, Kyle and Barzilay, Regina and Jaakkola, Tommi},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages = {10716--10726},
  year = {2020},
  editor = {III, Hal Daumé and Singh, Aarti},
  volume = {119},
  series = {Proceedings of Machine Learning Research},
  month = {13--18 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v119/yang20e/yang20e.pdf},
  url = {https://proceedings.mlr.press/v119/yang20e.html},
  abstract = {Generative models in molecular design tend to be richly parameterized, data-hungry neural models, as they must create complex structured objects as outputs. Estimating such models from data may be challenging due to the lack of sufficient training data. In this paper, we propose a surprisingly effective self-training approach for iteratively creating additional molecular targets. We first pre-train the generative model together with a simple property predictor. The property predictor is then used as a likelihood model for filtering candidate structures from the generative model. Additional targets are iteratively produced and used in the course of stochastic EM iterations to maximize the log-likelihood that the candidate structures are accepted. A simple rejection (re-weighting) sampler suffices to draw posterior samples since the generative model is already reasonable after pre-training. We demonstrate significant gains over strong baselines for both unconditional and conditional molecular design. In particular, our approach outperforms the previous state-of-the-art in conditional molecular design by over 10% in absolute gain. Finally, we show that our approach is useful in other domains as well, such as program synthesis.}
}
Endnote
%0 Conference Paper
%T Improving Molecular Design by Stochastic Iterative Target Augmentation
%A Kevin Yang
%A Wengong Jin
%A Kyle Swanson
%A Regina Barzilay
%A Tommi Jaakkola
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-yang20e
%I PMLR
%P 10716--10726
%U https://proceedings.mlr.press/v119/yang20e.html
%V 119
%X Generative models in molecular design tend to be richly parameterized, data-hungry neural models, as they must create complex structured objects as outputs. Estimating such models from data may be challenging due to the lack of sufficient training data. In this paper, we propose a surprisingly effective self-training approach for iteratively creating additional molecular targets. We first pre-train the generative model together with a simple property predictor. The property predictor is then used as a likelihood model for filtering candidate structures from the generative model. Additional targets are iteratively produced and used in the course of stochastic EM iterations to maximize the log-likelihood that the candidate structures are accepted. A simple rejection (re-weighting) sampler suffices to draw posterior samples since the generative model is already reasonable after pre-training. We demonstrate significant gains over strong baselines for both unconditional and conditional molecular design. In particular, our approach outperforms the previous state-of-the-art in conditional molecular design by over 10% in absolute gain. Finally, we show that our approach is useful in other domains as well, such as program synthesis.
APA
Yang, K., Jin, W., Swanson, K., Barzilay, R. & Jaakkola, T. (2020). Improving Molecular Design by Stochastic Iterative Target Augmentation. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10716-10726. Available from https://proceedings.mlr.press/v119/yang20e.html.