Enzyme Activity Prediction of Sequence Variants on Novel Substrates using Improved Substrate Encodings and Convolutional Pooling

Zhiqing Xu, Jinghao Wu, Yun S. Song, Radhakrishnan Mahadevan
Proceedings of the 16th Machine Learning in Computational Biology meeting, PMLR 165:78-87, 2022.

Abstract

Protein engineering is currently being revolutionized by deep learning applications, especially through natural language processing (NLP) techniques. It has been shown that state-of-the-art self-supervised language models trained on entire protein databases capture hidden contextual and structural information in amino acid sequences and are capable of improving sequence-to-function predictions. Yet, recent studies have reported that current compound-protein modeling approaches perform poorly at learning interactions between enzymes and substrates of interest within one protein family. We attribute this to low-grade substrate encoding methods and to overcompressed sequence representations received by downstream predictive models. In this study, we propose a new substrate encoding based on Extended Connectivity Fingerprints (ECFPs) and a convolutional pooling of the sequence embeddings. Through testing on an activity profiling dataset of the haloalkanoate dehalogenase superfamily that measures activities of 218 phosphatases against 168 substrates, we show substantial improvements in the predictive performance of compound-protein interaction modeling. We also test the workflow on three other datasets, from the halogenase, kinase, and aminotransferase families, and show that our pipeline achieves good performance on these datasets as well. We further demonstrate the utility of this downstream model architecture by showing that it achieves good performance with six different protein embeddings: ESM-1b, TAPE, ProtBert, ProtAlbert, ProtT5, and ProtXLNet. This study provides a new workflow for activity prediction on novel substrates that can be used to engineer new enzymes for sustainability applications.
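The core idea is to pair a graph-derived fingerprint of the substrate with a convolutional pooling over the full per-residue embedding matrix of the enzyme, rather than a single averaged sequence vector. The sketch below illustrates that combination under stated assumptions: it uses RDKit Morgan fingerprints as the ECFP encoding and a small PyTorch module whose names (ecfp_encode, ConvPoolingPredictor) and hyperparameters (radius 2, 1024 bits, 256 filters, ESM-1b-sized 1280-dimensional embeddings) are illustrative choices, not the authors' implementation.

# Illustrative sketch only; names and hyperparameters are assumptions, not the paper's code.
import numpy as np
import torch
import torch.nn as nn
from rdkit import Chem
from rdkit.Chem import AllChem


def ecfp_encode(smiles: str, radius: int = 2, n_bits: int = 1024) -> np.ndarray:
    """Encode a substrate SMILES string as an Extended Connectivity Fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(fp, dtype=np.float32)


class ConvPoolingPredictor(nn.Module):
    """Convolutional pooling of per-residue embeddings combined with an ECFP substrate encoding."""

    def __init__(self, emb_dim: int = 1280, n_bits: int = 1024, n_filters: int = 256):
        super().__init__()
        # The 1D convolution scans the full per-residue embedding matrix instead of
        # relying on a single mean-pooled (overcompressed) sequence vector.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.head = nn.Sequential(
            nn.Linear(n_filters + n_bits, 256),
            nn.ReLU(),
            nn.Linear(256, 1),  # predicted activity (or a logit for active/inactive)
        )

    def forward(self, residue_emb: torch.Tensor, ecfp: torch.Tensor) -> torch.Tensor:
        # residue_emb: [batch, seq_len, emb_dim], e.g. per-residue protein LM embeddings
        # ecfp:        [batch, n_bits] substrate fingerprints
        x = torch.relu(self.conv(residue_emb.transpose(1, 2)))  # [batch, n_filters, seq_len]
        x = torch.amax(x, dim=2)                                # max pooling over residues
        return self.head(torch.cat([x, ecfp], dim=1))


# Toy usage with random embeddings standing in for protein language model output.
model = ConvPoolingPredictor()
emb = torch.randn(2, 350, 1280)                                 # 2 sequences, 350 residues each
fps = torch.tensor(np.stack([ecfp_encode("OC(=O)COP(=O)(O)O"),  # example substrate SMILES
                             ecfp_encode("OCC1OC(O)C(O)C(O)C1O")]))
print(model(emb, fps).shape)                                    # torch.Size([2, 1])

In practice, the random embeddings would be replaced by per-residue representations extracted from one of the listed protein language models, and the scalar output would be trained against measured activity values (or thresholded activity labels) with a standard regression or classification loss.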

Cite this Paper


BibTeX
@InProceedings{pmlr-v165-xu22a,
  title     = {Enzyme Activity Prediction of Sequence Variants on Novel Substrates using Improved Substrate Encodings and Convolutional Pooling},
  author    = {Xu, Zhiqing and Wu, Jinghao and Song, Yun S. and Mahadevan, Radhakrishnan},
  booktitle = {Proceedings of the 16th Machine Learning in Computational Biology meeting},
  pages     = {78--87},
  year      = {2022},
  editor    = {Knowles, David A. and Mostafavi, Sara and Lee, Su-In},
  volume    = {165},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--23 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v165/xu22a/xu22a.pdf},
  url       = {https://proceedings.mlr.press/v165/xu22a.html}
}
Endnote
%0 Conference Paper
%T Enzyme Activity Prediction of Sequence Variants on Novel Substrates using Improved Substrate Encodings and Convolutional Pooling
%A Zhiqing Xu
%A Jinghao Wu
%A Yun S. Song
%A Radhakrishnan Mahadevan
%B Proceedings of the 16th Machine Learning in Computational Biology meeting
%C Proceedings of Machine Learning Research
%D 2022
%E David A. Knowles
%E Sara Mostafavi
%E Su-In Lee
%F pmlr-v165-xu22a
%I PMLR
%P 78--87
%U https://proceedings.mlr.press/v165/xu22a.html
%V 165
APA
Xu, Z., Wu, J., Song, Y. S., & Mahadevan, R. (2022). Enzyme Activity Prediction of Sequence Variants on Novel Substrates using Improved Substrate Encodings and Convolutional Pooling. Proceedings of the 16th Machine Learning in Computational Biology meeting, in Proceedings of Machine Learning Research 165:78-87. Available from https://proceedings.mlr.press/v165/xu22a.html.
