Multimodal Pre-Training Model for Sequence-based Prediction of Protein-Protein Interaction

Yang Xue, Zijing Liu, Xiaomin Fang, Fan Wang
Proceedings of the 16th Machine Learning in Computational Biology meeting, PMLR 165:34-46, 2022.

Abstract

Protein-protein interactions (PPIs) are essential to many biological processes in which two or more proteins physically bind together to carry out their functions. Modeling PPIs is useful for many biomedical applications, such as vaccine design, antibody therapeutics, and peptide drug discovery. Pre-training a protein model to learn effective representations is critical for PPI prediction. Most pre-training models for PPIs are sequence-based, naively applying language models from natural language processing to amino acid sequences. More advanced works use structure-aware pre-training, taking advantage of the contact maps of known protein structures. However, neither sequences nor contact maps can fully characterize the structures and functions of proteins, both of which are closely related to the PPI problem. Motivated by this insight, we propose a multimodal protein pre-training model with three modalities: sequence, structure, and function (S2F). Notably, instead of using contact maps to learn amino acid-level rigid structures, we encode structural features with the topology complex of the point cloud of heavy atoms. This allows our model to learn structural information about not only the backbones but also the side chains. Moreover, our model incorporates knowledge from the functional descriptions of proteins extracted from the literature or from manual annotations. Our experiments show that S2F learns protein embeddings that achieve good performance on a variety of PPI tasks, including cross-species PPI, antibody-antigen affinity prediction, antibody neutralization prediction for SARS-CoV-2, and mutation-driven binding affinity change prediction.
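The core idea above is that a protein's embedding is assembled from three modality-specific encoders. The following toy sketch illustrates that fusion pattern only; it is not the authors' S2F architecture, and every encoder here (hashed sequence features, a pairwise-distance histogram standing in for topological features of the heavy-atom point cloud, and a bag-of-words over the functional annotation) is a hypothetical placeholder.

```python
import numpy as np

EMB_DIM = 8  # per-modality embedding size (arbitrary for this sketch)

def embed_sequence(seq: str) -> np.ndarray:
    """Toy sequence encoder: count residues into hashed buckets, normalize."""
    out = np.zeros(EMB_DIM)
    for c in seq:
        out[ord(c) % EMB_DIM] += 1.0
    return out / max(len(seq), 1)

def embed_structure(coords: np.ndarray) -> np.ndarray:
    """Toy structure encoder over a heavy-atom point cloud: a histogram of
    pairwise distances, a crude stand-in for topological features."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    pairs = d[np.triu_indices(len(coords), k=1)]
    hist, _ = np.histogram(pairs, bins=EMB_DIM, range=(0.0, 20.0))
    return hist / max(hist.sum(), 1)

def embed_function(text: str) -> np.ndarray:
    """Toy function encoder: bag-of-words hashed into EMB_DIM buckets."""
    out = np.zeros(EMB_DIM)
    for w in text.lower().split():
        out[hash(w) % EMB_DIM] += 1.0
    return out / max(out.sum(), 1)

def protein_embedding(seq: str, coords: np.ndarray, annotation: str) -> np.ndarray:
    """Fuse the three modality embeddings into one protein vector."""
    return np.concatenate([embed_sequence(seq),
                           embed_structure(coords),
                           embed_function(annotation)])

rng = np.random.default_rng(0)
emb = protein_embedding("MKTAYIAKQR",
                        rng.normal(scale=5.0, size=(30, 3)),  # fake atom coords
                        "kinase activity; ATP binding")
print(emb.shape)  # (24,): three EMB_DIM blocks concatenated
```

A downstream PPI model would then consume a pair of such vectors (one per protein) to predict interaction or binding affinity; in the real system each encoder is a learned, pre-trained network rather than these hand-rolled features.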

Cite this Paper


BibTeX
@InProceedings{pmlr-v165-xue22a,
  title     = {Multimodal Pre-Training Model for Sequence-based Prediction of Protein-Protein Interaction},
  author    = {Xue, Yang and Liu, Zijing and Fang, Xiaomin and Wang, Fan},
  booktitle = {Proceedings of the 16th Machine Learning in Computational Biology meeting},
  pages     = {34--46},
  year      = {2022},
  editor    = {Knowles, David A. and Mostafavi, Sara and Lee, Su-In},
  volume    = {165},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--23 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v165/xue22a/xue22a.pdf},
  url       = {https://proceedings.mlr.press/v165/xue22a.html},
  abstract  = {Protein-protein interactions (PPIs) are essential to many biological processes in which two or more proteins physically bind together to carry out their functions. Modeling PPIs is useful for many biomedical applications, such as vaccine design, antibody therapeutics, and peptide drug discovery. Pre-training a protein model to learn effective representations is critical for PPI prediction. Most pre-training models for PPIs are sequence-based, naively applying language models from natural language processing to amino acid sequences. More advanced works use structure-aware pre-training, taking advantage of the contact maps of known protein structures. However, neither sequences nor contact maps can fully characterize the structures and functions of proteins, both of which are closely related to the PPI problem. Motivated by this insight, we propose a multimodal protein pre-training model with three modalities: sequence, structure, and function (S2F). Notably, instead of using contact maps to learn amino acid-level rigid structures, we encode structural features with the topology complex of the point cloud of heavy atoms. This allows our model to learn structural information about not only the backbones but also the side chains. Moreover, our model incorporates knowledge from the functional descriptions of proteins extracted from the literature or from manual annotations. Our experiments show that S2F learns protein embeddings that achieve good performance on a variety of PPI tasks, including cross-species PPI, antibody-antigen affinity prediction, antibody neutralization prediction for SARS-CoV-2, and mutation-driven binding affinity change prediction.}
}
Endnote
%0 Conference Paper
%T Multimodal Pre-Training Model for Sequence-based Prediction of Protein-Protein Interaction
%A Yang Xue
%A Zijing Liu
%A Xiaomin Fang
%A Fan Wang
%B Proceedings of the 16th Machine Learning in Computational Biology meeting
%C Proceedings of Machine Learning Research
%D 2022
%E David A. Knowles
%E Sara Mostafavi
%E Su-In Lee
%F pmlr-v165-xue22a
%I PMLR
%P 34--46
%U https://proceedings.mlr.press/v165/xue22a.html
%V 165
%X Protein-protein interactions (PPIs) are essential to many biological processes in which two or more proteins physically bind together to carry out their functions. Modeling PPIs is useful for many biomedical applications, such as vaccine design, antibody therapeutics, and peptide drug discovery. Pre-training a protein model to learn effective representations is critical for PPI prediction. Most pre-training models for PPIs are sequence-based, naively applying language models from natural language processing to amino acid sequences. More advanced works use structure-aware pre-training, taking advantage of the contact maps of known protein structures. However, neither sequences nor contact maps can fully characterize the structures and functions of proteins, both of which are closely related to the PPI problem. Motivated by this insight, we propose a multimodal protein pre-training model with three modalities: sequence, structure, and function (S2F). Notably, instead of using contact maps to learn amino acid-level rigid structures, we encode structural features with the topology complex of the point cloud of heavy atoms. This allows our model to learn structural information about not only the backbones but also the side chains. Moreover, our model incorporates knowledge from the functional descriptions of proteins extracted from the literature or from manual annotations. Our experiments show that S2F learns protein embeddings that achieve good performance on a variety of PPI tasks, including cross-species PPI, antibody-antigen affinity prediction, antibody neutralization prediction for SARS-CoV-2, and mutation-driven binding affinity change prediction.
APA
Xue, Y., Liu, Z., Fang, X. & Wang, F. (2022). Multimodal Pre-Training Model for Sequence-based Prediction of Protein-Protein Interaction. Proceedings of the 16th Machine Learning in Computational Biology meeting, in Proceedings of Machine Learning Research 165:34-46. Available from https://proceedings.mlr.press/v165/xue22a.html.
