Huiwen Long, Yongquan Jiang, Yan Yang, Kuanping Gong
MolContraCLIP: Structurally similar molecule retrieval algorithm based on graph neural network and CLIP model
Journal of Molecular Graphics & Modelling, Volume 142, Article 109172 (published 2025-09-13)
DOI: 10.1016/j.jmgm.2025.109172
https://www.sciencedirect.com/science/article/pii/S1093326325002323
Citations: 0
Abstract
Molecular similarity assessment is pivotal in drug discovery and materials science, yet conventional methods often fail to integrate complementary 2D topological and 3D geometric information effectively. Inspired by the cross-modal alignment capability of Contrastive Language-Image Pretraining (CLIP; Radford et al., 2021), this study proposes a novel graph neural network (GNN) framework that unifies 2D and 3D molecular representations through a CLIP-inspired contrastive learning strategy. Our dual-channel architecture employs a Graph Isomorphism Network (GIN) for 2D topology encoding and a Graph Attention Network (GAT) for 3D spatial feature extraction. These modality-specific embeddings are aligned in a shared latent space via the InfoNCE loss, emulating CLIP's paradigm to maximize mutual information between 2D and 3D molecular structures. Extensive experiments on the QM9 dataset demonstrate that our model significantly outperforms traditional fingerprint-based methods and pure GNN baselines in molecular similarity assessment. Ablation studies further validate the critical role of cross-modal contrastive learning in bridging structural information. The framework exhibits robust generalizability across diverse molecular types, offering a pioneering adaptation of CLIP's principles to non-visual domains. This work advances multimodal representation learning in cheminformatics and opens avenues for future applications in molecular-text retrieval and drug design.
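The CLIP-style alignment described in the abstract can be illustrated with a minimal sketch of the symmetric InfoNCE loss between paired 2D and 3D embeddings. This is an assumption-laden illustration, not the paper's implementation: the GIN and GAT encoders are replaced by stand-in NumPy arrays, and the temperature value is a common default, not one reported by the authors.

```python
import numpy as np

def info_nce_loss(z2d, z3d, temperature=0.07):
    """Symmetric InfoNCE loss between paired 2D and 3D molecule embeddings.

    z2d, z3d: (N, D) arrays where row i of each matrix corresponds to the
    same molecule (a positive pair); all other rows serve as negatives.
    Note: the encoders and temperature here are illustrative stand-ins,
    not the settings used in the paper.
    """
    # L2-normalize so dot products become cosine similarities
    z2d = z2d / np.linalg.norm(z2d, axis=1, keepdims=True)
    z3d = z3d / np.linalg.norm(z3d, axis=1, keepdims=True)
    logits = z2d @ z3d.T / temperature        # (N, N) similarity matrix
    n = len(z2d)
    idx = np.arange(n)                        # matching pairs sit on the diagonal

    def xent(l):
        # row-wise softmax cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the 2D->3D and 3D->2D directions, as in CLIP's loss
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# perfectly aligned modalities should score a lower loss than random pairings
loss_aligned = info_nce_loss(z, z)
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))
```

In a training loop, `z2d` and `z3d` would come from the GIN and GAT branches respectively, and minimizing this loss pulls the two views of each molecule together in the shared latent space while pushing apart views of different molecules.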
Journal overview:
The Journal of Molecular Graphics and Modelling is devoted to the publication of papers on the uses of computers in theoretical investigations of molecular structure, function, interaction, and design. The scope of the journal includes all aspects of molecular modeling and computational chemistry, including, for instance, the study of molecular shape and properties, molecular simulations, protein and polymer engineering, drug design, materials design, structure-activity and structure-property relationships, database mining, and compound library design.
As a primary research journal, JMGM seeks to bring new knowledge to the attention of our readers. As such, submissions to the journal must not only report results but also draw conclusions and explore the implications of the work presented. Authors are strongly encouraged to bear this in mind when preparing manuscripts. Routine applications of standard modelling approaches that provide only very limited new scientific insight will not meet our criteria for publication. Reproducibility of reported calculations is an important issue. Wherever possible, we urge authors to enhance their papers with Supplementary Data: for example, machine-readable versions of molecular datasets in QSAR studies, or topology and parameter files in the development of new force-field parameters.