MolContraCLIP: Structurally similar molecule retrieval algorithm based on graph neural network and CLIP model

Impact Factor: 3.0 | JCR Q2, Biochemical Research Methods | CAS Tier 4 (Biology)
Huiwen Long, Yongquan Jiang, Yan Yang, Kuanping Gong
Journal: Journal of Molecular Graphics & Modelling, Vol. 142, Article 109172
DOI: 10.1016/j.jmgm.2025.109172
Published: 2025-09-13
URL: https://www.sciencedirect.com/science/article/pii/S1093326325002323
Citations: 0

Abstract

Molecular similarity assessment is pivotal in drug discovery and materials science, yet conventional methods often fail to integrate complementary 2D topological and 3D geometric information effectively. Inspired by the cross-modal alignment capability of Contrastive Language-Image Pretraining (CLIP; Radford et al., 2021), this study proposes a novel graph neural network (GNN) framework that unifies 2D and 3D molecular representations through a CLIP-inspired contrastive learning strategy. Our dual-channel architecture employs a Graph Isomorphism Network (GIN) for 2D topology encoding and a Graph Attention Network (GAT) for 3D spatial feature extraction. These modality-specific embeddings are aligned in a shared latent space via the InfoNCE loss, emulating CLIP’s paradigm to maximize mutual information between 2D and 3D molecular structures. Extensive experiments on the QM9 dataset demonstrate that our model significantly outperforms traditional fingerprint-based methods and pure GNN baselines in molecular similarity assessment. Ablation studies further validate the critical role of cross-modal contrastive learning in bridging structural information. The framework exhibits robust generalizability across diverse molecular types, offering a pioneering adaptation of CLIP’s principles to non-visual domains. This work advances multimodal representation learning in cheminformatics and opens avenues for future applications in molecular-text retrieval and drug design.
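The abstract describes aligning GIN-produced 2D embeddings and GAT-produced 3D embeddings with a CLIP-style symmetric InfoNCE loss. The paper's code is not reproduced here; the following is a minimal NumPy sketch of that objective under common assumptions (row i of each batch is the same molecule, cosine similarity as the score, a temperature of 0.07 as in CLIP) — function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def info_nce(z2d, z3d, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired 2D/3D embeddings.

    z2d, z3d: (batch, dim) arrays; row i of each encodes the same molecule,
    so the positives lie on the diagonal of the similarity matrix.
    """
    # L2-normalize rows so dot products are cosine similarities
    z2d = z2d / np.linalg.norm(z2d, axis=1, keepdims=True)
    z3d = z3d / np.linalg.norm(z3d, axis=1, keepdims=True)
    logits = z2d @ z3d.T / temperature      # (batch, batch) similarity matrix
    idx = np.arange(len(z2d))               # index of each row's positive

    def xent(l):
        # cross-entropy of the diagonal entries under a row-wise softmax
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the 2D->3D and 3D->2D directions, as in CLIP
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each molecule's 2D and 3D embeddings together while pushing apart embeddings of different molecules in the batch, which is what lets cosine similarity in the shared space serve as the retrieval score.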


Source journal: Journal of Molecular Graphics & Modelling (Biology — Computer Science: Interdisciplinary Applications)
CiteScore: 5.50
Self-citation rate: 6.90%
Articles per year: 216
Review time: 35 days
Journal description: The Journal of Molecular Graphics and Modelling is devoted to the publication of papers on the uses of computers in theoretical investigations of molecular structure, function, interaction, and design. The scope of the journal includes all aspects of molecular modeling and computational chemistry, including, for instance, the study of molecular shape and properties, molecular simulations, protein and polymer engineering, drug design, materials design, structure-activity and structure-property relationships, database mining, and compound library design.

As a primary research journal, JMGM seeks to bring new knowledge to the attention of our readers. As such, submissions to the journal need to not only report results but must also draw conclusions and explore the implications of the work presented. Authors are strongly encouraged to bear this in mind when preparing manuscripts. Routine applications of standard modelling approaches, providing only very limited new scientific insight, will not meet our criteria for publication.

Reproducibility of reported calculations is an important issue. Wherever possible, we urge authors to enhance their papers with Supplementary Data: for example, machine-readable versions of molecular datasets in QSAR studies, or topology and force-field parameter files in the development of new force-field parameters. Routine applications of existing methods that do not lead to genuinely new insight will not be considered.