Instruction-Based Molecular Graph Generation with Unified Text-Graph Diffusion Model

Yuran Xiang, Haiteng Zhao, Chang Ma, Zhi-Hong Deng

arXiv:2408.09896 · arXiv - QuanBio - Biomolecules · 2024-08-19
Abstract
Recent advancements in computational chemistry have increasingly focused on synthesizing molecules based on textual instructions. Integrating graph generation with these instructions is complex, which leads most current methods to represent molecules as sequences processed by pre-trained large language models. In response to this challenge, we propose a novel framework, named UTGDiff (Unified Text-Graph Diffusion Model), which utilizes language models for discrete graph diffusion to generate molecular graphs from instructions. UTGDiff features a unified text-graph transformer as the denoising network, derived from pre-trained language models and minimally modified to process graph data through attention bias. Our experimental results demonstrate that UTGDiff consistently outperforms sequence-based baselines on instruction-based molecule generation and editing tasks, achieving superior performance with fewer parameters given an equivalent pretraining corpus. Our code is available at https://github.com/ran1812/UTGDiff.
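
The abstract's central architectural claim is that a pre-trained language model can serve as a graph denoiser after only a minimal modification: an attention bias that injects bond information into self-attention over a joint text-plus-atom token sequence. As a rough illustration of that general idea only (not the authors' implementation; the function name, tensor shapes, and single-layer setup below are all assumptions), a learned per-edge-type bias can be added to the attention logits before the softmax:

```python
# Hypothetical sketch of attention-bias conditioning for a unified
# text-graph transformer layer. All names and shapes are illustrative
# assumptions, not taken from the UTGDiff codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F


def biased_self_attention(x, edge_types, bias_embed, num_heads):
    """x:          (seq_len, d_model) embeddings of a joint
                   [text tokens; atom tokens] sequence.
    edge_types:    (seq_len, seq_len) long tensor; 0 = no edge (or a
                   text pair), k > 0 = bond type k between two atoms.
    bias_embed:    nn.Embedding(num_edge_types, num_heads) mapping each
                   edge type to one scalar bias per attention head.
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Q/K/V projections are omitted for brevity; x is reused directly.
    q = x.view(seq_len, num_heads, d_head).transpose(0, 1)  # (h, s, d)
    k, v = q, q
    logits = q @ k.transpose(-2, -1) / d_head**0.5          # (h, s, s)
    # The only graph-specific change: add a per-edge-type bias to the
    # attention logits so bonded atom pairs attend differently.
    logits = logits + bias_embed(edge_types).permute(2, 0, 1)
    attn = F.softmax(logits, dim=-1)
    return (attn @ v).transpose(0, 1).reshape(seq_len, d_model)


# Usage sketch: 16 tokens, 4 heads, one pretend single bond.
x = torch.randn(16, 64)
edge_types = torch.zeros(16, 16, dtype=torch.long)
edge_types[3, 4] = edge_types[4, 3] = 1
bias = nn.Embedding(8, 4)  # 8 edge types, 4 heads
y = biased_self_attention(x, edge_types, bias, num_heads=4)
```

Text tokens and atom tokens share one sequence here, and only bonded atom pairs receive a nonzero bias; everything else is a vanilla transformer layer, which is what would allow pre-trained language-model weights to be reused with minimal modification, as the abstract describes.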