GraphBPE: Molecular Graphs Meet Byte-Pair Encoding
Yuchen Shen, Barnabás Póczos
arXiv - QuanBio - Biomolecules · DOI: arxiv-2407.19039 · Published 2024-07-26
Citations: 0
Abstract
With increasing attention to molecular machine learning, many innovations have been made in designing better models and proposing more comprehensive benchmarks. Less studied, however, is the data preprocessing schedule for molecular graphs, where a different view of the molecular graph could boost a model's performance. Inspired by the Byte-Pair Encoding (BPE) algorithm, a subword tokenization method widely adopted in Natural Language Processing, we propose GraphBPE, which tokenizes a molecular graph into substructures and acts as a preprocessing schedule independent of the model architecture. Our experiments on 3 graph-level classification and 3 graph-level regression datasets show that data preprocessing can boost the performance of models for molecular graphs; GraphBPE is effective for small classification datasets and performs on par with other tokenization methods across different model architectures.
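To make the BPE analogy concrete, the following is a minimal sketch of how a BPE-style merge procedure could be adapted from token sequences to labeled graphs: count the most frequent pair of adjacent node labels across a corpus of graphs, contract one matching edge per graph into a supernode, and repeat. This is an illustrative assumption of the general idea, not the paper's actual GraphBPE algorithm; all function names and the graph encoding (a dict of node labels plus an edge list) are hypothetical.

```python
from collections import Counter

def most_frequent_pair(graphs):
    # Count unordered label pairs over all edges in the corpus.
    counts = Counter()
    for nodes, edges in graphs:
        for u, v in edges:
            counts[tuple(sorted((nodes[u], nodes[v])))] += 1
    return counts.most_common(1)[0][0] if counts else None

def merge_pair(nodes, edges, pair):
    # Contract one edge whose endpoint labels match `pair`, BPE-style:
    # the two endpoints collapse into a single supernode with a merged label.
    for u, v in edges:
        if tuple(sorted((nodes[u], nodes[v]))) == pair:
            keep, drop = u, v
            new_nodes = {n: l for n, l in nodes.items() if n != drop}
            new_nodes[keep] = "(" + "+".join(pair) + ")"
            new_edges = set()
            for a, b in edges:
                a = keep if a == drop else a
                b = keep if b == drop else b
                if a != b:  # drop the contracted self-loop
                    new_edges.add(tuple(sorted((a, b))))
            return new_nodes, new_edges
    return nodes, edges

def graph_bpe(graphs, num_merges):
    # Repeatedly merge the globally most frequent adjacent label pair,
    # analogous to BPE's iterative most-frequent-bigram merges on text.
    for _ in range(num_merges):
        pair = most_frequent_pair(graphs)
        if pair is None:
            break
        graphs = [merge_pair(n, e, pair) for n, e in graphs]
    return graphs
```

As a usage example, a toy three-atom chain C-C-O collapses to two nodes after one merge and to a single supernode after two, mirroring how BPE grows subword units from characters.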