A foundation model for learning genetic associations from brain imaging phenotypes
Diego Machado Reyes, Myson Burch, Laxmi Parida, Aritra Bose
Bioinformatics Advances 5(1): vbaf196, 2025. doi: 10.1093/bioadv/vbaf196
Motivation: Due to the intricate etiology of neurological disorders, finding interpretable associations between multiomics features is challenging with standard approaches.
Results: We propose COMICAL, a contrastive learning approach that uses multiomics data to generate associations between genetic markers and brain imaging-derived phenotypes. COMICAL jointly learns omics representations using transformer-based encoders with custom tokenizers. Our modality-agnostic approach uniquely identifies many-to-many associations via self-supervised learning schemes and cross-modal attention encoders. COMICAL discovered several significant associations between genetic markers and imaging-derived phenotypes across a variety of neurological disorders in the UK Biobank, and its learned representations enable prediction of diseases and of unseen clinical outcomes.
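For illustration only, the sketch below shows a generic CLIP-style cross-modal contrastive setup of the kind the abstract describes: two small transformer encoders over tokenized modalities (here, hypothetical SNP and imaging-derived phenotype tokens) trained with a symmetric InfoNCE objective so that matched pairs align in a shared embedding space. This is an assumption-based minimal example, not COMICAL's actual architecture or API; all class and function names are hypothetical, and details such as the custom tokenizers and cross-modal attention encoders are omitted.

```python
# Hypothetical sketch of a cross-modal contrastive objective (not COMICAL's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Small transformer encoder over one tokenized modality (e.g. SNPs or IDPs)."""
    def __init__(self, vocab_size, d_model=128, n_heads=4, n_layers=2, proj_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.proj = nn.Linear(d_model, proj_dim)

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))   # (batch, seq_len, d_model)
        pooled = h.mean(dim=1)                 # mean-pool token representations
        return F.normalize(self.proj(pooled), dim=-1)

def contrastive_loss(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE: matched (marker, phenotype) pairs are positives."""
    logits = z_a @ z_b.t() / temperature       # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random token IDs standing in for SNP / IDP tokens.
snp_enc = ModalityEncoder(vocab_size=1000)
idp_enc = ModalityEncoder(vocab_size=500)
snps = torch.randint(0, 1000, (8, 32))
idps = torch.randint(0, 500, (8, 16))
loss = contrastive_loss(snp_enc(snps), idp_enc(idps))
loss.backward()
```

After such training, the pairwise similarity matrix between marker and phenotype embeddings can be read as candidate many-to-many associations, and the pooled representations can be reused for downstream prediction tasks.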
Availability and implementation: The source code of COMICAL, along with pretrained weights enabling transfer learning, is available at https://github.com/IBM/comical.