{"title":"Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation using Rein to Fine-tune Vision Foundation Models","authors":"Pengzhou Cai, Xueyuan Zhang, Ze Zhao","doi":"arxiv-2409.11752","DOIUrl":null,"url":null,"abstract":"In recent years, significant progress has been made in tumor segmentation\nwithin the field of digital pathology. However, variations in organs, tissue\npreparation methods, and image acquisition processes can lead to domain\ndiscrepancies among digital pathology images. To address this problem, in this\npaper, we use Rein, a fine-tuning method, to parametrically and efficiently\nfine-tune various vision foundation models (VFMs) for MICCAI 2024 Cross-Organ\nand Cross-Scanner Adenocarcinoma Segmentation (COSAS2024). The core of Rein\nconsists of a set of learnable tokens, which are directly linked to instances,\nimproving functionality at the instance level in each layer. In the data\nenvironment of the COSAS2024 Challenge, extensive experiments demonstrate that\nRein fine-tuned the VFMs to achieve satisfactory results. Specifically, we used\nRein to fine-tune ConvNeXt and DINOv2. Our team used the former to achieve\nscores of 0.7719 and 0.7557 on the preliminary test phase and final test phase\nin task1, respectively, while the latter achieved scores of 0.8848 and 0.8192\non the preliminary test phase and final test phase in task2. Code is available\nat GitHub.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11752","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In recent years, significant progress has been made in tumor segmentation in digital pathology. However, variations in organs, tissue preparation methods, and image acquisition processes can introduce domain discrepancies among digital pathology images. To address this problem, we use Rein, a parameter-efficient fine-tuning method, to fine-tune various vision foundation models (VFMs) for the MICCAI 2024 Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation challenge (COSAS2024). The core of Rein is a set of learnable tokens that are directly linked to instances, improving functionality at the instance level within each layer. Extensive experiments on the COSAS2024 Challenge data demonstrate that fine-tuning VFMs with Rein achieves satisfactory results. Specifically, we used Rein to fine-tune ConvNeXt and DINOv2: the former achieved scores of 0.7719 and 0.7557 on the preliminary and final test phases of Task 1, respectively, while the latter achieved scores of 0.8848 and 0.8192 on the preliminary and final test phases of Task 2. Code is available on GitHub.
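
To illustrate the learnable-token mechanism the abstract attributes to Rein, here is a minimal PyTorch sketch. It is a simplification under stated assumptions, not the authors' implementation: the class name ReinLayer, the token count, and the single linear projection are illustrative choices, and the actual Rein method links tokens to instances with additional machinery.

```python
import torch
import torch.nn as nn

class ReinLayer(nn.Module):
    """Minimal sketch of a Rein-style adapter (not the authors' code).

    A small set of learnable tokens is attached to a frozen VFM layer;
    cross-attention between patch features and the tokens produces a
    residual refinement, so only the tokens and a projection train.
    """

    def __init__(self, dim: int, num_tokens: int = 100):
        super().__init__()
        # Learnable tokens for this layer; count and init scale are assumptions.
        self.tokens = nn.Parameter(torch.randn(num_tokens, dim) * 0.02)
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_patches, dim) features from a frozen VFM layer.
        # Scaled similarity between each patch feature and each token.
        attn = torch.softmax(
            feats @ self.tokens.T / feats.shape[-1] ** 0.5, dim=-1
        )
        # Token-derived residual, projected back into the feature space.
        return feats + self.proj(attn @ self.tokens)

# Example: refine dummy ViT-B-sized features (shapes are illustrative).
layer = ReinLayer(dim=768)
x = torch.randn(2, 196, 768)   # (batch, patches, dim)
print(layer(x).shape)          # torch.Size([2, 196, 768])
```

The design point is parameter efficiency: the VFM backbone stays frozen, and only the per-layer tokens and projection are updated during fine-tuning.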