Title: Enhanced dual contrast representation learning with cell separation and merging for breast cancer diagnosis
Journal: Computer Vision and Image Understanding (JCR Q2, Computer Science, Artificial Intelligence; impact factor 4.3)
DOI: 10.1016/j.cviu.2024.104065
Publication date: 2024-07-02 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S1077314224001462
Citations: 0
Abstract
Breast cancer remains a prevalent malignancy impacting a substantial number of individuals globally. In recent years, deep learning methods have increasingly been applied to breast cancer diagnosis. Nevertheless, this integration faces challenges, including limited data availability, class imbalance, and the absence of fine-grained labels owing to patient privacy protection and experience-dependent detection. To address these issues, we propose an effective framework based on dual contrast representation learning with a cell separation and merging strategy. The proposed algorithm comprises three main components: the cell separation and merging part, the dual contrast representation learning part, and the multi-category classification part. The cell separation and merging part takes an unpaired set of histopathological images as input and produces two sets of separated image layers by exploring latent semantic information with SAM. These separated image layers are then used to generate two new unpaired histopathological images via a cell separation and merging approach based on a linear superimposition model, with an inpainting network employed to refine image details. This alleviates the class imbalance problem and enlarges the data size for sufficient CNN training. The second part introduces a dual contrast representation learning framework for these generated images, with one branch designed for positive samples (tumor cells) and the other for negative samples (normal cells). The contrast learning network minimizes the distance between two generated positive samples while maximizing the similarity of intra-class images to enhance feature representation. Leveraging the feature representation acquired from the dual contrast representation learning part, a pre-trained classifier is further fine-tuned to predict breast cancer categories.
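The abstract does not give the exact contrastive objective, so the following is only a minimal NumPy sketch of a standard NT-Xent-style loss that a branch like the ones described might use: the embeddings of the two generated views of a sample (e.g. the two merged images produced from the same cell layer) form a positive pair, and all other samples in the batch act as negatives. All function names and the temperature value are assumptions, not details from the paper.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a (n, d) and b (n, d)."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def nt_xent_pair_loss(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss for a batch of paired views.

    z1[i] and z2[i] are embeddings of two generated views of sample i;
    for row i the positive is column i, all other columns are negatives.
    """
    sim = cosine_sim(z1, z2) / temperature            # (n, n) similarity matrix
    # Log-softmax over each row, then pick out the diagonal (positive) entries.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Aligned pairs should score a lower loss than mismatched ones, which is the behavior that pulls the two generated positive samples together in embedding space.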
Extensive quantitative and qualitative experimental results validate the superiority of the proposed method over other state-of-the-art methods on the BreaKHis dataset in terms of four measurement metrics.
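The abstract does not name the four metrics; accuracy, precision, recall, and F1-score are the usual choices for BreaKHis classification, so here is a minimal NumPy evaluation sketch under that assumption (macro-averaged over classes; the function name and convention are illustrative, not taken from the paper):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F1.

    F1 is computed here from the macro precision and recall; averaging
    per-class F1 scores is an equally common alternative convention.
    """
    labels = np.unique(np.concatenate([y_true, y_pred]))
    prec, rec = [], []
    for c in labels:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec.append(tp / (tp + fp) if tp + fp else 0.0)
        rec.append(tp / (tp + fn) if tp + fn else 0.0)
    precision, recall = np.mean(prec), np.mean(rec)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": np.mean(y_true == y_pred),
            "precision": precision, "recall": recall, "f1": f1}
```

For example, with true labels [0, 0, 1, 1] and predictions [0, 1, 1, 1], accuracy is 0.75 and macro precision is 5/6.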
About the journal:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems