Towards Automated Semantic Segmentation in Mammography Images for Enhanced Clinical Applications
Cesar A. Sierra-Franco, Jan Hurtado, Victor de A. Thomaz, Leonardo C. da Cruz, Santiago V. Silva, Greis Francy M. Silva-Calpa, Alberto Raposo
Journal of Imaging Informatics in Medicine, published 2024-12-11. DOI: 10.1007/s10278-024-01364-8 (https://doi.org/10.1007/s10278-024-01364-8)
Abstract
Mammography images are widely used to detect non-palpable breast lesions or nodules, aiding in cancer prevention and enabling timely intervention when necessary. To support medical analysis, computer-aided detection systems can automate the segmentation of landmark structures, which is helpful in locating abnormalities and evaluating image acquisition adequacy. This paper presents a deep learning-based framework for segmenting the nipple, the pectoral muscle, the fibroglandular tissue, and the fatty tissue in standard-view mammography images. We introduce what is, to the best of our knowledge, the largest dataset dedicated to the segmentation of key anatomical structures in mammography images, specifically designed for training deep learning models for this task. Through comprehensive experiments, we evaluated various deep learning model architectures and training configurations, demonstrating robust segmentation performance across diverse and challenging cases. These results underscore the framework's potential for clinical integration. In our experiments, four semantic segmentation architectures were compared, all showing suitability for the target problem, thereby offering flexibility in model selection. Beyond segmentation, we introduce a suite of applications derived from this framework to assist in clinical assessments. These include automating tasks such as multi-view lesion registration and anatomical position estimation, evaluating image acquisition quality, measuring breast density, and enhancing visualization of breast tissues, thus addressing critical needs in breast cancer screening and diagnosis.
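As one illustration of how the segmentation output can feed the downstream applications listed above, the sketch below computes a breast density estimate from a per-pixel label map. It is a minimal sketch under stated assumptions: the class indices, the mask layout, and the density definition (fibroglandular area divided by total breast area, i.e., fibroglandular plus fatty tissue) are hypothetical choices for illustration, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical class indices for the predicted label map; the labelling
# scheme used by the paper's framework may differ.
BACKGROUND, NIPPLE, PECTORAL_MUSCLE, FIBROGLANDULAR, FATTY = range(5)


def breast_density(label_map: np.ndarray) -> float:
    """Estimate percent breast density from a 2D per-pixel label map.

    Assumes density = fibroglandular area / (fibroglandular + fatty area),
    a common definition; the paper may use a different measure.
    """
    fibro = np.count_nonzero(label_map == FIBROGLANDULAR)
    fatty = np.count_nonzero(label_map == FATTY)
    breast_area = fibro + fatty
    if breast_area == 0:
        raise ValueError("label map contains no breast tissue pixels")
    return 100.0 * fibro / breast_area


if __name__ == "__main__":
    # Toy usage with a synthetic mask; in practice label_map would be the
    # per-pixel argmax of the segmentation network's class scores.
    mask = np.full((256, 256), BACKGROUND, dtype=np.uint8)
    mask[50:200, 40:200] = FATTY            # synthetic fatty region
    mask[80:150, 60:150] = FIBROGLANDULAR   # synthetic dense region
    print(f"Estimated density: {breast_density(mask):.1f}%")
```

The same label map could drive the other applications mentioned in the abstract, for example using the pectoral muscle and nipple classes as landmarks for acquisition-quality checks or multi-view registration, though those steps are not shown here.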