{"title":"Segmentation of Breast Ultrasound Images using Densely Connected Deep Convolutional Neural Network and Attention Gates","authors":"Niranjan Thirusangu, M. Almekkawy","doi":"10.1109/LAUS53676.2021.9639178","DOIUrl":null,"url":null,"abstract":"Ultrasound imagining modality is a popular complementary technique for diagnosing breast cancer. A standardized reporting process called Breast imaging reporting and data system (BI-RADS) is used to categorize breast cancer. The BI-RADS scale uses several features of lesions based on the ultrasound images, which makes the quality of the diagnosis highly dependent on the experience of the radiologist. Radiologists use Computer-Aided Diagnosis (CAD) system to help in the detection of lesions. The accuracy of a CAD system depends greatly on the segmentation stage of the system. To increase the reliability of the diagnosis, we propose a solution based on a densely connected deep convolutional neural network and attention gates, called Attention U-DenseNet. Attention U-DenseNet is an architecture to do semantic segmentation of the lesions from Breast Ultrasound (BUS) images based on the U-Net, DenseNet, and attention gates. Convolutional layers of the U-Net are made densely connected using dense blocks to help to learn complex patterns of the BUS image which is usually noisy and contaminated with speckles. This architecture (U-DenseNet) produced an F-score of 0.63 compared to the U-Net model with an F-score of 0.49. Furthermore, to localize the segmentation by learning salient features, attention gates are added to the U-DenseNet architecture (Attention U-DenseNet). Attention U-DenseNet performed even better compared to U-DenseNet, by improving the F-score to 0.75. 
Finally, a per-image regularised binary cross-entropy is employed to penalize false negatives more than false positives, since the region of interest is small.","PeriodicalId":156639,"journal":{"name":"2021 IEEE UFFC Latin America Ultrasonics Symposium (LAUS)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE UFFC Latin America Ultrasonics Symposium (LAUS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/LAUS53676.2021.9639178","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
The ultrasound imaging modality is a popular complementary technique for diagnosing breast cancer. A standardized reporting process called the Breast Imaging Reporting and Data System (BI-RADS) is used to categorize breast cancer. The BI-RADS scale uses several features of lesions derived from the ultrasound images, which makes the quality of the diagnosis highly dependent on the experience of the radiologist. Radiologists use Computer-Aided Diagnosis (CAD) systems to help detect lesions, and the accuracy of a CAD system depends greatly on its segmentation stage. To increase the reliability of the diagnosis, we propose a solution based on a densely connected deep convolutional neural network and attention gates, called Attention U-DenseNet. Attention U-DenseNet is an architecture for semantic segmentation of lesions in Breast Ultrasound (BUS) images, based on U-Net, DenseNet, and attention gates. The convolutional layers of the U-Net are made densely connected using dense blocks, which helps the network learn complex patterns in BUS images, which are usually noisy and contaminated with speckle. This architecture (U-DenseNet) produced an F-score of 0.63, compared with 0.49 for the U-Net model. Furthermore, to localize the segmentation by learning salient features, attention gates are added to the U-DenseNet architecture (Attention U-DenseNet). Attention U-DenseNet performed even better than U-DenseNet, improving the F-score to 0.75. Finally, a per-image regularised binary cross-entropy is employed to penalize false negatives more than false positives, since the region of interest is small.
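As a rough illustration of the dense connectivity the abstract describes: each layer inside a dense block receives the concatenation of the block's input and all earlier layers' outputs, so the number of input channels grows linearly with depth. The sketch below only does the channel bookkeeping; the initial channel count, growth rate, and number of layers are hypothetical, since the abstract does not specify them.

```python
def dense_block_channels(in_ch, growth_rate, n_layers):
    """Input channel count seen by each layer of a dense block.

    Every layer emits `growth_rate` new feature channels, and each
    subsequent layer consumes the concatenation of the block input
    plus all previous layers' outputs (DenseNet-style connectivity).
    Returns n_layers + 1 values: the input width of layers 0..n_layers-1,
    followed by the block's final output width.
    """
    return [in_ch + i * growth_rate for i in range(n_layers + 1)]

# Hypothetical configuration: 64 input channels, growth rate 32, 4 layers.
print(dense_block_channels(64, 32, 4))  # [64, 96, 128, 160, 192]
```

The linear growth is why dense blocks reuse low-level features cheaply: later layers see the raw input alongside every intermediate representation, which is useful for noisy, speckle-contaminated BUS images.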
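The asymmetric loss in the final sentence can be sketched as a binary cross-entropy whose foreground (lesion) term is up-weighted, so that missing a small lesion costs more than over-segmenting background. This is a minimal NumPy sketch under assumptions: the paper's exact per-image regularisation and weighting are not given in the abstract, and `fn_weight` here is a hypothetical parameter.

```python
import numpy as np

def weighted_bce(y_true, y_pred, fn_weight=3.0, eps=1e-7):
    """Per-image binary cross-entropy with an up-weighted foreground term.

    y_true: binary ground-truth mask (1 = lesion pixel).
    y_pred: predicted lesion probabilities in (0, 1).
    fn_weight: hypothetical factor by which missed lesion pixels
        (false negatives) are penalized relative to false positives.
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    loss = -(fn_weight * y_true * np.log(y_pred)          # foreground term
             + (1.0 - y_true) * np.log(1.0 - y_pred))     # background term
    return float(loss.mean())

# With fn_weight > 1, under-segmenting the lesion is costlier than
# over-segmenting the background by the same probability margin.
mask = np.array([1.0, 1.0, 0.0, 0.0])
missed = np.array([0.1, 0.1, 0.1, 0.1])   # false negatives on lesion pixels
spurious = np.array([0.9, 0.9, 0.9, 0.9]) # false positives on background
print(weighted_bce(mask, missed) > weighted_bce(mask, spurious))  # True
```

The same effect is available in common frameworks (e.g. a positive-class weight in a weighted cross-entropy loss); the key design point is only that the penalty is asymmetric because the region of interest is small.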