{"title":"基于深度学习的医学图像分割","authors":"S. Navya, P. Nishitha, V. Hema","doi":"10.1109/ASSIC55218.2022.10088359","DOIUrl":null,"url":null,"abstract":"The classification of medical imaging is that specialists and radiologists stick to the end of the disorder. Basic studies based on convolutional cerebrum relationships (CNNs) are used to aid flexibility at the end of the clinic. Three systems are considered to distinguish affected tissues. CNN contextually identifies every single pixel of the image as an a location that is both intriguing and uninteresting. RoI is then used to separate the impacted area. The second method removes pixel position information from image data using scalable and improved techniques (autoencoders). The non-convolutional layer separates geographic information associated with opposing features and also forgets to retrieve important ward information for prominent components of the level. In the third structure, the U-Net thought module receives the relevant ward information. Channel size, read rate, and k-crease section verification were adjusted to break the membrane similarity coefficient (DSC).","PeriodicalId":441406,"journal":{"name":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Medical Image Segmentation Using Deep Learning\",\"authors\":\"S. Navya, P. Nishitha, V. Hema\",\"doi\":\"10.1109/ASSIC55218.2022.10088359\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The classification of medical imaging is that specialists and radiologists stick to the end of the disorder. Basic studies based on convolutional cerebrum relationships (CNNs) are used to aid flexibility at the end of the clinic. Three systems are considered to distinguish affected tissues. 
CNN contextually identifies every single pixel of the image as an a location that is both intriguing and uninteresting. RoI is then used to separate the impacted area. The second method removes pixel position information from image data using scalable and improved techniques (autoencoders). The non-convolutional layer separates geographic information associated with opposing features and also forgets to retrieve important ward information for prominent components of the level. In the third structure, the U-Net thought module receives the relevant ward information. Channel size, read rate, and k-crease section verification were adjusted to break the membrane similarity coefficient (DSC).\",\"PeriodicalId\":441406,\"journal\":{\"name\":\"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)\",\"volume\":\"56 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ASSIC55218.2022.10088359\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Advancements in Smart, Secure and Intelligent Computing 
(ASSIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASSIC55218.2022.10088359","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
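The abstract reports tuning filter size, learning rate, and k-fold cross-validation to improve the Dice similarity coefficient (DSC). A minimal NumPy sketch of how DSC is typically computed for a pair of binary segmentation masks (this is the standard metric definition, not code from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect overlap). `eps` guards against division by zero
    when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: predicted mask has 2 foreground pixels, ground truth has 1,
# and they overlap in 1 pixel -> DSC = 2*1 / (2+1) ≈ 0.667
pred_mask = np.array([[1, 1], [0, 0]])
true_mask = np.array([[1, 0], [0, 0]])
print(round(dice_coefficient(pred_mask, true_mask), 3))  # 0.667
```

In practice the DSC is averaged over the validation folds of the k-fold split, and hyperparameters such as filter size and learning rate are chosen to maximize that average.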