Multi-scale and contrastive learning for pediatric chest radiograph classification tasks

Qian Chen, Zihang Lin, Xudong Li, Jingyuan Zheng, Yan Zhang, Rongrong Ji

Journal: Displays, Volume 87, Article 102951 (JCR Q1, Computer Science, Hardware & Architecture; Impact Factor 3.7)
DOI: 10.1016/j.displa.2024.102951
Published: 2024-12-30 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0141938224003159
Citations: 0
Abstract
Pediatric medical image classification faces enormous challenges due to the delicacy of children's physiology, the subtle manifestations of pathological changes, and the urgent need for accurate and timely diagnosis. This complexity is further exacerbated by high variability in image quality, small sample sizes for rare diseases, and the need for models to generalize well over diverse and often limited datasets. Addressing these challenges is imperative for improving pediatric healthcare outcomes. To this end, this paper proposes a model that combines contrastive learning with multi-scale theory, simulating how a physician's eye zooms in and out of an image when examining a medical scan. First, we zoom in and out on the image, then perform feature extraction and blending with a feature encoder and a scale integration unit in order to learn both the fine texture and the global features of the lesion. At the same time, we write a series of text prompts for the disease categories to be diagnosed and obtain their features through a text encoder. To further fuse the image features, we also introduce a frozen LLM block. Finally, we compute the similarity between text features and image features, the crucial step of contrastive learning, and obtain the final categories. On four public datasets, our proposed model performs excellently and outperforms existing state-of-the-art (SOTA) methods. In addition, our model also generalizes well, particularly on image quality assessment (IQA). With this work, we aim to open new avenues for the use of contrastive learning and multi-scale theory in pediatric medical imaging and to enrich understanding of their potential in this specialized field.
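The final similarity-computation step described in the abstract follows the general pattern of CLIP-style zero-shot classification: each disease category is described by a text prompt, both modalities are embedded, and the class whose text embedding has the highest cosine similarity to the image embedding wins. The paper's actual encoders and fusion modules are not reproduced here; the sketch below uses hypothetical toy embeddings and a standard temperature-scaled softmax purely to illustrate the mechanism.

```python
import math

def normalize(v):
    """Scale a vector to unit length so the dot product equals cosine similarity."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def contrastive_classify(image_embedding, text_embeddings, temperature=0.07):
    """CLIP-style zero-shot classification: cosine similarity between one
    image embedding and one text embedding per disease category, scaled by
    a temperature and converted to class probabilities."""
    img = normalize(image_embedding)
    sims = [sum(a * b for a, b in zip(img, normalize(t))) for t in text_embeddings]
    return softmax([s / temperature for s in sims])

# Toy 4-dimensional embeddings (hypothetical values, for illustration only).
image = [0.9, 0.1, 0.0, 0.2]
texts = [
    [1.0, 0.0, 0.0, 0.1],  # prompt for category 0, e.g. a pathology description
    [0.0, 1.0, 0.2, 0.0],  # prompt for category 1, e.g. a normal radiograph
]
probs = contrastive_classify(image, texts)
predicted = probs.index(max(probs))
```

In a real pipeline the embeddings would come from the trained image and text encoders, and the temperature would be a learned parameter of the contrastive objective rather than a fixed constant.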
Journal introduction:
Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering the fundamentals, intended for display technologists and human factors engineers new to the field, will also occasionally be featured.