Displays — Latest Articles

Low temperature polysilicon pixel circuits for active-matrix digital microfluidic chips
IF 3.7 · Q2 (Engineering & Technology)
Displays Pub Date : 2025-03-25 DOI: 10.1016/j.displa.2025.103048
Shengzhe Jiang, Dongping Wang, Hanbin Ma, Arokia Nathan, Jun Yu
Emerging active-matrix digital microfluidics imposes stringent requirements on driving signals, yet conventional pixel circuits used in flat panel displays struggle to provide the necessary voltage. In this paper, we present three novel pixel circuits for active-matrix digital microfluidic chips: the 3T1C, Inverter, and SRAM designs. All proposed circuits are fabricated with low-temperature polysilicon technology, which is compatible with mainstream display backplane manufacturing processes. The characteristics and positive bias stability of the implemented thin-film transistors have been validated, and a comparative analysis is conducted against existing pixel designs. The circuits' performance was evaluated under varying gate pulse widths and driving voltages to determine the optimal driving strategy. With the appropriate driving voltage, the SRAM structure achieves an output exceeding 20 V. The holding time, long-term operational stability, and illumination stability of the circuits were also evaluated and compared. Experimental results demonstrate that the SRAM structure outperforms the others in both output performance and stability. An active-matrix digital microfluidic chip with a 640 × 280 array was fabricated using the SRAM pixel structure, highlighting its scalability. The simplified circuit structure, coupled with an output voltage exceeding 20 V, addresses the limitations of conventional pixel circuits for digital microfluidic applications. These novel designs offer innovative and reliable driving solutions for LTPS-based active-matrix digital microfluidic systems, further advancing the application of display technology in non-display fields.
Citations: 0
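The abstract does not give the SRAM pixel's circuit details; as a toy illustration of why a cross-coupled inverter pair (the core of an SRAM cell) holds a full-rail voltage once written, here is a hedged sketch — the rail and threshold voltages below are hypothetical, not the paper's LTPS parameters:

```python
def inverter(v_in, v_dd=25.0, v_th=12.5):
    """Idealized inverter: output rails to VDD when input is below threshold."""
    return v_dd if v_in < v_th else 0.0

def sram_latch(v_write, v_dd=25.0, steps=10):
    """Cross-coupled inverter pair: node q is driven to v_write, then released."""
    q = v_write
    qb = inverter(q, v_dd)
    for _ in range(steps):  # let the positive-feedback loop settle after release
        q = inverter(qb, v_dd)
        qb = inverter(q, v_dd)
    return q

print(sram_latch(20.0))  # a written 'high' is regenerated to the full 25.0 V rail
print(sram_latch(0.0))   # a written 'low' stays at 0.0 V
```

The positive feedback is what lets such a pixel hold an output above the written voltage indefinitely, which is consistent with the abstract's point that the SRAM structure sustains an output exceeding 20 V.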
Efficient exploration of hard-to-find function in GUIs: A method and best practice
Displays Pub Date : 2025-03-25 DOI: 10.1016/j.displa.2025.103037
Xinglong Yin, Mengxi Zhang, Tengmei Wang, Huaxiao Liu
With the proliferation of mobile applications (apps), there has been a noticeable trend towards diversification in app functionalities to cater to evolving user needs and preferences. This evolution is evident in ongoing efforts towards app feature recommendation, reflecting a concerted endeavor to enhance user experience and satisfaction. However, the increasing complexity of app functionalities, particularly within the Graphical User Interface (GUI), makes it significantly harder for users to find their desired functions. In an online survey, we found that 85% of participants encounter difficulties in locating desired functionalities within apps, which can lead to frustration and even app abandonment. To tackle this challenge, we propose an approach that leverages GUI screenshots and layout files to analyze app functions. Our approach vectorizes app functions based on user search times and function descriptions, followed by personalized analysis, initial difficulty assessment, and refinement through clustering techniques. Experiments on 49 apps across 8 categories demonstrate the effectiveness of our approach: it achieves an average accuracy of 91.29% in identifying hard-to-find functions and shows significant performance improvements after reducing random data. Feedback from developers further confirms the practical utility of our approach in crafting user-friendly GUIs and minimizing the risk of crucial functions being overlooked.
Citations: 0
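The paper's actual vectorization and clustering pipeline is not reproduced here; as a hedged sketch of the general idea — representing function descriptions as vectors, flagging hard-to-find candidates by search time, and grouping similar ones — with entirely hypothetical descriptions, times, and thresholds:

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    """Bag-of-words vector for a short function description."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical (description, mean search time in seconds) pairs; long search
# times mark a function as a hard-to-find candidate.
functions = [
    ("export chat history to file", 41.0),
    ("change app theme color", 6.5),
    ("export account data archive", 38.0),
]
hard = [(d, t) for d, t in functions if t > 20.0]
# Cluster step, reduced to its simplest form: collect hard-to-find
# candidates whose descriptions are similar to the first one.
v0 = vectorize(hard[0][0])
similar = [d for d, _ in hard if cosine(v0, vectorize(d)) > 0.2]
print(similar)
```

The real method additionally uses GUI screenshots, layout files, and personalized difficulty assessment; this sketch only shows how description vectors and search times can jointly surface a cluster of related hard-to-find functions.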
Machine learning for anterior cruciate ligament and meniscus analysis in knee MRI: A comprehensive review
Displays Pub Date : 2025-03-22 DOI: 10.1016/j.displa.2025.103032
Congjing Yu, Changzhen Qiu, Zhiyong Zhang
Anterior Cruciate Ligament (ACL) and meniscal injuries are prevalent in knee joint problems and are closely associated with osteoarthritis. With the rapid development of machine learning (ML) in knee magnetic resonance imaging (MRI) interpretation, a surge of research on ACL and meniscus analysis has emerged. However, there has been a noticeable absence of comprehensive reviews offering detailed classification, analysis, and comparison of the rich existing methods. To fill this gap, we provide an overview of ML methods applied to ACL and meniscus MRI analysis between 2013 and 2024. Sixty-seven papers covering tasks such as classification, segmentation, localization, and prediction are investigated and classified from the perspective of the ML method. For conventional ML methods, we summarize four kinds of handcrafted MRI features related to the ACL and meniscus, along with corresponding ML models. We categorize the deep learning methods into 11 types according to their network structures for various tasks. Based on this categorization, we further compare the main methods and analyze the critical factors behind existing successful models. Current trends and future directions in this field are also discussed in detail. Serving as a technical reference, this survey hopes to inspire researchers in method selection and design, ultimately advancing ML in clinical applications for the ACL and meniscus.
Citations: 0
Size-Encoding methods for depth information in augmented reality
Displays Pub Date : 2025-03-21 DOI: 10.1016/j.displa.2025.103035
Xiaozhou Zhou, Jiayu Ji, Jing Zhang, Yuqing Zhou
In augmented reality (AR), bias in depth perception is a typical problem faced by users. Size encoding can provide useful depth cues that help AR users establish a connection between the augmented layer and the real world. This study develops size-encoding rules for augmented labelling of indoor spatial objects designed to represent depth layers, based on experimental research on human depth and size perception. We conducted two psychophysical experiments using the HoloLens 2. The depth perception stratification experiment provided a fitted relationship between subjective depth perception and physical depth, along with just-noticeable-difference (JND) values for depth perception; the size encoding experiment determined the JND values for size differences. On the basis of the experimental results, we developed a list of depth information layers and size encoding layers for indoor spaces and proposed an augmented labelling size-encoding method that effectively represents depth information. The method can be applied to augmented information labelling in indoor spaces and has value for extended reality with overlay displays, extending to virtual reality, augmented reality, and mixed reality devices.
Citations: 0
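The paper's fitted JND values are not given in the abstract; as a sketch of how JND values turn a continuous depth range into discrete, encodable layers, assuming a Weber-style JND that grows in proportion to depth (the 10% fraction and the 1–8 m indoor range below are hypothetical, not the study's measurements):

```python
def depth_layers(d0, d_max, weber_fraction):
    """Successive depth levels separated by one JND each, assuming the JND
    grows proportionally with depth (Weber's law)."""
    layers = [d0]
    while layers[-1] * (1 + weber_fraction) <= d_max:
        layers.append(layers[-1] * (1 + weber_fraction))
    return layers

# Hypothetical 10% Weber fraction over a 1-8 m indoor range.
layers = depth_layers(1.0, 8.0, 0.10)
print(len(layers))  # number of mutually discriminable depth layers
```

Each layer in such a list can then be assigned its own label size, which is the role the paper's size-encoding layers play.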
Staggered HDR video reconstruction with a real-world benchmark dataset for night scenes
Displays Pub Date : 2025-03-21 DOI: 10.1016/j.displa.2025.103029
Huanjing Yue, Changan He, Longhan Wang, Biting Yu, Xuanwu Yin, Zhenyu Zhou, Jingyu Yang
Capturing night scenes with full visibility is attractive. Due to the limited dynamic range of camera sensors, we cannot record details in both light-source regions and dark regions. A practical solution is multi-exposure fusion to obtain high dynamic range (HDR) results. However, the lack of a real-world NightHDR dataset hinders the development of deep-learning-based night HDR video reconstruction. To solve this problem, we first construct a real-world NightHDR video dataset, which contains 57 LDR-HDR video pairs captured in night scenes in both raw and sRGB formats, where the LDR frames include short and long exposures. Different from previous alternating-exposure-based or frame-based HDR video reconstruction, we turn to staggered HDR reconstruction, which is more applicable in real scenarios. Correspondingly, we propose an efficient NightHDRNet, which contains a single-exposure enhancement stage (stage I), a two-exposure fusion stage (stage II), and two-stage selective fusion modules. In this way, our network improves the dynamic range and reduces ghosting artifacts. Extensive experiments show that our approach outperforms state-of-the-art methods qualitatively and quantitatively. We will release our dataset and code after the acceptance of this work.
Citations: 0
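NightHDRNet's stage-II fusion is not specified in the abstract; a generic well-exposedness-weighted fusion of a short/long exposure pair — a common multi-exposure baseline, not the paper's learned network — can be sketched as follows (all pixel values and Gaussian parameters are illustrative):

```python
from math import exp

def well_exposedness(p, mid=0.5, sigma=0.2):
    """Gaussian weight: pixels near mid-gray count as well exposed."""
    return exp(-((p - mid) ** 2) / (2 * sigma ** 2))

def fuse(short_exp, long_exp):
    """Per-pixel weighted average of a short/long exposure pair in [0, 1]."""
    out = []
    for s, l in zip(short_exp, long_exp):
        ws, wl = well_exposedness(s), well_exposedness(l)
        out.append((ws * s + wl * l) / (ws + wl))
    return out

# Toy 1-D 'frames': the long exposure clips highlights, the short one
# crushes shadows; fusion keeps each pixel close to its better exposure.
short = [0.02, 0.40, 0.55]
long_ = [0.30, 0.95, 1.00]
fused = fuse(short, long_)
print(fused)
```

In the staggered setting the short and long frames come from different sensor read-outs of the same scene, which is why the paper pairs this kind of fusion with alignment and selective-fusion modules to suppress ghosting.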
A theoretical model for evaluation of non-visual effects of lighting based on human performance: Comprehensive research ideas
Displays Pub Date : 2025-03-18 DOI: 10.1016/j.displa.2025.103038
Xian Zhang, Lin Ma, Jing Chen, Jinchi Fu, Jingluan Wang, Yao Wang, Mengya Zhu, Mingjiu Yu, Dengkai Chen
Non-visual effects (NVE) refer to the influence of light acting through the human intrinsically photosensitive retinal ganglion cells (ipRGCs). These effects encompass various dimensions, including circadian rhythms, mood regulation, vigilance, and work efficiency. Human performance (HP) involves psychological perception, task execution, and physiological effectiveness. To systematically investigate the interplay between NVE and HP, it is essential to establish a comprehensive evaluation framework that can also delineate a scientific and clear technical pathway for subsequent research into quantitative methodologies related to NVE. We compile the current state of research on lighting's NVE, integrating and analyzing the influencing factors associated with these effects alongside the dimensional indices used for evaluating HP, while summarizing existing quantitative approaches to studying non-visual (NV) impacts. Based on these retrospective analyses, and by proposing future research trajectories, we ultimately develop an HP-driven evaluation methodology system for assessing NVE. This framework provides a theoretical foundation for forthcoming studies focused on multi-dimensional evaluation methods concerning NV influences and guides future quantitative investigations in this area.
Citations: 0
Performance enhancement of LED displays through optimized light distribution
Displays Pub Date : 2025-03-18 DOI: 10.1016/j.displa.2025.103039
Zhihong Zhao, Qi Yao, Shenfei Chen, Peiyu Wu, Yuhui Zhang, Wenqian Xu, Bangyong Sun, Dong Wang, Ziyi Wang, Jiawei Feng
The improvement in light-emitting diode (LED) performance has facilitated its widespread application in areas such as urban media screens and electronic displays. However, most commercial LEDs use a circularly symmetric radiation pattern, which leads to issues such as high energy consumption, significant spill light, and privacy concerns in practical applications. This study investigates the impact of LED display light distribution on visual performance and energy consumption. Through software simulations and experimental measurements, we established a quantitative model linking beam angle with the maximum viewing angle and energy consumption of displays. The results show that a display with a 60° beam angle consumes only half the energy of a display with a 120° beam angle, while achieving 89% of the latter's maximum viewing angle. These findings highlight the need to balance visual performance and energy consumption based on specific application scenarios. This research provides a new technical approach for optimizing the visual performance and energy efficiency of LED displays, particularly valuable for reducing high energy consumption and severe light pollution in outdoor LED displays.
Citations: 0
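The paper's fitted beam-angle model is not reproduced in the abstract; a first-order geometric sketch — assuming, purely for illustration, that radiated power scales with the solid angle the beam covers for equal on-axis luminance — shows the direction of the effect, though the measured factor of two reflects panel-level details this toy model omits:

```python
from math import cos, radians, pi

def solid_angle(beam_angle_deg):
    """Solid angle (in steradians) of a cone with the given full beam angle."""
    return 2 * pi * (1 - cos(radians(beam_angle_deg) / 2))

# First-order assumption (not the paper's model): power needed for equal
# on-axis luminance is proportional to the solid angle of the beam.
ratio = solid_angle(60) / solid_angle(120)
print(f"relative power, 60 deg vs 120 deg beam: {ratio:.2f}")
```

Under this crude assumption the narrower beam already needs well under half the power, which is consistent in direction with the paper's measured halving of energy consumption.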
EDRNet: An attention-based model for multi-type tumor and polyp segmentation in medical imaging
Displays Pub Date : 2025-03-18 DOI: 10.1016/j.displa.2025.103031
Syed Wajahat Ali, Adeel Feroz Mirza, Muhammad Usman, Zaiyue Yang
In the pursuit of advancing medical image segmentation, this study introduces a novel neural network, the Efficient Deep Residual Network (EDRNet). Our approach is designed to handle different types of tumors, encompassing MRI brain tumor segmentation, three breast tumor ultrasound segmentation tasks, and colorectal polyp segmentation on the Kvasir dataset. EDRNet incorporates advanced architectural features such as Enhanced Residual Dilated Blocks and a Deep Residual Network (DRNet), alongside an EfficientNet-based encoder, optimizing the extraction and processing of image features. A key innovation in our model is the integration of Spatial Channel Fusion Attention with the DRNet, which combines global and local feature extractors using an attention feature fusion module. Our modified Attention Feature Fusion Module (AFFM) integrates local features, fused features, and global features to produce a rich multi-scale representation that improves segmentation performance in the decoder. Furthermore, we use a transfer-learning-based approach with combined loss functions to train a single model efficiently across the multiple cancer datasets. In the MRI and breast cancer datasets, EDRNet properly segments all types of brain tumors with precise details, whether located on the left or right side of the brain. These design choices significantly enhance the model's ability to differentiate low- and high-level features and boundaries in medical images, which is crucial for accurate segmentation. Quantitative evaluations demonstrate that EDRNet sets new benchmarks on all considered datasets: improvements in Intersection over Union (IoU) of 9% for MRI brain tumor segmentation, a substantial 4% for the breast cancer segmentation datasets, and 1.5% for colorectal polyp segmentation on the Kvasir dataset. These results underline the efficacy of EDRNet in boosting accuracy and sensitivity, confirming its state-of-the-art performance in medical image segmentation.
Citations: 0
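IoU, the metric behind the reported gains, is simple to state; a minimal reference implementation for flat binary masks (the toy masks below are illustrative, not data from the paper):

```python
def iou(mask_a, mask_b):
    """Intersection over Union for two equal-length binary masks."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    union = sum(a | b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
print(iou(pred, truth))  # 2 overlapping pixels / 4 pixels in the union = 0.5
```

A 9% absolute improvement in this ratio, as reported for MRI brain tumors, means substantially more of the predicted region overlaps the ground-truth tumor.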
MSFFNet: Multi-stream feature fusion network for underwater image enhancement
Displays Pub Date : 2025-03-18 DOI: 10.1016/j.displa.2025.103023
Peng Lin, Zihao Fan, Yafei Wang, Xudong Sun, Yuán-Ruì Yáng, Xianping Fu
Deep-learning-based image processing methods have achieved remarkable success in improving the quality of underwater images. These methods usually extract features from different receptive fields through downsampling operations and then enhance underwater images through upsampling operations. However, downsampling and upsampling inevitably disrupt the relations of neighboring pixels in raw underwater images, leading to the loss of image details. Given this, a multi-stream feature fusion network, dubbed MSFFNet, is proposed to enrich details, correct colors, and enhance the contrast of degraded underwater images. In MSFFNet, a carefully constructed multi-stream feature estimation block separately takes original-resolution feature maps and low-resolution feature maps as inputs, preserving the detail information of the original underwater image while extracting high-level features. Besides, a coordinate residual block is designed to emphasize valuable features and suppress noise based on position knowledge, and a local-global feature fusion block selectively fuses the complementary multi-scale features. Finally, extensive comparative experiments on real and synthetic underwater images demonstrate that the proposed MSFFNet achieves superior performance on underwater image enhancement tasks.
Citations: 0
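The detail loss the authors attribute to the down/upsampling round trip is easy to demonstrate; a one-dimensional toy (average pooling followed by nearest-neighbour upsampling — a generic illustration, not MSFFNet itself):

```python
def down_up(signal):
    """2x average-pool followed by nearest-neighbour upsampling."""
    pooled = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return [v for v in pooled for _ in range(2)]

# A sharp one-pixel feature is blurred by the round trip: its neighbours
# absorb half of its value and the peak can never be fully recovered.
original = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
restored = down_up(original)
error = sum(abs(a - b) for a, b in zip(original, restored))
print(restored, error)
```

Keeping a separate original-resolution stream, as MSFFNet's multi-stream feature estimation block does, sidesteps exactly this irreversible loss.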
MS2CAM: Multi-scale self-cross-attention mechanism-based MRI super-resolution
Displays Pub Date : 2025-03-17 DOI: 10.1016/j.displa.2025.103033
Jinbin Hu, Yanding Qin, Hongpeng Wang, Jianda Han
In magnetic resonance imaging (MRI), it is challenging to achieve both real-time imaging and high image quality due to its unique imaging modality. Low resolution (LR) often accompanies real-time MRI, making super-resolution (SR) techniques essential for enhancing image quality in real-time MRI. This paper proposes a Multi-Scale Self-Cross-Attention Mechanism (MS2CAM) for MRI SR tasks, where concrete and abstract features are effectively fused to improve SR performance. Our model demonstrates consistent performance improvements of 1-2% over state-of-the-art methods across various degradation scenarios. Visual results also reveal finer detail restoration, verifying MS2CAM's effectiveness. Extensive experimental results confirm that MS2CAM achieves superior quantitative and visual performance in MRI SR tasks, establishing it as a leading solution in this domain.
Citations: 0
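MS2CAM's exact attention layout is not given in the abstract; the scaled dot-product attention it builds on can be sketched in pure Python — cross-attention arises when the queries come from one feature scale and the keys/values from another (all vectors below are toy values, not MRI features):

```python
from math import exp, sqrt

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of feature vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# One query attends over two key/value pairs; it pulls most weight from
# the key it aligns with, blending the corresponding values.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, k, v)
print(out)
```

Self-attention is the special case where queries, keys, and values all come from the same feature map; a self-cross design alternates or combines the two sources.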