Displays: Latest Articles

IEDHTrans: A hybrid network with interactive encoders and differential hierarchical transformers for multi-phase breast cancer segmentation
IF 3.4, CAS Zone 2, Engineering & Technology
Displays, Pub Date: 2025-09-02, DOI: 10.1016/j.displa.2025.103193
Yuexin Wang , Gesheng Song , Jian Zhang , Fangqing Wang , Haixing Cheng , Yudan Zhao , Peng Zhou , Xu Qiao , Wei Chen
Breast cancer, a prevalent malignancy and a leading cause of mortality in women worldwide, requires precise tumor assessment. Although multi-phase dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) offers high sensitivity for tumor evaluation and treatment monitoring, precise primary tumor segmentation remains challenging, limiting advances in personalized medicine. Existing segmentation methods struggle with multi-sequence DCE-MRI. We therefore propose IEDHTrans, a novel hybrid network that leverages multi-phase DCE-MRI information to enhance breast tumor segmentation. The network comprises an interactive encoders module for accurate multi-phase extraction of breast tumor features, a differential hierarchical transformer module that establishes global long-distance dependencies on multi-resolution feature maps, and a convolutional neural network decoder module for feature upsampling. The method's effectiveness is validated through quantitative and qualitative experiments on the public MAMA-MIA and PLHN datasets and our in-house clinical dataset, where it consistently outperforms other advanced methods, achieving Dice coefficients of 81.22%, 77.85%, and 81.83%, respectively. The source code and in-house clinical dataset are available at https://github.com/WYX-gh/IEDHTrans.
Citations: 0
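The Dice coefficients reported above are a standard overlap measure for segmentation masks. As a minimal illustrative sketch (a toy set-based version, not the paper's implementation), Dice over binary masks can be computed as:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) over binary masks given as sets of pixel coords."""
    inter = len(pred & target)
    return 2.0 * inter / (len(pred) + len(target) + eps)

# Toy 2x2 masks: two of three predicted pixels overlap the ground truth.
pred = {(0, 0), (0, 1), (1, 0)}
gt = {(0, 0), (1, 0), (1, 1)}
score = dice_coefficient(pred, gt)  # 2*2 / (3+3) ≈ 0.667
```

In practice the same formula is applied to flattened tensors rather than coordinate sets; the `eps` term guards against empty masks.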
GEKD: A graph-enhanced knowledge discriminator for universal improvements in medical image synthesis
IF 3.4, CAS Zone 2, Engineering & Technology
Displays, Pub Date: 2025-09-02, DOI: 10.1016/j.displa.2025.103197
Chujie Zhang , Jihong Hu , Yinhao Li , Lanfen Lin , Yen-Wei Chen
Multimodal medical imaging is crucial for comprehensive diagnosis, yet acquiring complete multimodal datasets remains challenging due to economic constraints and technical limitations. Generative Adversarial Networks (GANs) and Diffusion Models (DMs) are currently the two predominant paradigms for medical image synthesis, but they share a critical limitation: their convolutional neural network (CNN) backbones tend to optimize pixel intensities while neglecting anatomical structural integrity. Although attention mechanisms have been introduced to improve these models, existing methods fail to adequately account for relationships between anatomical regions within images and structural correspondences across modalities, resulting in inaccurate or incomplete representation of critical regions. This paper presents the Graph-Enhanced Knowledge Discriminator (GEKD), a plug-and-play contextual prior learning module that explicitly models both intra-image and inter-image structural relationships to guide generators toward anatomical consistency. Inspired by radiology residency training, GEKD simulates how medical experts analyze multimodal images by constructing structural graphs that capture important associations between anatomical regions. Integrating GEKD with multiple state-of-the-art medical image synthesis methods across four datasets shows that it significantly enhances the structural accuracy and clinical relevance of synthesized images, shifting the paradigm from 'where to look' to 'understanding structural relationships'. By modeling both local (intra-image) and global (inter-image) structural dependencies, GEKD directly addresses the fundamental limitation of models that prioritize pixel-level fidelity over structural integrity, providing a broadly applicable solution for diverse medical imaging scenarios.
Citations: 0
Verification and modification of the display design process using fuzzy linguistic patterns
IF 3.4, CAS Zone 2, Engineering & Technology
Displays, Pub Date: 2025-09-01, DOI: 10.1016/j.displa.2025.103194
Atsuo Murata , Toshihisa Doi , Waldemar Karwowski
Evaluating graphical user interfaces (GUIs) without usability testing reduces the time and cost of GUI evaluation. Although theoretical frameworks using fuzzy linguistic patterns for GUI evaluation have been proposed, their empirical validity has not been verified, nor have the specific forms of the membership functions needed to evaluate different GUIs been determined. This study empirically validates the fuzzy-linguistic-pattern GUI evaluation framework. First, we identified the necessary membership functions (six of the ten were nonlinear) using a psychological evaluation to determine the approximate shape of the membership function for each linguistic pattern. We then obtained evaluation scores for GUI prototypes using these fuzzy patterns. Laboratory experiments were conducted to evaluate various GUIs and validate the proposed method, with metrics including task completion time, percentage correct, fixation frequency, fixation duration, and subjective usability ratings. The results demonstrate that the proposed framework, which weights each linguistic pattern's contribution to usability via multiple regression analysis, predicts usability more accurately than the original equal-weight method and may also be applicable to more complicated displays.
Citations: 0
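The key improvement over the equal-weight method is fitting per-pattern weights by multiple regression. A minimal sketch of that idea, assuming just two linguistic-pattern scores and solving the 2x2 normal equations by hand (the data and the no-intercept form are illustrative, not the paper's setup):

```python
def fit_two_weights(x1, x2, y):
    """Ordinary least squares (no intercept) for y ≈ w1*x1 + w2*x2,
    solved via the 2x2 normal equations -- a toy stand-in for weighting
    linguistic-pattern scores by multiple regression."""
    s11 = sum(a * a for a in x1)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s22 = sum(b * b for b in x2)
    t1 = sum(a * c for a, c in zip(x1, y))
    t2 = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    w1 = (t1 * s22 - t2 * s12) / det
    w2 = (t2 * s11 - t1 * s12) / det
    return w1, w2

# Synthetic pattern scores and usability ratings: y = 2*x1 + 0.5*x2 exactly,
# so the regression should recover those weights.
x1, x2 = [1.0, 2.0, 3.0], [2.0, 1.0, 4.0]
y = [2 * a + 0.5 * b for a, b in zip(x1, x2)]
w1, w2 = fit_two_weights(x1, x2, y)  # ≈ (2.0, 0.5)
```

With ten patterns, the same normal-equation solve generalizes to a 10x10 system, typically done with a linear-algebra library rather than by hand.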
Pathological imaging and clinical features of inflammatory breast cancer: Development of diagnostic and prognostic models
IF 3.4, CAS Zone 2, Engineering & Technology
Displays, Pub Date: 2025-08-30, DOI: 10.1016/j.displa.2025.103204
Mengmeng Zhang , Yalan Hu , Duanyang Zhai , Tiantian Zhen , Sihua Liu , Yuxuan Gao , Yawei Shi , Huijuan Shi , Ying Lin
Background: Inflammatory breast cancer (IBC) is a lethal and aggressive subtype of breast cancer, yet current clinical practice lacks precise, objective diagnostic criteria for it. Here, we apply deep learning to integrate digitized whole slide images (WSIs) from tumor biopsies with clinical characteristics of patients with IBC and locally advanced breast cancer (LABC) to develop diagnostic and prognostic models.
Method: Models based solely on pathology signatures, or additionally incorporating clinicopathological characteristics, were developed on a training dataset (IBC, n = 28; LABC, n = 24), validated on a separate validation dataset (IBC, n = 7; LABC, n = 6), and then tested on a prospective testing dataset (IBC, n = 5; LABC, n = 4). Based on the pathological IBC scores (PIBCS) output by the deep learning pathology model and the clinicopathological characteristics, a prognostic model for 3-year progression-free survival (PFS) was constructed using least absolute shrinkage and selection operator (LASSO)-Cox regression. Additionally, 55 cases of non-inflammatory breast cancer (non-IBC) were included to validate the prognostic model.
Results: The model integrating pathology signatures and clinicopathological characteristics demonstrated superior performance, with the area under the receiver operating characteristic curve (AUC) consistently above 0.900, in contrast to the model based solely on pathology signatures. The prognostic model, combining PIBCS, age, and ER/PR status, showed good predictive capability for 3-year PFS, achieving AUCs of 0.858 in the training dataset and 0.841 in the test dataset.
Conclusion: These models have the potential to support unbiased diagnosis of IBC and prediction of breast cancer prognosis in clinical settings.
Citations: 0
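The AUC values above summarize how well model scores separate IBC from LABC cases. As a minimal sketch (the scores below are made up; this is the standard Mann-Whitney formulation, not the paper's code), AUC is the probability that a randomly chosen positive case scores above a randomly chosen negative one:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive outranks a negative
    (Mann-Whitney formulation); ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores: IBC cases should score above LABC cases.
a = auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])  # 8 of 9 pairs ordered correctly ≈ 0.889
```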
Anticipatory vibrotactile cues mitigate motion sickness in car passengers: an implementation study on public roads
IF 3.4, CAS Zone 2, Engineering & Technology
Displays, Pub Date: 2025-08-28, DOI: 10.1016/j.displa.2025.103195
Leonie Kehl , Adrian Brietzke , Rebecca Pham Xuan , Heiko Hecht
Previous research shows that vibrotactile displays can communicate upcoming lateral and longitudinal vehicle accelerations without significantly distracting passengers from their non-driving-related tasks. This unobtrusive transmission of information presumably allows passengers to adjust their posture in anticipation of upcoming vehicle maneuvers, even when visually distracted. This matters because car sickness, characterized by symptoms such as nausea, dizziness, and malaise, often occurs when passengers cannot anticipate vehicle movements in time. The present study therefore evaluated a practical implementation of anticipatory vibrotactile cues for everyday use in real road traffic. Building on lab and test-track findings showing that anticipatory vibrotactile information mitigates car sickness, we tested a modified car seat designed as a peripheral vibrotactile display that delivers anticipatory signals under real-world driving conditions. In a counterbalanced within-participant design, 39 participants completed two 30-minute rides on public roads while watching a video and rating their current motion sickness level every minute on the Fast Motion Sickness Scale. In the intervention condition, participants received anticipatory vibrotactile cues through actuators hidden in the seat approximately 0.7 s before changes in vehicle dynamics; in the control condition they received no cues. The results not only confirm that anticipatory vibrotactile cues for upcoming right and left turns, as well as acceleration and deceleration events, significantly mitigate car sickness, but also demonstrate that vibrating actuators integrated into the car seat are well suited for transmitting such cues.
Citations: 0
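The defining parameter here is the roughly 0.7 s lead between cue and maneuver. As a toy scheduling sketch (the function and clamping behavior are illustrative assumptions, not the study's actual system), given predicted maneuver timestamps the cue onsets are simply shifted earlier by the lead time:

```python
CUE_LEAD_S = 0.7  # anticipatory lead time reported in the study

def cue_times(maneuver_times):
    """Fire each vibrotactile cue a fixed lead before the predicted change
    in vehicle dynamics; cues for imminent maneuvers are clamped to now (0)."""
    return [max(0.0, t - CUE_LEAD_S) for t in maneuver_times]

cues = cue_times([5.0, 12.3, 0.4])  # third maneuver is closer than the lead time
```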
YOLO-DFW: An improved detection method for printed circuit board surface defects based on YOLOv8
IF 3.4, CAS Zone 2, Engineering & Technology
Displays, Pub Date: 2025-08-28, DOI: 10.1016/j.displa.2025.103201
Yuhang Zhou , Xuemei Xu , Wenyuan Fan , ZhaoHui Jiang
High-precision defect detection on printed circuit boards (PCBs) is crucial to the productivity and safety of electronic products, yet traditional methods struggle to detect tiny targets and difficult samples in complex environments. To address this challenge, we propose YOLO-DFW, an improved detection algorithm based on the YOLOv8 network. DynamicConv (DC) improves the C2f module, enhancing the model's ability to express different features through adaptive weighted convolution. We propose the Feature Focused Pyramid Network (FFPN), which reconstructs the original neck structure to strengthen multi-scale feature fusion through cross-scale fusion, and introduce the Context Enhancement Module (CEM) into FFPN to expand the receptive field. Moreover, a new loss function, WMN-loss, makes the model pay more attention to difficult-to-classify and small bounding boxes via a non-uniform loss allocation strategy. On the PCB surface defect dataset, our method improves precision, recall, mAP50 (mean average precision), and mAP50:95 by 2.2%, 2.4%, 2.6%, and 4.2%, respectively, over the baseline model. Extensive experiments demonstrate the superiority of our method for PCB surface defect detection, and comparison experiments on the DeepPCB and NUE-DET datasets verify the feasibility and generalization of YOLO-DFW.
Citations: 0
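The mAP50 and mAP50:95 metrics above are built on intersection-over-union (IoU) between predicted and ground-truth boxes: mAP50 counts a detection correct at IoU > 0.5, while mAP50:95 averages over thresholds from 0.5 to 0.95. A minimal sketch of the underlying IoU computation (standard definition, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) -- the overlap measure underlying mAP50 / mAP50:95."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

v = iou((0, 0, 2, 2), (1, 1, 3, 3))  # inter = 1, union = 7 -> ≈ 0.143
```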
Multi-scale subtraction and attention-guided network for RGB-D indoor scene parsing
IF 3.4, CAS Zone 2, Engineering & Technology
Displays, Pub Date: 2025-08-25, DOI: 10.1016/j.displa.2025.103188
Wen Xie, Heng Liu
RGB-D scene parsing is a fundamental task in computer vision, but the lower quality of depth images often leads to less accurate feature representations in the depth branch. Additionally, existing multi-level fusion methods in decoders typically use a single unified module to merge RGB and depth features, disregarding the distinct characteristics of hierarchical features; this indiscriminate approach can degrade segmentation accuracy. We therefore propose a Multi-Scale Subtraction and Attention-Guided Network (MSANet). First, a cross-modal fusion module fuses RGB and depth features along the horizontal and vertical directions to capture positional information between the two modalities. A Spatial Fusion Unit then adaptively enhances depth and RGB features spatially. Furthermore, we analyze the feature differences across decoder levels and divide them into spatial and semantic branches. In the semantic branch, a high-level cross-modal fusion module extracts deep semantic information from adjacent high-level features through backpropagation, enabling RGB and depth layer reconstruction and mitigating information disparity and hierarchical differences through subtraction operations. In the spatial branch, a low-level cross-modal fusion module leverages spatial attention to improve regional accuracy and reduce noise. MSANet achieves 52.0% mIoU on the NYU Depth v2 dataset, outperforming the baseline by 5.1%; 49.0% mIoU on the more challenging SUN RGB-D dataset; and 60.0% mIoU on the ScanNetV2 dataset, further validating its effectiveness in complex indoor scenes.
Citations: 0
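The mIoU figures above are the standard scene-parsing metric: per-class intersection-over-union averaged over classes. A minimal sketch on flat label lists (the toy data is illustrative; real evaluations run over full pixel maps):

```python
def miou(preds, labels, num_classes):
    """Mean intersection-over-union over classes, computed from flat
    per-pixel prediction and ground-truth label lists."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(preds, labels) if p == c and g == c)
        union = sum(1 for p, g in zip(preds, labels) if p == c or g == c)
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return sum(ious) / len(ious)

# 4 pixels, 2 classes: class 0 has IoU 1/2, class 1 has IoU 2/3.
m = miou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)  # (1/2 + 2/3) / 2 ≈ 0.583
```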
Bi-directional gate driver on array with controllable TDDM block for in-cell touch display
IF 3.4, CAS Zone 2, Engineering & Technology
Displays, Pub Date: 2025-08-25, DOI: 10.1016/j.displa.2025.103191
Guang-Ting Zheng, Po-Tsun Liu, Si-Yu Huang
This article proposes a novel gate driver circuit in amorphous silicon (a-Si) that doubles as a noise-eliminating block for in-cell touch liquid-crystal displays. Compared with earlier designs, the proposed circuit eliminates the need for an additional block to relieve the long-term DC stress on the driving TFT during touch operation. Moreover, the circuit can freely choose when to execute touch operations, mitigating node degradation within the circuit and enhancing its reliability. It also supports bidirectional scanning, enabling image inversion and increasing panel flexibility. Simulation results show no distortion in the output after a 1000 µs touch period, indicating that the circuit effectively prevents leakage current during touch operation. Measurements indicate that the circuit operates successfully at 85 °C with 2000 µs intervals. Finally, the proposed circuit can be realized for 5.3-inch Full HD (720*RGB*1280) in-cell touch panels and shows potential for large interactive touch panel applications.
Citations: 0
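For scale, the 1000-2000 µs touch periods above can be compared against the per-line scan budget of such a panel. A back-of-envelope sketch, assuming a 60 Hz refresh rate (not stated in the abstract) and ignoring blanking and touch intervals:

```python
def line_time_us(rows, refresh_hz=60.0):
    """Approximate per-row gate scan time in microseconds: the frame
    period divided by the number of gate lines (ignores blanking and
    any time reserved for touch sensing)."""
    return 1e6 / (refresh_hz * rows)

t = line_time_us(1280)  # 1280 gate lines for a 720*RGB*1280 panel -> ≈ 13 µs/line
```

Under these assumptions a single gate line gets roughly 13 µs, so a 1000 µs touch pause spans on the order of 77 line times, which is why the driver must hold its nodes without distortion across the pause.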
Person re-identification based on Multi-feature Fusion to Enhance Pedestrian Features
IF 3.4, CAS Zone 2, Engineering & Technology
Displays, Pub Date: 2025-08-23, DOI: 10.1016/j.displa.2025.103187
Yushan Chen , Guofeng Zou , Zhiwei Huang , Guizhen Chen , Bin Hu
Person re-identification (re-ID) is an important component of intelligent analysis of surveillance video and plays a key role in maintaining public safety. Its central challenge is large intra-class variation among images of the same person combined with small inter-class variation between different persons. To address this, we propose MFEFNet, a person re-identification network based on multi-feature fusion to enhance pedestrian features. Through global, attribute, and local branches, the network leverages the complementary information between different levels of pedestrian features, improving re-ID accuracy. First, it uses the stability of attribute features to reduce intra-class variation and the sensitivity of local features to increase inter-class differences. Second, a self-attention fusion module addresses the small receptive fields caused by residual structures, enhancing global feature extraction. Third, an attribute area weight module addresses the fact that different pedestrian attributes focus on different body regions; by localizing attribute-related regions, it reduces information redundancy. The method achieves 95.63% Rank-1 accuracy and 88.29% mAP on the Market-1501 dataset, 90.13% Rank-1 and 79.85% mAP on DukeMTMC-reID, and 77.21% Rank-1 and 60.34% mAP on Occluded-Market.
Citations: 0
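The Rank-1 figures above come from the cumulative matching characteristic (CMC) curve: the fraction of queries whose true identity is ranked first in the gallery. A minimal sketch (the rank list is made-up toy data; real pipelines derive it from feature distances):

```python
def rank_k_accuracy(rankings, k=1):
    """CMC Rank-k: fraction of queries whose correct gallery identity
    appears within the top-k matches. `rankings` holds, per query, the
    1-based rank position of the true match."""
    return sum(1 for r in rankings if r <= k) / len(rankings)

ranks = [1, 1, 3, 2, 1]           # 5 queries; true match ranked 1st for 3 of them
r1 = rank_k_accuracy(ranks, k=1)  # 3/5 = 0.6
r3 = rank_k_accuracy(ranks, k=3)  # 5/5 = 1.0
```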
DIFReID: Detail Information Fusion for Person Re-Identification
IF 3.4, CAS Zone 2, Engineering & Technology
Displays, Pub Date: 2025-08-23, DOI: 10.1016/j.displa.2025.103189
Xuebing Bai , Jichang Guo , Jin Che
Person re-identification (ReID) aims to match person images across different scenes in video surveillance. Despite significant progress, existing methods often overlook multi-scale information and personal belongings, and fail to fully exploit the relationships between images and attributes. These limitations leave detailed information underused, constraining the completeness and discriminative power of person feature representations. To address these challenges, we propose Detail Information Fusion for Person Re-Identification (DIFReID), a novel framework that enhances feature representation by effectively integrating image and attribute information. Specifically, DIFReID incorporates a multi-scale attention module that combines multi-scale features with attention mechanisms to highlight salient regions and improve the representation of critical details. A refined semantic parsing module integrates semantic regions of personal belongings with human parsing results, capturing belongings often omitted by prior approaches. In addition, a cross-modal graph convolutional network module fuses personal attributes with visual features, establishing deeper relationships between images and attributes to generate robust and discriminative representations. Extensive experiments on two benchmark datasets demonstrate that DIFReID achieves state-of-the-art performance, validating its effectiveness in improving both feature completeness and discriminative capability.
Citations: 0
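The cross-modal graph convolutional module fuses attribute and visual nodes by propagating features over a graph. As a loosely hedged toy sketch of one graph-convolution step (the two-node graph, self-loops, and averaging aggregation are illustrative assumptions, not DIFReID's actual architecture):

```python
def gcn_layer(adj, feats, weight):
    """One graph-convolution step: average each node's features with its
    neighbors' (self-loop included), then apply a linear map -- roughly
    H' = D^-1 (A + I) H W."""
    n = len(adj)
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j] or j == i]  # neighbors + self
        agg = [sum(feats[j][d] for j in nbrs) / len(nbrs)
               for d in range(len(feats[0]))]
        out.append([sum(agg[d] * weight[d][o] for d in range(len(agg)))
                    for o in range(len(weight[0]))])
    return out

adj = [[0, 1], [1, 0]]           # one visual node linked to one attribute node
feats = [[1.0, 0.0], [0.0, 1.0]]
w = [[1.0], [1.0]]               # toy 2 -> 1 linear map
h = gcn_layer(adj, feats, w)     # each node now mixes both modalities
```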