{"title":"Reconstruction Reordered Intra Block Copy for Screen Content Coding","authors":"Zhipin Deng;Kai Zhang;Li Zhang","doi":"10.1109/TIP.2025.3579988","DOIUrl":"10.1109/TIP.2025.3579988","url":null,"abstract":"In the pursuit of achieving further coding gains beyond the versatile video coding (VVC) standard, the enhanced compression model (ECM) has been initiated by the Joint Video Exploration Team (JVET) with the aim of developing next generation video coding techniques. In ECM, novel coding tools are studied to improve the coding efficiency for both camera-captured content and screen content. Intra block copy (IBC) has been included as a fundamental coding tool in both VVC and ECM, yielding significant improvement in compression efficiency for screen content. This paper presents a method of reconstruction reordered IBC (RR-IBC) to further improve the compression efficiency for screen content, by taking advantage of the symmetry property inherent in screen content sequences. The reconstruction block is flipped horizontally or vertically to restore the characteristics of samples in the original block. A flip-aware adjustment is performed to regulate block vector candidates of the RR-IBC block according to the types of symmetry. Similarly, the reference template of the template-based reordering method for the RR-IBC block is adjusted accordingly to accommodate the geometry property. A motion constraint is applied to restrict the block vector of an RR-IBC coded block to a single direction displacement perpendicular to the flip axis. An RR-IBC flip mode index is signalled to specify how to flip the reconstruction block. Experimental results show that the proposed RR-IBC can provide an average Bjontegaard delta rate (BD-rate) saving of 1.61%/1.79%/1.76% and 3.90%/3.63%/3.63% on Y/Cb/Cr components for class F and class TGM sequences, respectively, with a negligible change on the runtime, compared with ECM-5.0 in all intra configurations. RR-IBC has been adopted into ECM.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"4011-4025"},"PeriodicalIF":0.0,"publicationDate":"2025-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144334906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scalable Coding for High-Resolution, High-Compression Ratio Snapshot Compressive Video","authors":"Felipe Guzmán;Nelson Díaz;Bastián Romero;Esteban Vera","doi":"10.1109/TIP.2025.3579208","DOIUrl":"10.1109/TIP.2025.3579208","url":null,"abstract":"High-speed cameras are crucial for capturing fast events beyond human perception, although challenges in terms of storage, bandwidth, and cost hinder their widespread use. As an alternative, snapshot compressive video can overcome these challenges by exploiting the principles of compressed sensing to capture compressive projections of dynamic scenes into a single image, which is then used to recover the underlying video by solving an ill-posed inverse problem. However, scalability in terms of spatial and temporal resolution is limited for both acquisition and reconstruction. In this work, we leverage time-division multiplexing to design a versatile scalable coded aperture approach that allows unseen spatio-temporal scalability for snapshot compressive video, offering on-the-fly, high-compression ratios with minimal computational burden and low memory requirements. The proposed sampling scheme is universal and compatible with any compressive temporal imaging sampling matrices and reconstruction algorithm aimed for low spatio-temporal resolutions. Simulations validated with a series of experimental results confirm that we can compress up to 512 frames of 2K <inline-formula> <tex-math>$times 2$ </tex-math></inline-formula>K resolution into a single snapshot, equivalent to a compression ratio of 0.2%, delivering an overall reconstruction quality exceeding 30 dB in PSNR for conventional reconstruction algorithms, and often surpassing 36 dB when utilizing the latest state-of-the-art deep learning reconstruction algorithms. The results presented in this paper can be reproduced in the following GitHub repository: <uri>https://github.com/FOGuzman/All-scalable-CACTI</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3960-3970"},"PeriodicalIF":0.0,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11040128","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144328803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Haziness to Clarity: A Novel Iterative Memory-Retrospective Emergence Model for Omnidirectional Image Saliency Prediction","authors":"Dandan Zhu;Kaiwei Zhang;Xiongkuo Min;Guangtao Zhai;Xiaokang Yang","doi":"10.1109/TIP.2025.3578264","DOIUrl":"10.1109/TIP.2025.3578264","url":null,"abstract":"To achieve saliency prediction in omnidirectional images (ODIs), the majority of prior works typically adopt the convolutional neural networks (CNNs)-based saliency models to extract semantic features to predict prominent regions in ODIs. Albeit achieving substantially performance gains, these works all employed purely visual computing paradigms and ignore to explore the nature of human visual attention mechanisms. In other words, existing saliency prediction works for ODIs are insufficient to capture the biological characteristics of the visual attention mechanism in the human brain. To establish a more explicit link between saliency prediction performance and brain-like visual attention mechanism, we simulate the mechanism of human retrospective memory in neuropsychology and propose IMRE model, a novel iterative memory-retrospective emergence model can predict and infer the salient features by recalling previously learned information. In IMRE model, we introduce four key modules to simulate the visual attention mechanism for predicting human fixations in the human brain. Firstly, the visual stimulus response module is designed to effectively extract semantic features and capture the intricate relationship between these features, acting as the human visual cortex. Secondly, the retrospective integration module serves to distill valuable information from a fuzzy memory ensemble, resembling the role of the basal ganglia in the neural system. Thirdly, the memory bank module explicitly records and stores subconscious response information and learned knowledge, acting like the hippocampus in neural system. Lastly, the prospective inference module accurately infers saliency maps from the refined useful information, resembling the role of the prefrontal cortex. During prediction, we utilize the introduced memory bank to retrieve and recall previously learned information, which simulates the process of memory emergence from haziness to clarity. Such a process aligns with the retrospective memory mechanism of the human brain. To validate the superiority of the proposed model in ODIs saliency prediction tasks, we conduct extensive experiments on two benchmark datasets. Experiments show impressive performances that IMRE model outperforms other state-of-the-art methods across all benchmark datasets. Importantly, experiments also highlight the IMRE model’s ability to trace back to specific instances during prediction, thereby reducing model inference costs and enhancing interpretability.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3944-3959"},"PeriodicalIF":0.0,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144328769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial Frequency Modulation Network for Efficient Image Dehazing","authors":"Hao Shen;Henghui Ding;Yulun Zhang;Zhong-Qiu Zhao;Xudong Jiang","doi":"10.1109/TIP.2025.3579148","DOIUrl":"10.1109/TIP.2025.3579148","url":null,"abstract":"Currently, two main research lines in efficient context modeling for image dehazing are tailoring effective feature modulation mechanisms and utilizing the Fourier transform more precisely. The former is usually based on self-scale features that ignore complementary cross-scale/level features, and the latter tends to overlook regions with pronounced haze degradation and intricate structures. This paper introduces a novel spatial and frequency modulation perspective to synergistically investigate contextual feature modeling for efficient image dehazing. Specifically, we delicately develop a Spatial Frequency Modulator (SFM) equipped with a Cross-Scale Modulator (CSM) and Frequency Modulator (FM) to implement intra-block feature modulation. The CSM progressively aggregates hierarchical features across different scales, employing them for spatial self-modulation, and the FM subsequently adopts a dual-branch design to focus more on the crucial areas with severe haze and complex structures for reconstruction. Further, we propose a Cross-Level Modulator (CLM) to facilitate inter-block feature mutual modulation, enhancing seamless interaction between features at different depths and layers. Integrating the above-developed modules into the U-Net architecture, we construct a two-stage spatial frequency modulation network (SFMN). Extensive quantitative and qualitative evaluations showcase the superior performance and efficiency of the proposed SFMN over recent state-of-the-art image dehazing methods. The source code can be found in <uri>https://github.com/it-hao/SFMN.</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3982-3996"},"PeriodicalIF":0.0,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144328600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RECISTSurv: Hybrid Multi-Task Transformer for Hepatocellular Carcinoma Response and Survival Evaluation","authors":"Rushi Jiao;Qiuping Liu;Yao Zhang;Bangzheng Pu;Bingsen Xue;Yi Cheng;Kailan Yang;Xisheng Liu;Jinrong Qu;Cheng Jin;Ya Zhang;Yanfeng Wang;Yu-Dong Zhang","doi":"10.1109/TIP.2025.3579200","DOIUrl":"10.1109/TIP.2025.3579200","url":null,"abstract":"Transarterial Chemoembolization (TACE) is a widely applied alternative treatment for patients with hepatocellular carcinoma who are not eligible for liver resection or transplantation. However, the clinical outcomes after TACE are highly heterogeneous. There remains an urgent need for effective and efficient strategies to accurately assess tumor response and predict long-term outcomes using longitudinal and multi-center datasets. To address this challenge, we here introduce RECIST<sup>Surv</sup>, a novel response-driven Transformer model that integrates multi-task learning with a response-driven co-attention mechanism to simultaneously perform liver and tumor segmentation, predict tumor response to TACE, and estimate overall survival based on longitudinal Computed Tomography (CT) imaging. The proposed Response-driven Co-attention layer models the interactions between pre-TACE and post-TACE features guided by the treatment response embedding. This design enables the model to capture complex relationships between imaging features, treatment response, and survival outcomes, thereby enhancing both prediction accuracy and interpretability. In a multi-center validation study, RECIST<sup>Surv</sup>-predicted prognosis has demonstrated superior precision than state-of-the-art methods with C-indexes ranging from 0.595 to 0.780. Furthermore, when integrated with multi-modal data, RECIST<sup>Surv</sup> has emerged as an independent prognostic factor in all three validation cohorts, with hazard ratio (HR) ranging from 1.693 to 20.7 (<inline-formula> <tex-math>$text {P = 0.001-0.042}$ </tex-math></inline-formula>). Our results highlight the potential of RECIST<sup>Surv</sup> as a powerful tool for personalized treatment planning and outcome prediction in hepatocellular carcinoma patients undergoing TACE. The experimental code is made publicly available at <uri>https://github.com/rushier/RECISTSurv</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3873-3888"},"PeriodicalIF":0.0,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144319889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast 3D Room Layout Estimation Based on Compact High-Level Representation","authors":"Weidong Zhang;Yulei Qiao;Ying Liu;Ran Song;Wei Zhang","doi":"10.1109/TIP.2025.3578785","DOIUrl":"10.1109/TIP.2025.3578785","url":null,"abstract":"3D room layout estimation aims to reconstruct the holistic 3D structure from an indoor RGB image. For most of the deep learning-based methods, layout inference is guided by a kind of learned 2D mid-level representation such as pixel-wise surface labels. However, learning such high-resolution 2D representation might suffer from information redundancy and memory consumption, and will increase the runtime of estimation and deployment cost for practical applications. In this paper, we attempt to learn a compact high-level representation with only 29 real numbers for estimating the 3D layout using general regression networks. The learned compact high-level representation contains three components: instance-wise plane parameters, camera intrinsic parameters, and plane location indicators. With the learned representation, the inverse depth map of each plane can be calculated to reconstruct the 3D layout. We further design a set of order-agnostic loss functions to restrict the produced inverse depth maps, with which the model can be trained with either weak 2D layout labels or full 3D layout supervision. Moreover, by jointly learning the plane parameters and locations, the model is benefited from 3D reasoning. Experimental results show that our method is much faster than the existing layout estimation methods and obtains competitive performance on benchmark datasets, showing its potential for real-time applications.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3930-3943"},"PeriodicalIF":0.0,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144319891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structural Similarity-Inspired Unfolding for Lightweight Image Super-Resolution","authors":"Zhangkai Ni;Yang Zhang;Wenhan Yang;Hanli Wang;Shiqi Wang;Sam Kwong","doi":"10.1109/TIP.2025.3578753","DOIUrl":"10.1109/TIP.2025.3578753","url":null,"abstract":"Major efforts in data-driven image super-resolution (SR) primarily focus on expanding the receptive field of the model to better capture contextual information. However, these methods are typically implemented by stacking deeper networks or leveraging transformer-based attention mechanisms, which consequently increases model complexity. In contrast, model-driven methods based on the unfolding paradigm show promise in improving performance while effectively maintaining model compactness through sophisticated module design. Based on these insights, we propose a Structural Similarity-Inspired Unfolding (SSIU) method for efficient image SR. This method is designed through unfolding an SR optimization function constrained by structural similarity, aiming to combine the strengths of both data-driven and model-driven approaches. Our model operates progressively following the unfolding paradigm. Each iteration consists of multiple Mixed-Scale Gating Modules (MSGM) and an Efficient Sparse Attention Module (ESAM). The former implements comprehensive constraints on features, including a structural similarity constraint, while the latter aims to achieve sparse activation. In addition, we design a Mixture-of-Experts-based Feature Selector (MoE-FS) that fully utilizes multi-level feature information by combining features from different steps. Extensive experiments validate the efficacy and efficiency of our unfolding-inspired network. Our model outperforms current state-of-the-art models, boasting lower parameter counts and reduced memory consumption. Our code will be available at: <uri>https://github.com/eezkni/SSIU</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3861-3872"},"PeriodicalIF":0.0,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144319890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LOD-PCAC: Level-of-Detail-Based Deep Lossless Point Cloud Attribute Compression","authors":"Wenbo Zhao;Wei Gao;Dingquan Li;Jing Wang;Guoqing Liu","doi":"10.1109/TIP.2025.3578760","DOIUrl":"10.1109/TIP.2025.3578760","url":null,"abstract":"Point cloud attribute compression is a challenging issue in efficiently compressing large volumes of attributes. Despite notable advancements in lossy point cloud compression using deep learning, progress in lossless compression remains limited. Some methods have employed octree- or voxel-based partitioning techniques derived from geometric compression, achieving success on dense point clouds. However, these voxel-based approaches struggle with sparse or unevenly distributed point clouds, leading to performance degradation. In this work, we introduce a novel framework for learning-based lossless point cloud attribute compression, named <italic>LOD-PCAC</i>, which leverages a Level-of-Detail (LOD) structure to ensure density-robust compression. Specifically, the input point cloud is divided into multiple detail levels, and vertices from these levels are selected to construct a <italic>Reference Set</i> as context, which effectively captures multi-level information. Then we propose the <italic>Bit-level Residual Coder</i> for efficient attribute compression. Instead of directly compressing attributes, our method first predicts attribute values and organizes the residual bits into a <italic>Bit Matrix</i> as another context, simplifying predictions and fully exploiting channel correlations. Finally, a neural network with specialized encoders processes the context to estimate the probability of each residual bit. Experimental results demonstrate that the proposed method outperforms both traditional and learning-based approaches across various point clouds, exhibiting strong generalization across datasets and robustness to varying densities.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3918-3929"},"PeriodicalIF":0.0,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144311388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spherical Patch Generative Adversarial Net for Unconditional Panoramic Image Generation","authors":"Mai Xu;Xiancheng Sun;Shengxi Li;Lai Jiang;Jingyuan Xia;Xin Deng","doi":"10.1109/TIP.2025.3578257","DOIUrl":"10.1109/TIP.2025.3578257","url":null,"abstract":"Recent advancements in virtual reality (VR) and augmented reality (AR) have popularised the emerging panoramic content for the immersive visual experience. The difficulty in acquisition and display of 360° format further highlights the necessity of unconditional panoramic image generation. Existing methods essentially generate planar images mapped from panoramic images, and fail to address the deformation and closed-loop characteristics when inverted back to the panoramic images. Thus leading to the generation of pseudo-panoramic content. This paper aims to directly generate spherical content, in a patch-by-patch style; besides computation friendly, this promises the anywhere continuity on the panoramic image and proper accommodation of panoramic deformation. More specifically, we first propose a novel spherical patch convolution (SPConv) that operates on the local spherical patch, which naturally addresses the deformation of panoramic content. We then propose our spherical patch generative adversarial net (SP-GAN) that consists of spherical local embedding (SLE) and spherical content synthesiser (SCS) modules, which seamlessly incorporate our SPConv so as to generate continuous panoramic patches. To the best of our knowledge, the proposed SP-GAN is the first successful attempt to accommodate the spherical distortion for closed-loop panoramic image generation in a patch-by-patch manner. The experimental results, with human-rated evaluations, have verified the consistently superior performances for unconditional panoramic image generation, from the perspectives of generation quality, computational memory, and generalisation to various resolutions. Codes are publicly available at <uri>https://github.com/chronos123/SP-GAN</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3833-3848"},"PeriodicalIF":0.0,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144304538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reference-Based Iterative Interaction With P2-Matching for Stereo Image Super-Resolution","authors":"Runmin Cong;Rongxin Liao;Feng Li;Ronghui Sheng;Huihui Bai;Renjie Wan;Sam Kwong;Wei Zhang","doi":"10.1109/TIP.2025.3577538","DOIUrl":"10.1109/TIP.2025.3577538","url":null,"abstract":"Stereo Image Super-Resolution (SSR) holds great promise in improving the quality of stereo images by exploiting the complementary information between left and right views. Most SSR methods primarily focus on the inter-view correspondences in low-resolution (LR) space. The potential of referencing a high-quality SR image of one view benefits the SR for the other is often overlooked, while those with abundant textures contribute to accurate correspondences. Therefore, we propose Reference-based Iterative Interaction (RIISSR), which utilizes reference-based iterative pixel-wise and patch-wise matching, dubbed <inline-formula> <tex-math>$P^{2}$ </tex-math></inline-formula>-Matching, to establish cross-view and cross-resolution correspondences for SSR. Specifically, we first design the information perception block (IPB) cascaded in parallel to extract hierarchical contextualized features for different views. Pixel-wise matching is embedded between two parallel IPBs to exploit cross-view interaction in LR space. Iterative patch-wise matching is then executed by utilizing the SR stereo pair as another mutual reference, capitalizing on the cross-scale patch recurrence property to learn high-resolution (HR) correspondences for SSR performance. Moreover, we introduce the supervised side-out modulator (SSOM) to re-weight local intra-view features and produce intermediate SR images, which seamlessly bridge two matching mechanisms. Experimental results demonstrate the superiority of RIISSR against existing state-of-the-art methods.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3779-3789"},"PeriodicalIF":0.0,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144278280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}