Speckle Denoising of Dynamic Contrast-Enhanced Ultrasound Using Low-Rank Tensor Decomposition
Metin Calis; Massimo Mischi; Alle-Jan van der Veen; Borbala Hunyadi
IEEE Transactions on Medical Imaging, vol. 44, no. 7, pp. 2854-2867, 2025. DOI: 10.1109/TMI.2025.3551660

Abstract: Dynamic contrast-enhanced ultrasound (DCEUS) is an imaging modality for assessing microvascular perfusion and dispersion kinetics. However, the presence of speckle noise may hamper the quantitative analysis of the contrast kinetics. Common speckle denoising techniques based on low-rank approximations typically model the speckle noise as white Gaussian noise (WGN) after the log transformation and apply matrix-based algorithms. We address the high dimensionality of the 4D DCEUS data and apply low-rank tensor decomposition techniques to denoise speckle. Although many tensor decompositions can describe low rankness, we limit our research to multilinear rank and tubal rank. We introduce a gradient-based extension of the multilinear singular value decomposition to model low multilinear rankness, assuming that the log-transformed speckle noise follows a Fisher-Tippett distribution. In addition, we apply an algorithm based on the tensor singular value decomposition to model low tubal rankness, assuming that the log-transformed speckle noise is WGN with sparse outliers. The effectiveness of the methods is evaluated through simulations and phantom studies, and the tensor-based algorithms' real-world performance is assessed on DCEUS prostate recordings. Comparative analyses with the existing DCEUS denoising literature are conducted, and the algorithms' capabilities are showcased in the context of prostate cancer classification. Adding the Fisher-Tippett noise model did not improve on the truncated MLSVD (tr-MLSVD) in the in vivo case. However, most cancer markers are better distinguishable with a tensor denoising technique than with state-of-the-art approaches.
Optical Flow-Enhanced Mamba U-Net for Cardiac Phase Detection in Ultrasound Videos
Yuhuan Lu; Guanghua Tan; Bin Pu; Pak-Hei Yeung; Hang Wang; Shengli Li; Jagath C. Rajapakse; Kenli Li
IEEE Transactions on Medical Imaging, vol. 44, no. 7, pp. 2831-2841, 2025. DOI: 10.1109/TMI.2025.3550731

Abstract: The detection of cardiac phase in ultrasound videos, i.e., identifying end-systolic (ES) and end-diastolic (ED) frames, is a critical step in assessing cardiac function, monitoring structural changes, and diagnosing congenital heart disease. Current popular methods use recurrent neural networks to track dependencies over long sequences for cardiac phase detection, but often overlook the short-term motion of cardiac valves that sonographers rely on. In this paper, we propose a novel optical flow-enhanced Mamba U-Net framework designed to utilize both short-term motion and long-term dependencies to detect the cardiac phase in ultrasound videos. We utilize optical flow to capture the short-term motion of cardiac muscles and valves between adjacent frames, enhancing the input video. The Mamba layer is employed to track long-term dependencies across cardiac cycles. We then develop regression branches using the U-Net architecture, which integrates short-term and long-term information while extracting multi-scale features. Using this method, we can generate regression scores for each frame and identify keyframes (i.e., ES and ED frames). Additionally, we design a keyframe-weighted loss function to guide the network to focus more on keyframes rather than intermediate frames. Our method demonstrates superior performance compared to advanced baseline methods, achieving frame mismatches of 1.465 frames for ES and 0.842 frames for ED on the Fetal Echocardiogram dataset, where heart rates are higher and phase changes occur rapidly, and 2.444 and 2.072 frames, respectively, on the publicly available adult EchoNet-Dynamic dataset. Its accuracy and robustness on both fetal and adult datasets highlight its potential for clinical application.
UW-DNeRF: Deformable Soft Tissue Reconstruction With Uncertainty-Guided Depth Supervision and Local Information Integration
Jiwei Shan; Zixin Zhang; Hao Li; Cheng-Tai Hsieh; Yirui Li; Wenhua Wu; Hesheng Wang
IEEE Transactions on Medical Imaging, vol. 44, no. 7, pp. 2808-2818, 2025. DOI: 10.1109/TMI.2025.3550269

Abstract: Reconstructing deformable soft tissues from endoscopic videos is a critical yet challenging task. Leveraging depth priors, deformable implicit neural representations have seen significant advancements in this field. However, depth priors from pre-trained depth estimation models are often coarse, and inaccurate depth supervision can severely impair the performance of these neural networks. Moreover, existing methods overlook local similarities in input sequences, which restricts their effectiveness in capturing local details and tissue deformations. In this paper, we introduce UW-DNeRF, a novel approach utilizing neural radiance fields for high-quality reconstruction of deformable tissues. We propose an uncertainty-guided depth supervision strategy to mitigate the impact of inaccurate depth information. This strategy relaxes hard depth constraints and unlocks the potential of implicit neural representations. In addition, we design a local window-based information sharing scheme. This scheme employs local window and keyframe deformation networks to construct deformations with local awareness and enhances the model's ability to capture fine details. We demonstrate the superiority of our method over state-of-the-art approaches on synthetic and in vivo endoscopic datasets. Code is available at: https://github.com/IRMVLab/UW-DNeRF.
CATD: Unified Representation Learning for EEG-to-fMRI Cross-Modal Generation
Weiheng Yao; Zhihan Lyu; Mufti Mahmud; Ning Zhong; Baiying Lei; Shuqiang Wang
IEEE Transactions on Medical Imaging, vol. 44, no. 7, pp. 2757-2767, 2025. DOI: 10.1109/TMI.2025.3550206

Abstract: Multi-modal neuroimaging analysis is crucial for a comprehensive understanding of brain function and pathology, as it allows for the integration of different imaging techniques, thus overcoming the limitations of individual modalities. However, the high costs and limited availability of certain modalities pose significant challenges. To address these issues, this paper proposes the Condition-Aligned Temporal Diffusion (CATD) framework for end-to-end cross-modal synthesis of neuroimaging, enabling the generation of functional magnetic resonance imaging (fMRI)-detected blood-oxygen-level-dependent (BOLD) signals from more accessible electroencephalography (EEG) signals. A Conditionally Aligned Block (CAB) aligns heterogeneous neuroimages into a latent space, achieving a unified representation that provides the foundation for cross-modal transformation in neuroimaging. A Dynamic Time-Frequency Segmentation (DTFS) module further exploits the EEG signals to improve the temporal resolution of the BOLD signals, augmenting the capture of the brain's dynamic details. Experimental validation demonstrates that the framework improves the accuracy of brain activity state prediction by 9.13% (reaching 69.8%), enhances the diagnostic accuracy of brain disorders by 4.10% (reaching 99.55%), effectively identifies abnormal brain regions, and enhances the temporal resolution of BOLD signals. The proposed framework establishes a new paradigm for cross-modal synthesis of neuroimaging by unifying heterogeneous neuroimaging data in a latent representation space, showing promise in medical applications such as improving Parkinson's disease prediction and identifying abnormal brain regions.
WeakMedSAM: Weakly-Supervised Medical Image Segmentation via SAM With Sub-Class Exploration and Prompt Affinity Mining
Haoran Wang; Lian Huai; Wenbin Li; Lei Qi; Xingqun Jiang; Yinghuan Shi
IEEE Transactions on Medical Imaging, vol. 44, no. 7, pp. 2795-2807, 2025. DOI: 10.1109/TMI.2025.3549433

Abstract: We have witnessed remarkable progress in foundation models for vision tasks. Several recent works have utilized the Segment Anything Model (SAM) to boost segmentation performance in medical images, most of which focus on training an adapter to fine-tune SAM on large amounts of pixel-wise annotated medical images in a fully supervised manner. In this paper, to reduce the labeling cost, we investigate a novel weakly-supervised SAM-based segmentation model, namely WeakMedSAM. Specifically, WeakMedSAM contains two modules: 1) to mitigate severe co-occurrence in medical images, a sub-class exploration module is introduced to learn accurate feature representations; 2) to improve the quality of the class activation maps, a prompt affinity mining module utilizes the prompt capability of SAM to obtain an affinity map for random-walk refinement. Our method can be applied to any SAM-like backbone, and we conduct experiments with SAMUS and EfficientSAM. Experimental results on three widely used benchmark datasets, i.e., BraTS 2019, AbdomenCT-1K, and the MSD Cardiac dataset, show the promising performance of WeakMedSAM. Our code is available at https://github.com/wanghr64/WeakMedSAM.
{"title":"VesselDiffusion: 3D Vascular Structure Generation Based on Diffusion Model","authors":"Zhanqiang Guo;Zimeng Tan;Jianjiang Feng;Jie Zhou","doi":"10.1109/TMI.2025.3568602","DOIUrl":"10.1109/TMI.2025.3568602","url":null,"abstract":"3D vascular structure models are pivotal in disease diagnosis, surgical planning, and medical education. The intricate nature of the vascular system presents significant challenges in generating accurate vascular structures. Constrained by the complex connectivity of the overall vascular structure, existing methods primarily focus on generating local or individual vessels. In this paper, we introduce a novel two-stage framework termed VesselDiffusion for the generation of detailed vascular structures, which is more valuable for medical analysis. Given that training data for specific vascular structure is often limited, direct generation of 3D data often results in inadequate detail and insufficient diversity. To this end, we initially train a 2D vascular generation model utilizing extensively available generic 2D vascular datasets. Taking the generated 2D images as input, a conditional diffusion model, integrating a dual-stream feature extraction (DSFE) module, is proposed to extrapolate 3D vascular systems. The DSFE module, comprising a Vision Transformer and a Graph Convolutional Network, synergistically captures visual features of global connection rationality and structural features of local vascular details, ensuring the authenticity and diversity of the generated 3D data. To the best of our knowledge, VesselDiffusion is the first model designed for generating comprehensive and realistic vascular networks with diffusion process. Comparative analyses with other generation methodologies demonstrate that the proposed framework achieves superior accuracy and diversity. Our code is available at: <monospace><uri>https://github.com/gzq17/VesselDiffusion</uri></monospace>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 9","pages":"3845-3857"},"PeriodicalIF":0.0,"publicationDate":"2025-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143930937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anatomy-Aware Deep Unrolling for Task-Oriented Acceleration of Multi-Contrast MRI
Yuzhu He; Chunfeng Lian; Ruyi Xiao; Fangmao Ju; Chao Zou; Zongben Xu; Jianhua Ma
IEEE Transactions on Medical Imaging, vol. 44, no. 9, pp. 3832-3844, 2025. DOI: 10.1109/TMI.2025.3568157

Abstract: Multi-contrast magnetic resonance imaging (MC-MRI) plays a crucial role in clinical practice. However, its performance is hindered by long scanning times and the isolation between image acquisition and downstream clinical diagnosis and treatment. Despite active research on accelerated MC-MRI, few existing studies prioritize personalized imaging tailored to individual patient characteristics and clinical needs; current approaches typically aim to enhance overall image quality, disregarding the specific pathologies or anatomical regions of particular interest to clinicians. To tackle this challenge, we propose an anatomy-aware unrolling-based deep network, dubbed A²MC-MRI, offering promising interpretability and learning capacity for fast MC-MRI catering to downstream clinical needs. The network is unfolded from the iterative algorithm designed for a task-oriented MC-MRI reconstruction model. Specifically, to enhance concurrent MC-MRI of specific targets of interest (TOIs), the model integrates a learnable group sparsity with an anatomy-aware denoising prior, within which a segmentation network provides critical location information for TOI-enhanced denoising. Finally, the unrolled network is learned jointly with the k-space sampling patterns for task-oriented MC-MRI reconstruction. Comprehensive evaluations on two public benchmarks as well as an in-house dataset demonstrate that A²MC-MRI achieves state-of-the-art performance in MC-MRI reconstruction under high acceleration rates, with notable enhancements in TOI imaging quality. The code will be available at https://github.com/ladderlab-xjtu/A2MC-MRI.
{"title":"MSDUNet: A Model Based on Feature Multi-Scale and Dual-Input Dynamic Enhancement for Skin Lesion Segmentation","authors":"Xiaosen Li;Linli Li;Xinlong Xing;Huixian Liao;Wenji Wang;Qiutong Dong;Xiao Qin;Chang’an Yuan","doi":"10.1109/TMI.2025.3549011","DOIUrl":"10.1109/TMI.2025.3549011","url":null,"abstract":"Melanoma is a malignant tumor originating from the lesions of skin cells. Medical image segmentation tasks for skin lesion play a crucial role in quantitative analysis. Achieving precise and efficient segmentation remains a significant challenge for medical practitioners. Hence, a skin lesion segmentation model named MSDUNet, which incorporates multi-scale deformable block (MSD Block) and dual-input dynamic enhancement module(D2M), is proposed. Firstly, the model employs a hybrid architecture encoder that better integrates global and local features. Secondly, to better utilize macroscopic and microscopic multiscale information, improvements are made to skip connection and decoder block, introducing D2M and MSD Block. The D2M leverages large kernel dilated convolution to draw out attention bias matrix on the decoder features, supplementing and enhancing the semantic features of the decoder’s lower layers transmitted through skip connection features, thereby compensating semantic gaps. The MSD Block uses channel-wise split and deformable convolutions with varying receptive fields to better extract and integrate multi-scale information while controlling the model’s size, enabling the decoder to focus more on task-relevant regions and edge details. MSDUNet attains outstanding performance with Dice scores of 93.08% and 91.68% on the ISIC-2016 and ISIC-2018 datasets, respectively. Furthermore, experiments on the HAM10000 dataset demonstrate its superior performance with a Dice score of 95.40%. External validation experiments based on the ISIC-2016, ISIC-2018, and HAM10000 experimental weights on the PH2 dataset yield Dice scores of 92.67%, 92.31%, and 93.46%, respectively, showcasing the exceptional generalization capability of MSDUNet. Our code implementation is publicly available at the Github.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"2819-2830"},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143575259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decentralized Personalization for Federated Medical Image Segmentation via Gossip Contrastive Mutual Learning","authors":"Jingyun Chen;Yading Yuan","doi":"10.1109/TMI.2025.3549292","DOIUrl":"10.1109/TMI.2025.3549292","url":null,"abstract":"Federated Learning (FL) presents a promising avenue for collaborative model training among medical centers, facilitating knowledge exchange without compromising data privacy. However, vanilla FL is prone to server failures and rarely achieves optimal performance on all participating sites due to heterogeneous data distributions among them. To overcome these challenges, we propose Gossip Contrastive Mutual Learning (GCML), a unified framework to optimize personalized models in a decentralized environment, where Gossip Protocol is employed for flexible and robust peer-to-peer communication. To make efficient and reliable knowledge exchange in each communication without the global knowledge across all the sites, we introduce deep contrast mutual learning (DCML), a simple yet effective scheme to encourage knowledge transfer between the incoming and local models through collaborative training on local data. By integrating DCML with other efforts to optimize site-specific models by leveraging useful information from peers, we evaluated the performance and efficiency of the proposed method on three publicly available datasets with different segmentation tasks. Our extensive experimental results show that the proposed GCML framework outperformed both centralized and decentralized FL methods with significantly reduced communication overhead, indicating its potential for real-world deployment. Upon the acceptance of manuscript, the code will be available at: <uri>https://github.com/</uri> CUMC-Yuan-Lab/GCML","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"2768-2783"},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143575258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Learning Approach to 3×4 Mueller Polarimetry for Complete Reconstruction of Diagnostic Polarimetric Images of Biological Tissues","authors":"Sooyong Chae;Tongyu Huang;Omar Rodríguez-Núñez;Théotim Lucas;Jean-Charles Vanel;Jérémy Vizet;Angelo Pierangelo;Gennadii Piavchenko;Tsanislava Genova;Ajmal Ajmal;Jessica C. Ramella-Roman;Alexander Doronin;Hui Ma;Tatiana Novikova","doi":"10.1109/TMI.2025.3567570","DOIUrl":"10.1109/TMI.2025.3567570","url":null,"abstract":"The translation of imaging Mueller polarimetry to clinical practice is often hindered by large footprint and relatively slow acquisition speed of the existing instruments. Using polarization-sensitive camera as a detector may reduce instrument dimensions and allow data streaming at video rate. However, only the first three rows of a complete <inline-formula> <tex-math>${4}times {4}$ </tex-math></inline-formula> Mueller matrix can be measured. To overcome this hurdle we developed a machine learning approach using sequential neural network algorithm for the reconstruction of missing elements of a Mueller matrix from the measured elements of the first three rows. The algorithm was trained and tested on the dataset of polarimetric images of various excised human tissues (uterine cervix, colon, skin, brain) acquired with two different imaging Mueller polarimeters operating in either reflection (wide-field imaging system) or transmission (microscope) configurations at different wavelengths of 550 nm and 385 nm, respectively. Reconstruction performance was evaluated using various error metrics, all of which confirmed low error values. The reconstruction of full images of the fourth row of Mueller matrix with GPU parallelization and increasing batch size took less than 50 milliseconds. It suggests that a machine learning approach with parallel processing of all image pixels combined with the partial Mueller polarimeter operating at video rate can effectively substitute for the complete Mueller polarimeter and produce accurate maps of depolarization, linear retardance and orientation of the optical axis of biological tissues, which can be used for medical diagnosis in clinical settings.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 9","pages":"3820-3831"},"PeriodicalIF":0.0,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10988890","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143915077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}