{"title":"ASEAF: Attention-SincNet driven EEG-audio fused target speaker extraction network.","authors":"Yuhang Yang, Yuan Liao, Qiushi Han, Jiaqin Peng, Liya Huang","doi":"10.1088/2057-1976/ae6aa0","DOIUrl":"https://doi.org/10.1088/2057-1976/ae6aa0","url":null,"abstract":"<p><p>This study addresses the challenge of selective auditory attention in noisy environments by proposing an EEG-based target speaker extraction model, ASEAF, designed to mimic neural decoding through tailored spatio-temporal feature extraction and cross-modal fusion. The model achieves precise extraction of the target speaker's speech by simultaneously processing EEG and audio signals. ASEAF comprises four modules: an EEG encoder using CNN and self-attention for spatio-temporal features, an audio encoder with SincNet for frequency-aware processing, a dual-path LSTM speaker extractor for fused feature masking, and a CNN decoder for waveform reconstruction. This integration advances neural-signal-based speech reconstruction by providing insights into cross-modal interactions. Experiments on the Cocktail Party, KUL, and DTU datasets demonstrate that ASEAF outperforms state-of-the-art models across multiple metrics, with an average improvement of 11.5% in scale-invariant signal-to-distortion ratio improvement (SI-SDRi). This work offers a more effective hearing aid solution for individuals with hearing impairments and advances the field of brain-computer interfaces.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147855714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vitiligo state assessment based on progressive transfer learning and multimodal domain adaptation.","authors":"Chuanhui Wu, Shuying Jiang, Kaiqiao He, Zhiming Li, Shuli Li, Kaiyuan Wang, Junpeng Zhang, Junran Zhang","doi":"10.1088/2057-1976/ae6457","DOIUrl":"10.1088/2057-1976/ae6457","url":null,"abstract":"<p><p>Vitiligo is a common skin depigmentation disorder; assessing its state is crucial for treatment outcomes. Collecting multimodal data for vitiligo assessment is complex and costly in clinical practice, and the limited data size restricts the performance of deep learning models. Transfer learning can alleviate the shortage of training data in medical image recognition; however, its application in vitiligo state assessment is constrained by feature differences between natural and medical images and insufficient generalization to different modalities. To address these challenges, this paper introduces a vitiligo state assessment method based on progressive transfer learning and multimodal domain adaptation. The scheme uses a large set of unlabeled medical images as a bridge to reduce the discrepancies between natural and medical images through multi-step fine-tuning. An adaptive parameter unfreezing strategy is then applied to the accurately labeled target data to improve the adaptability and accuracy of the model. In addition, an integrated multimodal domain adaptation approach based on CycleGAN and hue-saturation-value (HSV) color space transformation is proposed to reduce the impact of modality differences on transfer learning. Experimental results demonstrate that, compared to the conventional transfer learning method, the proposed method improves accuracy by 2.4% and 3.6% on the clinical and Wood's lamp modality datasets, respectively. This accurate vitiligo state assessment method can be applied to a wide range of multimodal dermatological diseases where labeled data is limited.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147760955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Oral squamous cell carcinoma diagnosis with time and frequency domain features from optical coherence tomography A-scan signals.","authors":"Prashanth Panta, Sarfaraj Mirza, Renu John","doi":"10.1088/2057-1976/ae6aa1","DOIUrl":"https://doi.org/10.1088/2057-1976/ae6aa1","url":null,"abstract":"<p><p>Optical coherence tomography (OCT) is extremely useful in the screening and detection of oral cancers. However, challenges such as subjectivity, operator dependence in interpretation, and a lack of quantitative outcomes have delayed its adoption in clinical decision-making. The objective of this research is to quantify A-scan features embedded in the OCT signal through advanced signal processing techniques and machine learning algorithms. Ex vivo imaging of oral tissues (normal mucosa, carcinoma in situ, well-differentiated, and poorly differentiated oral squamous cell carcinoma) was conducted using a spectral domain optical coherence tomography system. Our A-scan dataset consisted of representative 1D signals obtained from different regions of the tissue bed. A set of 12 time-domain and 8 frequency-domain features was computed for each A-scan, and ten machine learning models were evaluated for binary and multi-label classification. LightGBM achieved the highest performance in both binary (accuracy: 0.8847, F1: 0.8878, AUC: 0.9539) and multi-label classification (accuracy: 0.8248, F1: 0.8201, AUC: 0.964) and was selected as the final model based on its superior and consistent performance across both classification paradigms. Our proof-of-concept feasibility study demonstrates good accuracy for differentiating oral mucosal tissues, highlighting the biomarker signature of A-scans.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147855734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Turning sensors into predictors: the power of slope to anticipate hyper-and hypoglycemia.","authors":"Claire Chou, Joan B Soriano, Sara Lumbreras","doi":"10.1088/2057-1976/ae6348","DOIUrl":"10.1088/2057-1976/ae6348","url":null,"abstract":"<p><p>Continuous glucose monitoring (CGM) is often interpreted using static thresholds, yet glycemic risk is inherently dynamic. Here, we test whether moving from purely level-based evaluation to a simple calculation enriched with the most recent glucose slope (rate of change) produces meaningful and interpretable gains in predicting near-future glycemic trajectories. We analyzed CGM recordings from 16 adults in the BIG IDEAs Lab Glycemic Variability and Wearable Device dataset to (i) forecast future glucose values and (ii) classify impending hyperglycemia (⩾180 mg dl<sup>-1</sup>) and hypoglycemia (⩽70 mg dl<sup>-1</sup>). Using multiple historical window lengths and prediction horizons, we trained regression models that emphasize compact, physiologically grounded predictors, particularly current glucose and temporal-slope features. For short prediction horizons (<60 min), performance was extremely high (accuracy >99.5%, recall >97%), with predictions driven primarily by current glucose level and immediate slope. As horizons increased, performance declined gradually but remained strong, with models increasingly drawing on earlier glucose values and slope patterns consistent with diurnal structure. Across all scenarios, slope vectors consistently ranked among the most informative predictors. Overall, these results show that glycemic dynamics and risk can be predicted accurately using a small, interpretable feature set that explicitly incorporates biomarker velocity. This empirically supports the clinical relevance of glucose rate-of-change and motivates the integration of slope-based analytics into wearable decision-support for real-time monitoring.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147760929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
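The abstract above describes forecasting near-future glucose from the current level plus the most recent slope, then flagging excursions against the stated static thresholds. A minimal sketch of that idea (not the authors' code; function names, the 5-minute sampling interval, and the toy trace are illustrative assumptions):

```python
# Hedged sketch: linear extrapolation from level + recent slope, then
# threshold classification at the 180/70 mg/dL limits quoted in the abstract.

def recent_slope(samples_mg_dl, interval_min=5.0):
    """Rate of change (mg/dL per minute) from the last two CGM samples."""
    return (samples_mg_dl[-1] - samples_mg_dl[-2]) / interval_min

def forecast(samples_mg_dl, horizon_min, interval_min=5.0):
    """Predicted glucose: current level + slope * prediction horizon."""
    return samples_mg_dl[-1] + recent_slope(samples_mg_dl, interval_min) * horizon_min

def classify(glucose_mg_dl, hyper=180.0, hypo=70.0):
    if glucose_mg_dl >= hyper:
        return "hyperglycemia"
    if glucose_mg_dl <= hypo:
        return "hypoglycemia"
    return "in range"

cgm = [150.0, 158.0, 166.0, 174.0]    # toy rising trace, 5-min sampling
pred = forecast(cgm, horizon_min=15)  # 174 + 1.6 * 15 = 198.0 mg/dL
print(pred, classify(pred))           # 198.0 hyperglycemia
```

The published models add historical windows and learned weights; the point here is only that level and slope alone already determine short-horizon predictions.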
{"title":"Cluster dose prediction in carbon ion therapy: using transfer learning from a pretrained dose prediction U-Net-a proof of concept.","authors":"Miriam Schwarze, Hui Khee Looe, Björn Poppe, Leo Thomas, Hans Rabus","doi":"10.1088/2057-1976/ae63d5","DOIUrl":"10.1088/2057-1976/ae63d5","url":null,"abstract":"<p><p>The cluster dose concept offers an alternative to the radiobiological effectiveness-based model for describing radiation-induced biological effects. This study examines the application of a neural network to predict cluster dose distributions, with the goal of replacing the computationally intensive simulations currently required. Cluster dose distributions are predicted using a U-Net that was initially pretrained on conventional dose distributions. Using transfer learning techniques, the decoder path is adapted for cluster dose estimation. Both the training and pretraining datasets include head and neck regions from multiple patients and carbon ion beams of varying energies and positions. Monte Carlo simulations were used to generate the ground truth cluster dose distributions. The U-Net enables cluster dose estimation for a single pencil beam within milliseconds using a graphics processing unit. The predicted cluster dose distributions deviate from the ground truth by less than 0.35%. This proof-of-principle study demonstrates the feasibility of accurately estimating cluster doses within clinically acceptable computation times using machine learning. By leveraging a pretrained neural network and applying transfer learning techniques, the approach significantly reduces the need for large-scale, computationally expensive training data.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147760802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A strategy of CT exam protocol to standardize groups of scanners using automated noise assessment and noise prediction for CT radiotherapy simulation.","authors":"Hsiang-Chi Kuo, Seng Boh Lim, Usman Mahmood, Michael Cebisch, Tamas Paul, Assen S Kirov, Cesar Della Biancia, Sean Berry, Xiang Li, Jean Moran, Laura I Cerviño","doi":"10.1088/2057-1976/ae6273","DOIUrl":"10.1088/2057-1976/ae6273","url":null,"abstract":"<p><p><i>Objective.</i> Standardizing clinical CT images enables consistent analysis of image data for target delineation, radiomics, and machine learning in personalized medicine. The process of managing scan protocols across the various CT scanners used for radiotherapy simulation is underexplored in the literature. This study uses noise evaluation and prediction models to harmonize CT protocols across scanner models and manufacturers, ensuring reliable data for radiotherapy planning. <i>Approach.</i> A global noise index (GNI) was calculated from 1581 clinical CT exams obtained on five scanners (three from vendor <i>P</i> and two from vendor <i>S</i>: <i>S</i><sub>p</sub> and <i>S</i><sub>c</sub>). Exams were categorized by anatomical site. GNI was assessed (I) within the same model, (II) between models from the same manufacturer, and (III) across manufacturers. One-way ANOVA (I, III) and Student <i>t</i>-tests (II) evaluated significance (<i>p</i> < 0.05). Predictive models were created and validated with 90 further exams, establishing a reference GNI (GNI<sub>ref</sub>) for future optimization. <i>Results.</i> GNI showed minor variations among the <i>P</i>-type scanners and between the <i>S</i><sub>p</sub> and <i>S</i><sub>c</sub> scanners, but <i>S</i> scanners differed from <i>P</i>. Predictive model error ranged from 0.8 to 1.5 Hounsfield units (HU). GNI differences between <i>S</i> and <i>P</i> scanners were <1 HU for head, neck, and paraspinal protocols, but <i>S</i> scanners had 1.5-2 HU higher GNI for the abdomen, pelvis, breast, and lungs. <i>Conclusion.</i> Scanners of the same model show slight variation; minor noise differences exist between manufacturers. Predictive modeling can estimate CT noise and support protocol optimization. The reference GNI of an anatomical site can be derived from sufficient CT exams, with or without the predictive model; a 1-2 HU difference in GNI<sub>ref</sub> is achievable if the protocol is properly translated.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147760766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
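A global noise index of the kind used in the abstract above summarizes the distribution of local standard deviations across an image. The sketch below is an illustrative stand-in, not the authors' implementation: it uses a plain sliding window and the median of the local SDs, whereas published GNI methods typically take the mode of the local-SD histogram restricted to soft-tissue voxels.

```python
# Hedged sketch of a GNI-style noise measurement on a 2D image
# (nested lists of HU values); windowed local SD, summarized by median.
import statistics

def local_sds(image, r=1):
    """Population SD of each (2r+1)x(2r+1) window fully inside the image."""
    h, w = len(image), len(image[0])
    sds = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = [image[j][i]
                     for j in range(y - r, y + r + 1)
                     for i in range(x - r, x + r + 1)]
            sds.append(statistics.pstdev(patch))
    return sds

def noise_index(image):
    return statistics.median(local_sds(image))

flat = [[100.0] * 5 for _ in range(5)]  # uniform phantom: zero noise
print(noise_index(flat))                # 0.0
```

On real exams the window radius, tissue masking, and the choice of mode versus median all shift the index, which is exactly why the study calibrates a reference GNI per anatomical site.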
{"title":"SPMFE-UNet: shape perception and multi-scale features enhancement UNet for robust abdominal organ and skin lesion segmentation.","authors":"Jiuming Bao, Xiang Zhang, Xinyi Wang, Tao Shen, Shunfang Wang","doi":"10.1088/2057-1976/ae63d7","DOIUrl":"https://doi.org/10.1088/2057-1976/ae63d7","url":null,"abstract":"<p><p>Convolutional neural networks demonstrate strong performance in medical image segmentation but face clinically significant challenges due to the morphological diversity of anatomical targets, including substantial variations in shape, scale, and position. To overcome these limitations, we propose the shape perception and multi-scale features enhancement UNet (SPMFE-UNet), a novel architecture designed to jointly learn discriminative geometric features (shapes and scales) for robust target perception. The framework introduces two synergistic core modules to address the challenges of anatomical shape and scale variability: a shape perception module (SPM) that employs a dynamic gating mechanism to adaptively sharpen crucial contour features and suppress irrelevant background interference, and a multi-scale features enhancement module (MFEM) that leverages a parallel multi-branch convolutional architecture with varied receptive fields to capture and fuse hierarchical patterns, from local textures to global semantics. These co-optimized modules form an integrated feature learning pipeline in which the SPM purifies shape-related features and the MFEM enriches them with contextual information, enabling joint geometric perception for robust and accurate segmentation across heterogeneous clinical imaging scenarios. Experiments demonstrate competitive performance across three datasets: on Synapse, our method achieves a Dice score of 84.67% (surpassing CCViM by 2.02%) and an HD95 of 16.37 mm; on ISIC-2017, it attains a Dice of 92.36% (outperforming EMCAD by 1.15%) and a mean Intersection over Union (mIoU) of 86.93%; and on ISIC-2018, it reaches a Dice of 90.81% (exceeding CCViM by 0.75%) and an mIoU of 84.31%. Our method effectively mitigates mis-segmentation artifacts stemming from scale mismatches and shape irregularities, ultimately delivering superior robustness and accuracy in complex clinical imaging scenarios.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":"12 3","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147833070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Brain tumors classification using electrical bioimpedance spectroscopy based on a multi-scale feature extraction network with frequency band attention mechanism.","authors":"Jing Guo, Yuqin Zhong, Jiaxin Lu, Xiaobing Jiang, Qinglin Zheng, Zhuoqi Cheng, Depei Li","doi":"10.1088/2057-1976/ae5f9e","DOIUrl":"10.1088/2057-1976/ae5f9e","url":null,"abstract":"<p><p>Electrical bioimpedance (EBI) measurement provides insights into the biophysical properties of tissues, offering valuable information for tumor diagnosis and classification. Deep learning has demonstrated distinct advantages in analyzing complex biomedical data. However, its application in the rapid diagnosis of brain tumors has not been fully explored. In this study, 52 brain tumor samples were collected for EBI measurement. A deep learning framework that integrates multi-scale (MS) impedance feature extraction with frequency band attention was developed for the analysis of bioimpedance spectra (1-349 kHz) and automatic tumor classification. The model used parallel convolutional kernels (sizes 1, 3, 5, 7, 9) to capture local and global features, alongside an attention module to prioritize diagnostic frequency bands. Model performance was evaluated using precision, sensitivity, specificity, and <i>F</i>1-score. Significant differences in impedance values were observed among gliomas, meningiomas, and metastases. The proposed model exhibits high sensitivity and precision in tumor classification tasks, achieving <i>F</i>1-scores of 91.54% (gliomas vs meningiomas vs metastases), 99.61% (glioma vs metastasis), 93.12% (lower-grade gliomas vs glioblastomas), and 98.75% (1p/19q codeleted vs non-codeleted gliomas), with significant conductivity differences (<i>p</i> < 0.05) between tumor types. In summary, the proposed framework, which integrates MS features and adaptive frequency band attention, improves the performance of EBI-based tumor classification and shows promise as an accurate intraoperative tool for the rapid diagnosis of brain tumors.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147687928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
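The parallel-kernel idea described in that abstract, the same spectrum filtered at several widths so the classifier sees both fine and broad structure, can be sketched minimally. This is an illustration, not the published network: averaging kernels stand in for learned convolutional weights, and kernel sizes (1, 3, 5) are a subset of the paper's (1, 3, 5, 7, 9).

```python
# Hedged sketch of parallel multi-scale 1D convolution over a spectrum:
# each branch applies a different-width kernel ("valid" convolution),
# and the per-scale feature maps are returned side by side.

def conv1d(signal, kernel):
    """Valid 1D convolution of a list signal with a list kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multi_scale(signal, sizes=(1, 3, 5)):
    """One feature map per kernel width; averaging kernels as stand-ins."""
    feats = []
    for k in sizes:
        kernel = [1.0 / k] * k
        feats.append(conv1d(signal, kernel))
    return feats

spectrum = [5.0, 4.0, 6.0, 5.0, 7.0, 6.0, 8.0]  # toy impedance magnitudes
maps = multi_scale(spectrum)
print([len(m) for m in maps])                    # [7, 5, 3]
```

In the real model the branch outputs feed a frequency-band attention module that reweights diagnostic bands before classification.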
{"title":"Enhanced ResU-Net for brain tumor segmentation using EfficientNetB0, channel attention, and ASPP.","authors":"Majid Behzadpour, Ebrahim Azizi, Bengie L Ortiz, Kai Wu","doi":"10.1088/2057-1976/ae6459","DOIUrl":"10.1088/2057-1976/ae6459","url":null,"abstract":"<p><p>Accurate and efficient segmentation of brain tumors is critical for diagnosis, treatment planning, and monitoring in clinical practice. In this study, we present an enhanced ResU-Net architecture for automatic brain tumor segmentation, integrating an EfficientNetB0 encoder, a channel attention mechanism, and an atrous spatial pyramid pooling (ASPP) module. The EfficientNetB0 encoder leverages pre-trained features to improve feature extraction efficiency, while the channel attention mechanism enhances the model's focus on tumor-relevant features. ASPP enables multi-scale contextual learning, which is crucial for handling tumors of varying sizes and shapes. The proposed model was evaluated on two benchmark datasets: The Cancer Genome Atlas Low Grade Glioma and brain tumor segmentation (BraTS-2020). Experimental results demonstrate that our method consistently outperforms the baseline ResU-Net and its EfficientNet variant, achieving Dice similarity coefficients of 0.903 and 0.851, and HD95 scores of 9.43 and 3.54, for the whole tumor and tumor core (TC) regions on the BraTS-2020 dataset, respectively. Compared to state-of-the-art methods, our approach shows competitive performance, particularly in whole tumor and TC segmentation. These results indicate that combining a powerful encoder with attention mechanisms and ASPP can significantly enhance brain tumor segmentation performance. The proposed approach holds promise for further optimization and application in other medical image segmentation tasks.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13139756/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147760814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SAM2HIPT: a hybrid deep learning framework integrating SAM2 and HIPT with joint loss optimization for immunohistochemical cell nucleus segmentation.","authors":"Yu Yao, Simin Li, Yangsheng Hu, Yuanchao Xue, Jie Huang, Jianfeng He, Siming Li","doi":"10.1088/2057-1976/ae63d6","DOIUrl":"10.1088/2057-1976/ae63d6","url":null,"abstract":"<p><p>Nucleus segmentation in immunohistochemistry images plays a critical role in cancer diagnosis and treatment assessment. However, existing methods remain limited in segmentation accuracy and boundary delineation due to staining heterogeneity, densely packed cell distributions, and complex background interference. To address these challenges, this paper proposes a two-stage nucleus segmentation framework, termed SAM2HIPT. In the first stage, the pre-trained Segment Anything Model 2 (SAM2) is employed to generate initial segmentation predictions for input images: the image encoder is kept frozen to preserve the pre-trained visual representation capacity, while the mask decoder is fine-tuned to adapt to the characteristics of the pathological image domain. Local texture, morphological, and boundary information is extracted through visual feature encoding to produce initial nucleus segmentation masks and spatial prior representations. In the second stage, the Hierarchical Image Pyramid Transformer (HIPT) is introduced to refine the initial segmentation results, performing multi-scale, multi-level representation and fusion of morphological, textural, and spatial structural information through a hierarchical vision Transformer architecture, thereby enhancing nuclear structural representation and boundary consistency. To enable collaborative optimization across both stages, a joint loss function is designed to impose unified constraints on segmentation accuracy and feature representation. Evaluated on two public histopathological benchmark datasets, BCData and DeepLIIF, the proposed method achieves Dice coefficients of 0.92 and 0.91, respectively, and HD95 boundary errors of 1.05 and 1.10 pixels, demonstrating superior segmentation performance and robustness over multiple state-of-the-art baseline methods.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147760943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
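Several of the abstracts above report Dice coefficients. The metric itself reduces to a simple overlap ratio between a predicted and a reference binary mask; a minimal reference implementation on flattened 0/1 masks (the masks shown are toy data, not from any of the papers):

```python
# Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|) for binary masks.
# Two empty masks are treated as a perfect match (score 1.0), a common
# convention, though implementations differ on this edge case.

def dice(pred, truth):
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 1, 0, 0, 0]   # toy predicted mask, flattened
truth = [0, 1, 1, 1, 0, 0]   # toy reference mask, flattened
print(dice(pred, truth))     # 2*2 / (3+3) = 0.666...
```

HD95, the other metric quoted, is a boundary-distance measure (the 95th percentile of surface distances) and needs mask geometry rather than flat overlap, so it is omitted here.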