Reconstruction of local three-dimensional temperature field of tumor cells with low-toxic nanoscale quantum-dot thermometer and cepstrum spatial localization algorithm
Jun Yang, Lingyu Huang, HanLiang Du, Lei Zhang, Ben Q Li, Mutian Xu
Biomedical Physics & Engineering Express, published 2025-01-22. DOI: 10.1088/2057-1976/ada9ee

Abstract: The optimal method for three-dimensional thermal imaging within cells involves collecting intracellular temperature responses while simultaneously obtaining the corresponding 3D positional information. Current temperature measurement techniques based on the photothermal properties of quantum dots face several limitations, including high cytotoxicity and low fluorescence quantum yields; these issues disturb the normal metabolic processes of tumor cells. This study synthesizes a low-toxicity, cell-membrane-targeted quantum dot temperature sensor by optimizing the synthesis of CdTe/CdS/ZnS core-shell quantum dots. Compared to CdTe-targeted quantum dot temperature sensors, the cytotoxicity of the CdTe/CdS/ZnS-targeted sensors is reduced by 40.79%. Additionally, a novel cepstrum-based spatial localization algorithm is proposed to rapidly compute the three-dimensional positions of densely distributed quantum dot temperature sensors. Ultimately, both targeted and non-targeted CdTe/CdS/ZnS quantum dot temperature sensors were used simultaneously to label the internal and external regions of human osteosarcoma cells and obtain temperature data at these labeled positions. Combined with the cepstrum-based spatial localization algorithm, the spatial coordinates of the sensors were obtained, and three-dimensional temperature field reconstruction of three local regions was achieved within a 12 μm axial range in living cells. The method described in this paper can be widely applied to the quantitative study of intracellular thermal responses.
{"title":"GradeDiff-IM: an ensembles model-based grade classification of breast cancer.","authors":"Sweta Manna, Sujoy Mistry, Keshav Dahal","doi":"10.1088/2057-1976/ada8ae","DOIUrl":"10.1088/2057-1976/ada8ae","url":null,"abstract":"<p><p>Cancer grade classification is a challenging task identified from the cell structure of healthy and abnormal tissues. The practitioners learns about the malignant cell through the grading and plans the treatment strategy accordingly. A major portion of researchers used DL models for grade classification. However, the behavior of DL models is hidden type, it is unknown which features contribute to the accuracy and how the features are chosen for grading. To address the issue the study proposes a Grade Differentiation Integrated Model (GradeDiff-IM) to classify the grades G1, G2, and G3. In GradeDiff-IM, different ML models, are used for grade classification from clinical and pathological reports. The biological-significant features with ranking technique prioritize influential features are used to identify grades G. Subsequently, histopathological images are used by DL models for grade classification and compared with ML models. Instead of employing a single ML model, the GradeDiff-IM model uses the stack-ensembled approach to improve the grade G classification performance. The maximum accuracy is attained by stacking G1-98.2, G2-97.6, and G3-97.5. The proposed study shows that the ML ensemble model is more accurate than the DL models. As a result, the proposed model achieved higher accuracy for G by implementing the stacking technique than the other state-of-the-art models.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142962164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DAU-Net: a novel U-Net with dual attention for retinal vessel segmentation
Muwei Jian, Wenjing Xu, ChangQun Nie, Shuo Li, Songwen Yang, Xiaoguang Li
Biomedical Physics & Engineering Express, vol. 11, no. 2, published 2025-01-22. DOI: 10.1088/2057-1976/ada9f0

Abstract: In fundus images, precisely segmenting retinal blood vessels is important for diagnosing eye-related conditions such as diabetic retinopathy and hypertensive retinopathy. In this work, we propose an enhanced U-shaped network with dual attention, named DAU-Net, divided into encoder and decoder parts. We replace the traditional convolutional layers with ConvNeXt blocks and SnakeConv blocks to strengthen recognition of different vessel forms while keeping the model lightweight. Additionally, we design two efficient attention modules: Local-Global Attention (LGA) and Cross-Fusion Attention (CFA). Specifically, LGA applies attention to the features extracted by the encoder to accentuate vessel-related characteristics while suppressing irrelevant background information; CFA addresses potential information loss during feature extraction by globally modeling pixel interactions between encoder and decoder features. Comprehensive experiments on the public DRIVE, CHASE_DB1, and STARE datasets demonstrate that DAU-Net obtains excellent segmentation results on all three: an AUC of 0.9818, ACC of 0.8299, and F1 score of 0.9585 on DRIVE; 0.9894, 0.8499, and 0.9700 on CHASE_DB1; and 0.9908, 0.8620, and 0.9712 on STARE, respectively. These results demonstrate the effectiveness of DAU-Net in retinal vessel segmentation and highlight its potential for practical clinical use.
Development of a machine learning tool to predict deep inspiration breath hold requirement for locoregional right-sided breast radiation therapy patients
Fletcher Barrett, Sarah Quirk, Kailyn Stenhouse, Karen Long, Michael Roumeliotis, Sangjune Lee, Roberto Souza, Philip McGeachy
Biomedical Physics & Engineering Express, published 2025-01-22. DOI: 10.1088/2057-1976/ad9b30

Abstract: Background and purpose. This study presents machine learning (ML) models that predict if deep inspiration breath hold (DIBH) is needed based on lung dose in right-sided breast cancer patients during the initial computed tomography (CT) appointment. Materials and methods. Anatomic distances were extracted from a single-institution dataset of free breathing (FB) CT scans from locoregional right-sided breast cancer patients. Models were developed using combinations of anatomic distances and ML classification algorithms (gradient boosting, k-nearest neighbors, logistic regression, random forest, and support vector machine) and optimized over 100 iterations using stratified 5-fold cross-validation. Models were grouped by the number of anatomic distances used during development; those with the highest validation accuracy were selected as final models. Final models were compared based on their predictive ability, measurement collection efficiency, and robustness to simulated user error during measurement collection. Results. This retrospective study included 238 patients treated between 2016 and 2021. Model development ended once eight anatomic distances were included and the validation accuracy plateaued. The best performing model used logistic regression with four anatomic distances, achieving 80.5% average testing accuracy with minimal false negatives and positives (<27%). The anatomic distances required for prediction were collected within 3 min and were robust to simulated user error during measurement collection, changing accuracy by <5%. Conclusion. Our logistic regression model using four anatomic distances provided the best balance between efficiency, robustness, and ability to predict if DIBH was needed for locoregional right-sided breast cancer patients.
Nyquist ghost elimination for diffusion MRI by dual-polarity readout at low b-values
Oscar Jalnefjord, Nicolas Geades, Guillaume Gilbert, Isabella M Björkman-Burtscher, Maria Ljungberg
Biomedical Physics & Engineering Express, published 2025-01-21. DOI: 10.1088/2057-1976/ada8b0

Abstract: Dual-polarity readout is a simple and robust way to mitigate Nyquist ghosting in diffusion-weighted echo-planar imaging but doubles scan time. We propose how dual-polarity readout can be implemented with little or no increase in scan time by exploiting an observed b-value dependence and signal averaging. The b-value dependence was confirmed in healthy volunteers: ghosting was distinct at low b-values but of negligible magnitude at b = 1000 s/mm². The usefulness of the suggested strategy was exemplified with a scan using tensor-valued diffusion encoding for estimation of parameter maps of mean diffusivity and anisotropic and isotropic mean kurtosis, showing that ghosting propagated into all three parameter maps unless dual-polarity readout was applied. The results thus imply that extending the use of dual-polarity readout to low non-zero b-values provides effective ghost elimination and can be used without increased scan time for any diffusion MRI scan containing signal averaging at low b-values.
Systematic application of saliency maps to explain the decisions of convolutional neural networks for glaucoma diagnosis based on disc and cup geometry
Francisco Fumero, Jose Sigut, José Estévez, Tinguaro Díaz-Alemán
Biomedical Physics & Engineering Express, published 2025-01-21. DOI: 10.1088/2057-1976/ada8ad

Abstract: This paper systematically evaluates saliency methods as explainability tools for convolutional neural networks trained to diagnose glaucoma using simplified eye fundus images that contain only disc and cup outlines. These simplified images, a methodological novelty, were used to relate features highlighted in the saliency maps to the geometrical clues that experts consider in glaucoma diagnosis. Despite their simplicity, these images retained sufficient information for accurate classification, with balanced accuracies ranging from 0.8331 to 0.8890, compared to 0.8090 to 0.9203 for networks trained on the original images. The study used a dataset of 606 images, along with the RIM-ONE DL and REFUGE datasets, and explored nine saliency methods. A discretization algorithm was applied to reduce noise and compute normalized attribution values for standard eye fundus sectors. Consistent with other medical imaging studies, significant variability was found in the attribution maps, influenced by the method, model, or architecture, and often deviating from the typical sectors experts examine. However, globally, the results were relatively stable, with a strong correlation of 0.9289 (p < 0.001) between relevant sectors in our dataset and RIM-ONE DL, and 0.7806 (p < 0.001) for REFUGE. The findings suggest caution when using saliency methods in critical fields like medicine. These methods may be more suitable for broad image relevance interpretation rather than assessing individual cases, where results are highly sensitive to methodological choices. Moreover, the regions identified by the networks do not consistently align with established medical criteria for disease severity.
Automatic segmentation of MRI images for brain radiotherapy planning using deep ensemble learning
S A Yoganathan, Tarraf Torfeh, Satheesh Paloor, Rabih Hammoud, Noora Al-Hammadi, Rui Zhang
Biomedical Physics & Engineering Express, vol. 11, no. 2, published 2025-01-17. DOI: 10.1088/2057-1976/ada6ba

Abstract: Background and purpose. This study aimed to develop and evaluate an efficient method to automatically segment T1- and T2-weighted brain magnetic resonance imaging (MRI) images. We specifically compared the segmentation performance of individual convolutional neural network (CNN) models against an ensemble approach to advance the accuracy of MRI-guided radiotherapy (RT) planning. Materials and methods. The evaluation was conducted on a private clinical dataset and a publicly available dataset (HaN-Seg). Anonymized MRI data from 55 brain cancer patients, including T1-weighted, T1-weighted with contrast, and T2-weighted images, were used in the clinical dataset. We employed an ensemble deep learning (EDL) strategy that integrated five independently trained 2D neural networks, each tailored for precise segmentation of tumors and organs at risk (OARs) in the MRI scans. Class probabilities were obtained by averaging the final-layer activations (softmax outputs) from the five networks using a weighted-average method and were then converted into discrete labels. Segmentation performance was evaluated using the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95). The EDL model was also tested on the HaN-Seg public dataset for comparison. Results. The EDL model demonstrated superior segmentation performance on both the clinical and public datasets. For the clinical dataset, the ensemble approach achieved an average DSC of 0.7 ± 0.2 and HD95 of 4.5 ± 2.5 mm across all segmentations, significantly outperforming the individual networks, which yielded DSC values ≤0.6 and HD95 values ≥14 mm. Similar improvements were observed on the HaN-Seg public dataset. Conclusions. Our study shows that the EDL model consistently outperforms individual CNN networks on both clinical and public datasets, demonstrating the potential of ensemble learning to enhance segmentation accuracy. These findings underscore the value of the EDL approach for clinical applications, particularly in MRI-guided RT planning.
{"title":"Enhanced AMD detection in OCT images using GLCM texture features with Machine Learning and CNN methods.","authors":"Loganathan R, Latha S","doi":"10.1088/2057-1976/ada6bc","DOIUrl":"10.1088/2057-1976/ada6bc","url":null,"abstract":"<p><p>Global blindness is substantially influenced by age-related macular degeneration (AMD). It significantly shortens people's lives and severely impairs their visual acuity. AMD is becoming more common, requiring improved diagnostic and prognostic methods. Treatment efficacy and patient survival rates stand to benefit greatly from these upgrades. To improve AMD diagnosis in preprocessed retinal images, this study uses Grey Level Co-occurrence Matrix (GLCM) features for texture analysis. The selected GLCM features include contrast and dissimilarity. Notably, grayscale pixel values were also integrated into the analysis. Key factors such as contrast, correlation, energy, and homogeneity were identified as the primary focuses of the study. Various supervised machine learning (ML) and CNN techniques were employed on Optical Coherence Tomography (OCT) image datasets. The impact of feature selection on model performance is evaluated by comparing all GLCM features, selected GLCM features, and grayscale pixel features. Models using GSF features showed low accuracy, with OCTID at 23% and Kermany at 54% for BC, and 23% and 53% for CNN. In contrast, GLCM features achieved 98% for OCTID and 73% for Kermany in RF, and 83% and 77% in CNN. SFGLCM features performed the best, achieving 98% for OCTID across both RF and CNN, and 77% for Kermany. Overall, SFGLCM and GLCM features outperformed GSF, improving accuracy, generalization, and reducing overfitting for AMD detection. The Python-based research demonstrates ML's potential in ophthalmology to enhance patient outcomes.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Novel approach for quality control testing of medical displays using deep learning technology.","authors":"Sho Maruyama, Fumiya Mizutani, Haruyuki Watanabe","doi":"10.1088/2057-1976/ada6bd","DOIUrl":"10.1088/2057-1976/ada6bd","url":null,"abstract":"<p><p><i>Objectives:</i>In digital image diagnosis using medical displays, it is crucial to rigorously manage display devices to ensure appropriate image quality and diagnostic safety. The aim of this study was to develop a model for the efficient quality control (QC) of medical displays, specifically addressing the measurement items of contrast response and maximum luminance as part of constancy testing, and to evaluate its performance. In addition, the study focused on whether these tasks could be addressed using a multitasking strategy.<i>Methods:</i>The model used in this study was constructed by fine-tuning a pretrained model and expanding it to a multioutput configuration that could perform both contrast response classification and maximum luminance regression. QC images displayed on a medical display were captured using a smartphone, and these images served as the input for the model. The performance was evaluated using the area under the receiver operating characteristic curve (AUC) for the classification task. For the regression task, correlation coefficients and Bland-Altman analysis were applied. We investigated the impact of different architectures and verified the performance of multi-task models against single-task models as a baseline.<i>Results:</i>Overall, the classification task achieved a high AUC of approximately 0.9. The correlation coefficients for the regression tasks ranged between 0.6 and 0.7 on average. Although the model tended to underestimate the maximum luminance values, the error margin was consistently within 5% for all conditions.<i>Conclusion:</i>These results demonstrate the feasibility of implementing an efficient QC system for medical displays and the usefulness of a multitask-based method. Thus, this study provides valuable insights into the potential to reduce the workload associated with medical-device management the development of QC systems for medical devices, highlighting the importance of future efforts to improve their accuracy and applicability.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Noise reduction in abdominal acoustic recordings of maternal placental murmurs
Dagbjört Helga Eiríksdóttir, Gry Grønborg Hvass, Henrik Zimmermann, Johannes Jan Struijk, Samuel Emil Schmidt
Biomedical Physics & Engineering Express, published 2025-01-15. DOI: 10.1088/2057-1976/ada6bb

Abstract: Fetal phonocardiography is a well-known auscultation technique for evaluating fetal health. However, murmurs synchronous with the maternal heartbeat can often be heard while listening to fetal heart sounds. Maternal placental murmurs (MPMs) could be used to detect maternal cardiovascular and placental abnormalities, but the recorded MPMs are often contaminated by ambient interference and noise. Objective. The aim of this study was to compare noise reduction methods for the recorded MPMs. Approach. Four approaches were compared: (1) bandpass filtering (BPF); (2) multichannel noise reduction (MCh) using a Wiener filter (WF), least mean squares, or independent component analysis; (3) BPF combined with wavelet transient reduction (WTR); and (4) MCh combined with WTR. The methods were tested on signals recorded with two microphone units placed on the abdomen of pregnant women, with an electrocardiogram (ECG) recorded simultaneously. Performance was evaluated using coherence and the heart cycle duration error (HCD error) relative to the ECG. Results. The mean absolute HCD error was 32.7 ms for BPF, and all other methods were significantly lower (p < 0.05). The lowest errors were obtained for WTR-WF, where the HCD error ranged from 16.68 to 17.72 ms across seven different filter orders. All methods had significantly different coherence measures compared with BPF (p < 0.05); the lowest coherence was reached with WTR-WF (filter order 640), where the mean value decreased from 0.50 for BPF to 0.03. Significance. These results show how noise reduction techniques such as the WF combined with wavelet denoising can greatly enhance the quality of MPM recordings.