{"title":"HiEndo: harnessing large-scale data for generating high-resolution laparoscopy videos under a two-stage framework.","authors":"Zhao Wang, Yeqian Zhang, Jiayi Gu, Yueyao Chen, Yonghao Long, Xiang Xia, Puhua Zhang, Chunchao Zhu, Zerui Wang, Qi Dou, Zheng Wang, Zizhen Zhang","doi":"10.1080/24699322.2025.2536643","DOIUrl":"https://doi.org/10.1080/24699322.2025.2536643","url":null,"abstract":"<p><p>Recent success in generative AI has demonstrated great potential in various medical scenarios. However, how to generate realistic and high-fidelity gastrointestinal laparoscopy videos still lacks exploration. A recent work, Endora, proposes a basic generation model for a gastrointestinal laparoscopy scenario, producing low-resolution laparoscopy videos, which can not meet the real needs in robotic surgery. Regarding this issue, we propose an innovative two-stage video generation architecture HiEndo for generating high-resolution gastrointestinal laparoscopy videos with high fidelity. In the first stage, we build a diffusion transformer for generating a low-resolution laparoscopy video upon the basic capability of Endora as an initial start. In the second stage, we further design a super resolution module to improve the resolution of initial video and refine the fine-grained details. With these two stages, we could obtain high-resolution realistic laparoscopy videos with high fidelity, which can meet the real-world clinical usage. We also collect a large-scale gastrointestinal laparoscopy video dataset with 61,270 video clips for training and validation of our proposed method. Extensive experimental results have demonstrate the effectiveness of our proposed framework. 
For example, our model achieves improvements of 15.1% in Fréchet Video Distance and 3.7% in F1 score compared with the previous state-of-the-art method.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2536643"},"PeriodicalIF":1.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144715233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer Assisted Surgery. Pub Date: 2025-12-01. Epub Date: 2025-01-22. DOI: 10.1080/24699322.2025.2456303
Chen Yang, Lei Chen, Xiangyu Xie, Changping Wu, Qianyun Wang
{"title":"Three-dimensional (3D)-printed custom-made titanium ribs for chest wall reconstruction post-desmoid fibromatosis resection.","authors":"Chen Yang, Lei Chen, Xiangyu Xie, Changping Wu, Qianyun Wang","doi":"10.1080/24699322.2025.2456303","DOIUrl":"10.1080/24699322.2025.2456303","url":null,"abstract":"<p><p>Desmoid fibromatosis (DF) is a rare low-grade benign myofibroblastic neoplasm that originates from fascia and muscle striae. For giant chest wall DF, surgical resection offer a radical form of treatment and the causing defects usually need repair and reconstruction, which can restore the structural integrity and rigidity of the thoracic cage. The past decade witnessed rapid advances in the application of various prosthetic material in thoracic surgery. However, three-dimensional (3D)-printed custom-made titanium ribs have never been reported for chest wall reconstruction post-DF resection. Here, we report a successful implantation of individualized 3D-printed titanium ribs to repair the chest wall defect in a patient with DF.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2456303"},"PeriodicalIF":1.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143017030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer Assisted Surgery. Pub Date: 2025-12-01. Epub Date: 2025-05-31. DOI: 10.1080/24699322.2025.2508144
Ali Bahari Malayeri, Matthias Seibold, Nicola A Cavalcanti, Jonas Hein, Sascha Jecklin, Lazaros Vlachopoulos, Sandro Fucentese, Sandro Hodel, Philipp Fürnstahl
{"title":"ArthroPhase: a novel dataset and method for phase recognition in arthroscopic video.","authors":"Ali Bahari Malayeri, Matthias Seibold, Nicola A Cavalcanti, Jonas Hein, Sascha Jecklin, Lazaros Vlachopoulos, Sandro Fucentese, Sandro Hodel, Philipp Fürnstahl","doi":"10.1080/24699322.2025.2508144","DOIUrl":"10.1080/24699322.2025.2508144","url":null,"abstract":"<p><p>This study advances surgical phase recognition in arthroscopic procedures, specifically Anterior Cruciate Ligament (ACL) reconstruction, by introducing the first arthroscopy dataset and a novel transformer-based model. We establish a benchmark for arthroscopic surgical phase recognition by leveraging spatio-temporal features to address challenges such as limited field of view, occlusions, and visual distortions. We developed the ACL27 dataset, comprising 27 videos of ACL surgeries, each labeled with surgical phases. Our model employs a transformer-based architecture, utilizing temporal-aware frame-wise feature extraction through ResNet-50 and transformer layers. This approach integrates spatio-temporal features and introduces a Surgical Progress Index (SPI) to quantify surgery progression. The model's performance was evaluated using accuracy, precision, recall, and Jaccard Index on the ACL27 and Cholec80 datasets. The proposed model achieved an overall accuracy of 72.9% on the ACL27 dataset. On the Cholec80 dataset, the model achieved performance comparable to state-of-the-art methods, with an accuracy of 92.4%. The SPI demonstrated an output error of 10.6% and 9.8% on ACL27 and Cholec80 datasets, respectively, indicating reliable surgery progression estimation. This study introduces a significant advancement in surgical phase recognition for arthroscopy, providing a comprehensive dataset and robust transformer-based model. 
The results validate the model's effectiveness and generalizability, highlighting its potential to improve surgical training, real-time assistance, and operational efficiency in orthopedic surgery. The publicly available dataset and code will facilitate future research in this critical field.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2508144"},"PeriodicalIF":1.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144192536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SurgPointTransformer: transformer-based vertebra shape completion using RGB-D imaging.","authors":"Aidana Massalimova, Florentin Liebmann, Sascha Jecklin, Fabio Carrillo, Mazda Farshad, Philipp Fürnstahl","doi":"10.1080/24699322.2025.2511126","DOIUrl":"10.1080/24699322.2025.2511126","url":null,"abstract":"<p><p>State-of-the-art computer- and robot-assisted surgery systems rely on intraoperative imaging technologies such as computed tomography and fluoroscopy to provide detailed 3D visualizations of patient anatomy. However, these methods expose both patients and clinicians to ionizing radiation. This study introduces a radiation-free approach for 3D spine reconstruction using RGB-D data. Inspired by the \"mental map\" surgeons form during procedures, we present SurgPointTransformer, a shape completion method that reconstructs unexposed spinal regions from sparse surface observations. The method begins with a vertebra segmentation step that extracts vertebra-level point clouds for subsequent shape completion. SurgPointTransformer then uses an attention mechanism to learn the relationship between visible surface features and the complete spine structure. The approach is evaluated on an <i>ex vivo</i> dataset comprising nine samples, with CT-derived data used as ground truth. SurgPointTransformer significantly outperforms state-of-the-art baselines, achieving a Chamfer distance of 5.39 mm, an F-score of 0.85, an Earth mover's distance of 11.00 and a signal-to-noise ratio of 22.90 dB. These results demonstrate the potential of our method to reconstruct 3D vertebral shapes without exposing patients to ionizing radiation. 
This work contributes to the advancement of computer-aided and robot-assisted surgery by enhancing system perception and intelligence.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2511126"},"PeriodicalIF":1.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12312754/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144217685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
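The SurgPointTransformer record above reports Chamfer distance and F-score between completed and ground-truth point clouds. As a minimal sketch of how these two standard metrics are computed (plain NumPy on small N x 3 arrays; illustrative only, not the paper's evaluation code, and the threshold `tau` is an assumed parameter):

```python
import numpy as np

def chamfer_and_fscore(pred, gt, tau=1.0):
    """Symmetric Chamfer distance (sum of mean nearest-neighbor distances in
    both directions) and F-score at distance threshold tau for point clouds."""
    # Pairwise Euclidean distances between the two clouds, shape (N, M).
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    d_pred_to_gt = d.min(axis=1)  # nearest ground-truth point per prediction
    d_gt_to_pred = d.min(axis=0)  # nearest prediction per ground-truth point
    chamfer = d_pred_to_gt.mean() + d_gt_to_pred.mean()
    precision = (d_pred_to_gt < tau).mean()  # predictions close to the surface
    recall = (d_gt_to_pred < tau).mean()     # surface covered by predictions
    if precision + recall == 0:
        return chamfer, 0.0
    return chamfer, 2 * precision * recall / (precision + recall)

# Identical clouds give zero Chamfer distance and a perfect F-score.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
chamfer, fscore = chamfer_and_fscore(pts, pts)
```

In practice the threshold and any scale normalization follow the evaluation protocol of the specific benchmark, which the abstract does not detail.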
Computer Assisted Surgery. Pub Date: 2025-12-01. Epub Date: 2025-05-24. DOI: 10.1080/24699322.2025.2509686
Aurélie Comptour, Pauline Chauvet, Anne-Sophie Grémeau, Claire Figuier, Bruno Pereira, Matthieu Rouland, Prasad Samarakoon, Adrien Bartoli, Marie De Antonio, Nicolas Bourdel
{"title":"Retrospective case control study on the evaluation of the impact of augmented reality in gynecological laparoscopy on patients operated for myomectomy or adenomyomectomy.","authors":"Aurélie Comptour, Pauline Chauvet, Anne-Sophie Grémeau, Claire Figuier, Bruno Pereira, Matthieu Rouland, Prasad Samarakoon, Adrien Bartoli, Marie De Antonio, Nicolas Bourdel","doi":"10.1080/24699322.2025.2509686","DOIUrl":"10.1080/24699322.2025.2509686","url":null,"abstract":"<p><p>The objective of this study is to evaluate the safety of using augmented reality (AR) in laparoscopic (adeno)myomectomy, defined as an increase in operating time shorter than 15 min. A total of 17 AR cases underwent laparoscopic myomectomy or adenomyomectomy with the use of AR and 17 controls without AR for the resection of (adeno)myomas. The non-inferiority assumption was defined by an operative overtime not exceeding 15 min, representing 10% of the typical operative time. The 17 AR cases were matched to 17 controls. The criteria used in matching the two groups were the type of lesions, the size and the placement. The mean operative time was 135 ± 39 min for AR cases and 149 ± 62 min for controls. The margin of non-inferiority was expressed as a difference in operative time of 15 min between the case and control groups. The mean difference observed between AR cases and controls was -14 min with 90% CI [-38.3;11.3] and was significantly lower than the non-inferiority margin of 15 min (<i>p</i> = 0.03). This negative time difference means that the operative time is shorter for the AR cases group. Intraoperative data revealed a volume of bleeding ≤200 mL in 82.3% of AR cases and in 75% of controls (<i>p</i> = 0.62). No intra or postoperative complications were reported in the groups. The use of augmented reality in laparoscopic (adeno)myomectomy does not introduce additional constraints for the surgeon. 
It appears to be safe for patients, with no additional adverse events and no significant prolongation of operative time.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2509686"},"PeriodicalIF":1.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144135869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the survival benefits of surgical treatment for pancreatic adenocarcinoma using the DeepSurv neural network model.","authors":"Xin Wang, Wenmao Yan, Jingdong Shi, Shi Cheng, Wei Yu, Hongyi Zhang","doi":"10.1080/24699322.2025.2556334","DOIUrl":"10.1080/24699322.2025.2556334","url":null,"abstract":"<p><p>To develop a DeepSurv model for predicting survival in pancreatic adenocarcinoma patients, evaluating the benefit of surgical versus non-surgical treatment across different stages, including stage IV subcategories. Clinical data were extracted from the SEER database (2000-2020). Patients were randomly divided into a model-building group and an experimental group. The DeepSurv model was trained and hyperparameter-optimized. Simulated paired data were created by switching treatment status. Predicted survival rates were compared using generalized estimating equations. SHAP values analyzed variable importance.The study included 16,068 patients. The final model achieved a C-index of 0.85. Surgical treatment yielded higher survival rates than non-surgical across all stages (p<0.001), though the benefit diminished in advanced stages. For stage IV, surgery improved survival in T1-3 and N0 stages (p<0.001) but not in T4 and N1. SHAP analysis ranked M stage as the most significant predictor of mortality, followed by T stage, overall stage, and surgical status. M1 metastasis was associated with a 14% increased mortality risk, while surgery reduced risk by 11%.Surgery reduces mortality across stages, with declining efficacy in advanced disease. For stage IV patients, surgery is beneficial except for those with T4 or N1 disease. 
Combining DeepSurv with SHAP analysis facilitates individualized prediction of surgical survival benefits.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2556334"},"PeriodicalIF":1.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145014488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
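DeepSurv-style models are trained by minimizing the Cox negative partial log-likelihood. The abstract does not give the loss, so the following is a generic sketch of that objective in plain NumPy (Breslow-style risk sets, no handling of tied event times):

```python
import numpy as np

def cox_neg_partial_log_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood for predicted log-risk scores.

    risk  : predicted log-risk per subject (higher = worse prognosis)
    time  : observed survival or censoring time
    event : 1 if the event (death) was observed, 0 if censored
    """
    order = np.argsort(-time)            # sort subjects by descending time
    risk, event = risk[order], event[order]
    # After sorting, subject i's risk set is subjects 0..i, so a cumulative
    # sum of exp(risk) gives the denominator of each partial-likelihood term.
    log_risk_set = np.log(np.cumsum(np.exp(risk)))
    # Sum terms over observed events only; average for a stable loss scale.
    return -np.sum((risk - log_risk_set) * event) / max(event.sum(), 1)

# Two subjects with equal risk scores, both events observed.
loss = cox_neg_partial_log_likelihood(
    risk=np.array([0.0, 0.0]), time=np.array([2.0, 1.0]), event=np.array([1, 1]))
```

In a DeepSurv setup this quantity is computed on the network's output layer and backpropagated; the paper's exact architecture and hyperparameters are not stated in the abstract.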
Computer Assisted Surgery. Pub Date: 2025-12-01. Epub Date: 2025-09-20. DOI: 10.1080/24699322.2025.2562871
Mehmet Süleyman Abul, Ömer Faruk Sevim
{"title":"Optimizing intraoperative video for surgical training: a comparative study of three recording techniques in hip arthroplasty.","authors":"Mehmet Süleyman Abul, Ömer Faruk Sevim","doi":"10.1080/24699322.2025.2562871","DOIUrl":"https://doi.org/10.1080/24699322.2025.2562871","url":null,"abstract":"<p><p>High-quality intraoperative video documentation is increasingly valued in surgery for its role in surgical evaluation, procedural archiving, and education. However, the comparative advantages of different recording methods have not been thoroughly examined. In this prospective, double-blinded study, 44 experienced orthopedic surgeons evaluated posterior total hip arthroplasty videos recorded using three techniques: a head-mounted camera, a light-handle-mounted camera, and an externally operated camera. All videos were captured by the same surgeon using standardized hardware and settings. Participants assessed video quality and educational value using a structured questionnaire. Data were analyzed using ANOVA and chi-square testing. The light-handle-mounted camera received the highest mean scores across all five evaluation domains, including visual clarity, image stability, and overall quality (mean scores ranging from 6.91 to 7.98). Repeated measures ANOVA confirmed statistically significant differences among the three camera techniques for all five questions (<i>p</i> = 0.022-0.043). Post hoc analysis revealed that the light-handle-mounted camera significantly outperformed the head-mounted system (<i>p</i> < 0.05 for all comparisons), while the external camera also demonstrated superiority over the head-mounted method. Chi-square testing showed a significant difference in educational suitability ratings (Question 6), with the light-handle-mounted system receiving the highest percentage of affirmative responses (79.5%) compared to the head-mounted (50.0%) and external cameras (31.8%) (<i>p</i> < 0.001). 
The light-handle-mounted system offered the most balanced solution, providing stable, high-quality recordings without disrupting sterility or workflow. While head-mounted and external methods have niche applications, their practical limitations reduce their suitability for routine documentation of surgical procedures.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2562871"},"PeriodicalIF":1.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145103213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Three-dimensional image-guided navigation technique for femoral artery puncture.","authors":"Yunmeng Zhang, Shenglin Liu, Qiang Zhang, Qingmin Feng","doi":"10.1080/24699322.2025.2535967","DOIUrl":"10.1080/24699322.2025.2535967","url":null,"abstract":"<p><p>Percutaneous femoral arterial access is a fundamental procedure in minimally invasive vascular interventions. However, inadequate visualization of the femoral artery may lead to inaccurate puncture and complications, with reported incidence rates of 3 to 18%. This study proposes a three-dimensional (3D) image-guided navigation system designed to enhance real-time visualization of the target vessel and puncture site during femoral artery access. This system employed an Iterative Closest Point (ICP)-based point cloud algorithm to achieve spatial registration between image space and patient space. An improved ICP method is implemented to optimize surface point cloud alignment, providing higher efficiency and accuracy compared to conventional approaches. Validation experiments were conducted using a standard model and a human phantom. Registration and navigation accuracy were quantified using fiducial registration error (FRE) for spatial alignment, target registration error (TRE) for navigation accuracy, and distance error for puncture precision. The system achieved a FRE of 0.944 mm. On the standard model, the average distance error was 0.885 mm, and the TRE was 0.915 mm. On the human phantom, the average distance error is 0.967 mm, and the average TRE is 0.981 mm. These results confirm the feasibility and effectiveness of the proposed 3D navigation system in guiding femoral artery puncture. 
All error metrics were within clinically acceptable thresholds, suggesting potential for improved procedural safety and precision in percutaneous vascular interventions.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2535967"},"PeriodicalIF":1.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144735616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
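Each ICP iteration solves a closed-form rigid alignment for the current point correspondences, classically via SVD (the Kabsch step). A minimal sketch of that step, assuming known correspondences; this is the textbook algorithm, not the paper's improved variant, whose details the abstract does not give:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst for N x 3
    corresponding points -- the closed-form step inside each ICP iteration."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # 3 x 3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Recover a known 90-degree rotation about z plus a translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
```

A full ICP loop alternates this step with nearest-neighbor correspondence search until the alignment error converges; improvements like the one described typically target the correspondence search or outlier rejection.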
Computer Assisted Surgery. Pub Date: 2025-12-01. Epub Date: 2025-02-16. DOI: 10.1080/24699322.2025.2466424
Taylor B Winberg, Sheila Wang, James L Howard
{"title":"Imageless optical navigation system is clinically valid for total knee arthroplasty.","authors":"Taylor B Winberg, Sheila Wang, James L Howard","doi":"10.1080/24699322.2025.2466424","DOIUrl":"10.1080/24699322.2025.2466424","url":null,"abstract":"<p><p>Achieving optimal implant position and orientation during total knee arthroplasty (TKA) is a pivotal factor in long-term survival. Computer-assisted navigation (CAN) has been recognized as a trusted technology that improves the accuracy and consistency of femoral and tibial bone cuts. Imageless CAN offers advantages over image-based CAN by reducing cost, radiation exposure, and time. The purpose of this study was to evaluate the accuracy of an imageless optical navigation system for TKA in a clinical setting. Forty-two consecutive patients who underwent primary TKA with CAN were retrospectively reviewed. Femoral and tibial component coronal alignment was assessed <i>via</i> post-operative radiographs by two independent reviewers and compared against coronal alignment angles from the CAN. The primary outcome was the mean absolute difference of femoral and tibial varus/valgus angles between radiograph and intra-operative device measurements. Bland-Altman plots were used to assess agreement between the methods and statistically analyze potential systematic bias. The mean absolute differences between navigation-guided cut measurements and post-operative radiographs were 1.16 ± 1.03° and 1.76 ± 1.38° for femoral and tibial alignment respectively. About 88% of coronal measurements were within ±3°, while 99% were within ±5°. Bland-Altman analysis demonstrated a bias between CAN and radiographic measurements with CAN values averaging 0.52° (95% CI: 0.11°-0.93°) less than their paired radiographic measurements. 
This study demonstrated the ability of an optical imageless navigation system to measure, on average, femoral and tibial coronal cuts to within 2.0° of post-operative radiographic measurements in a clinical setting.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2466424"},"PeriodicalIF":1.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143434411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
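The Bland-Altman analysis above reduces to the mean paired difference (the bias) and its 95% limits of agreement. A minimal sketch of that computation with hypothetical angle data (the numbers below are illustrative, not the study's):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurements:
    bias (mean difference) and 95% limits of agreement (bias +/- 1.96 SD)."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired coronal angles (degrees): navigation vs. radiograph.
nav = np.array([2.0, 1.5, 3.0, 0.5, 2.5])
xray = np.array([2.4, 2.1, 3.3, 1.2, 2.9])
bias, lower, upper = bland_altman(nav, xray)
```

A systematic bias like the study's -0.52° shows up as a bias bounded away from zero, while the width of the limits of agreement reflects random measurement disagreement.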
Computer Assisted Surgery. Pub Date: 2025-12-01. Epub Date: 2025-08-18. DOI: 10.1080/24699322.2025.2546819
Hafsa Moontari Ali, Yiming Xiao, Marta Kersten-Oertel
{"title":"Surgical hyperspectral imaging: a systematic review.","authors":"Hafsa Moontari Ali, Yiming Xiao, Marta Kersten-Oertel","doi":"10.1080/24699322.2025.2546819","DOIUrl":"10.1080/24699322.2025.2546819","url":null,"abstract":"<p><p>Hyperspectral imaging (HSI) is a technique that captures and processes information across a wide spectrum of wavelengths, providing detailed spectral data for each pixel in an image to identify and analyze materials or objects. In the surgical domain, it can provide quantitative and qualitative tissue information without the need of any contrast agent, thereby making it possible to distinguish between different tissue types objectively. In this article, we review the applications of hyperspectral imaging in surgery, focusing on: (1) hardware components and scanning mechanisms of HSI devices, (2) image preprocessing and processing/analysis methods, including classification, segmentation, tissue characterization, and perfusion analysis, and (3) the feasibility of HSI in various surgical procedures, based on human and animal studies. A systematic review of hyperspectral imaging based on PRISMA guideline was conducted using specific keywords: allintitle: hyperspectral AND intraoperative OR intervention OR surgery. After applying predefined inclusion and exclusion criteria, 85 papers from the literature were selected for analysis. Our systematic review shows that HSI has demonstrated significant potential as an intraoperative guidance tool, assisting surgeons during tumor resection by generating detailed tissue density maps. Additionally, HSI can play a role in hemodynamic monitoring, providing perfusion maps to assess blood flow during surgery and detect postoperative complications. 
Despite its promise, challenges such as hardware limitations, real-time processing, and clinical integration remain, highlighting the need for further research and development to advance HSI in surgical applications.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2546819"},"PeriodicalIF":1.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144876920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}