Healthcare Technology Letters: Latest Publications

Adaptive non-invasive ventilation treatment for sleep apnea
IF 2.8
Healthcare Technology Letters Pub Date: 2024-05-26 DOI: 10.1049/htl2.12087
Fleur T. Tehrani, James H. Roum

Abstract: The purpose of this study was to investigate the effectiveness of two non-invasive mechanical ventilation (NIV) modalities to treat sleep apnea: (1) Average Volume Assured Pressure Support (AVAPS) NIV, and (2) Pressure Support (PS) NIV with Continuously Calculated Average Required Ventilation (CCARV). Two detailed (previously developed and tested) simulation models were used to assess the effectiveness of the NIV modalities. One simulated subjects without chronic obstructive pulmonary disease (COPD), and the other simulated patients with COPD. Sleep apnea was simulated in each model (COPD and non-COPD), and the ability of each NIV modality to normalize breathing was measured. In both NIV modalities, a low-level continuous positive airway pressure was used, and a backup respiratory rate was added to the algorithm to minimize the respiratory work rate. Both modalities helped normalize breathing in response to an episode of sleep apnea within about 5 min, during which time blood gases remained within safe limits. AVAPS NIV and PS NIV with CCARV have potential value for the treatment of sleep apnea; clinical evaluations are needed to fully assess their effectiveness.

Healthcare Technology Letters, vol. 11, no. 5, pp. 283-288. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11442129/pdf/
Citations: 0
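The volume-assurance idea behind AVAPS lends itself to a compact illustration: the device adjusts the pressure-support level between a minimum and a maximum so that the delivered average tidal volume converges on a target. The sketch below is a minimal Python rendering of that general principle only; the target volume, gain, and pressure limits are assumed placeholder values, not parameters from the paper's simulation models.

```python
# Minimal sketch of an AVAPS-style volume-targeting loop. This reflects the
# general AVAPS principle, not the authors' simulator; all numbers are
# assumed placeholders.

def avaps_step(pressure_support: float, avg_tidal_volume: float,
               target_volume: float = 0.5,   # litres (assumed target)
               gain: float = 1.0,            # cmH2O adjustment per step (assumed)
               ps_min: float = 4.0, ps_max: float = 20.0) -> float:
    """Nudge pressure support toward the level that achieves the target
    average tidal volume, clamped to an assumed safe range."""
    if avg_tidal_volume < target_volume:
        pressure_support += gain
    elif avg_tidal_volume > target_volume:
        pressure_support -= gain
    return max(ps_min, min(ps_max, pressure_support))

# Example: delivered volume is below target, so support rises by one step.
print(avaps_step(pressure_support=8.0, avg_tidal_volume=0.42))  # -> 9.0
```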
Guest Editorial: Big data and artificial intelligence in healthcare
IF 2.8
Healthcare Technology Letters Pub Date: 2024-05-26 DOI: 10.1049/htl2.12086
Tim Hulsen, Francesca Manni

Abstract: Big data refers to large datasets that can be mined and analysed using data science, statistics or machine learning (ML), often without defining a hypothesis upfront [1]. Artificial intelligence (AI) refers to the simulation of human intelligence processes by machines, which can use these big data to find patterns, make predictions and even generate new data or information [2]. Big data has been used to improve healthcare [3] and medicine [1] for many years already, by enabling researchers and medical professionals to draw conclusions from large and rich datasets rather than from clinical trials based on a small number of patients. More recently, AI has been used in healthcare as well, for example by finding and classifying tumours in magnetic resonance images (MRI) [4] or by improving and automating the clinical workflow [5]. This uptake of AI in healthcare is still increasing as new models and techniques are introduced. For example, the creation of large language models (LLMs) such as ChatGPT enables the use of generative AI (GenAI) in healthcare [6]. GenAI can be used to create synthetic data (where the original data has privacy issues), generate radiology or pathology reports, or create chatbots to interact with the patient. The expectation is that the application of AI in healthcare will become even more important, as hospitals suffer from personnel shortages and increasing numbers of elderly people needing care. The rise of AI in healthcare also comes with challenges. Especially in healthcare, we want to know what the AI algorithm is doing; it should not be a 'black box'. Explainable AI (XAI) can help the medical professional (or even the patient) understand why the AI algorithm makes a certain decision, increasing trust in the result or prediction [7]. It is also important that AI works according to privacy laws, is free from bias, and does not produce toxic language (in the case of a medical chatbot). Responsible AI (RAI) tries to prevent these issues by providing a framework of ethical principles [8]. By embracing the (current and future) technical possibilities AI has to offer, and at the same time making sure that AI is explainable and responsible, we can make sure that hospitals are able to withstand any future challenges. This Special Issue contains six papers, all of which underwent peer review. One paper is about increasing the transparency of machine learning models, one is about cardiac disease risk prediction, and another is about depression detection in Roman Urdu social media posts. The other papers are about autism spectrum disorder detection using facial images, hybrid brain tumour classification of histopathology hyperspectral images, and prediction of the utilization of invasive and non-invasive ventilation throughout the intensive care unit (ICU) duration. Lisboa disc […]

Healthcare Technology Letters, vol. 11, no. 4, pp. 207-209. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294927/pdf/
Citations: 0
Electrical impedance tomography image reconstruction for lung monitoring based on ensemble learning algorithms
IF 2.8
Healthcare Technology Letters Pub Date: 2024-04-30 DOI: 10.1049/htl2.12085
Areen K. Al-Bashir, Duha H. Al-Bataiha, Mariem Hafsa, Mohammad A. Al-Abed, Olfa Kanoun

Abstract: Electrical impedance tomography (EIT) is a promising non-invasive imaging technique that visualizes the electrical conductivity of an anatomic structure based on measured boundary voltages. However, the EIT inverse problem underlying image reconstruction is nonlinear and highly ill-posed. In this work, a simulated dataset that mimics the human thorax was therefore generated, with boundary voltages derived from given conductivity distributions. To overcome the challenges of image reconstruction, an ensemble learning method is proposed. The ensemble combines several convolutional neural network models: a simple convolutional neural network (CNN), AlexNet, AlexNet with a residual block, and a modified AlexNet. The ensemble weights were selected by an averaging technique that gave the best coefficient of determination (R² score). Reconstruction quality is quantitatively evaluated by the root mean square error (RMSE), the coefficient of determination (R² score), and the image correlation coefficient (ICC). The method's best performance is an RMSE of 0.09404, an R² score of 0.926186, and an ICC of 0.95783 using an ensemble model. The proposed method is promising, as it can construct valuable images for clinical EIT applications and measurements compared with previous studies.

Healthcare Technology Letters, vol. 11, no. 5, pp. 271-282. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11442128/pdf/
Citations: 0
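The three image-quality measures quoted in the abstract have standard definitions, sketched below in NumPy. Treating the ICC as the Pearson correlation of the flattened images is an assumption on our part; the paper's exact implementation may differ.

```python
# Standard definitions of the reconstruction metrics named above (RMSE,
# R^2 score, image correlation coefficient). Interpreting ICC as the
# Pearson correlation of the flattened images is an assumption.
import numpy as np

def rmse(true_img: np.ndarray, pred_img: np.ndarray) -> float:
    return float(np.sqrt(np.mean((true_img - pred_img) ** 2)))

def r2_score(true_img: np.ndarray, pred_img: np.ndarray) -> float:
    ss_res = np.sum((true_img - pred_img) ** 2)
    ss_tot = np.sum((true_img - true_img.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def icc(true_img: np.ndarray, pred_img: np.ndarray) -> float:
    # Pearson correlation of the flattened conductivity images.
    return float(np.corrcoef(true_img.ravel(), pred_img.ravel())[0, 1])

# Toy check on a 2x2 "conductivity image".
truth = np.array([[1.0, 0.0], [0.0, 1.0]])
pred = np.array([[0.9, 0.1], [0.1, 0.8]])
print(rmse(truth, pred), r2_score(truth, pred), icc(truth, pred))
```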
Mixed reality guided root canal therapy
IF 2.1
Healthcare Technology Letters Pub Date: 2024-04-02 DOI: 10.1049/htl2.12077
Fangjie Li, Qingying Gao, Nengyu Wang, Nicholas Greene, Tianyu Song, Omid Dianat, Ehsan Azimi

Abstract: Root canal therapy (RCT) is a widely performed procedure in dentistry, with over 25 million individuals undergoing it annually. The procedure addresses inflammation or infection within the root canal system of affected teeth. However, accurately aligning CT scan information with the patient's tooth has posed challenges, leading to errors in tool positioning and potential negative outcomes. To overcome these challenges, a mixed reality application is developed using an optical see-through head-mounted display (OST-HMD). The application incorporates visual cues, an augmented mirror, and dynamically updated multi-view CT slices to address depth perception issues and achieve accurate tooth localization, comprehensive canal exploration, and prevention of perforation during RCT. In a preliminary experimental assessment, significant improvements in procedural accuracy were observed: positional accuracy improved from 1.4 to 0.4 mm (more than a 70% gain) using an optical tracker (NDI) and from 2.8 to 2.4 mm using an HMD, thereby achieving submillimetre accuracy with the NDI tracker. Six participants were enrolled in the user study, which yielded an average displacement on the crown plane of 1.27 ± 0.83 cm, an average depth error of 0.90 ± 0.72 cm, and an average angular deviation of 1.83 ± 0.83°. The error analysis further highlights the impact of HMD spatial localization and head motion on the registration and calibration process. Through seamless integration of CT image information with the patient's tooth, the mixed reality application assists dentists in achieving precise tool placement. This advancement has the potential to elevate the quality of root canal procedures, ensuring better accuracy and enhancing overall treatment outcomes.

Healthcare Technology Letters, vol. 11, no. 2-3, pp. 167-178. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12077
Citations: 0
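For context, the positional and angular errors reported above are conventionally computed as the Euclidean distance between planned and actual tool-tip positions and the angle between planned and actual tool axes. The sketch below uses those standard definitions; the function names and sample vectors are illustrative, not taken from the paper.

```python
# Standard formulas behind positional and angular accuracy figures:
# tip displacement (Euclidean distance) and angular deviation between
# two tool axes. Names and sample vectors are illustrative assumptions.
import numpy as np

def tip_error(planned_tip, actual_tip) -> float:
    return float(np.linalg.norm(np.asarray(planned_tip) - np.asarray(actual_tip)))

def angular_deviation_deg(planned_axis, actual_axis) -> float:
    a = np.asarray(planned_axis, dtype=float)
    b = np.asarray(actual_axis, dtype=float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

print(tip_error([0.0, 0.0, 0.0], [0.3, 0.2, 0.1]))               # ~0.374 (same unit as inputs)
print(angular_deviation_deg([0.0, 0.0, 1.0], [0.05, 0.0, 1.0]))  # ~2.9 degrees
```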
Mixed reality based teleoperation and visualization of surgical robotics
IF 2.1
Healthcare Technology Letters Pub Date: 2024-03-28 DOI: 10.1049/htl2.12079
Letian Ai, Peter Kazanzides, Ehsan Azimi

Abstract: Surgical robotics has revolutionized the field of surgery, facilitating complex procedures in operating rooms. However, current teleoperation systems often rely on bulky consoles, which limit the mobility of surgeons. This restriction reduces surgeons' awareness of the patient during procedures and narrows the range of implementation scenarios. To address these challenges, an alternative solution is proposed: a mixed reality-based teleoperation system. This system leverages hand gestures, head motion tracking, and speech commands to enable the teleoperation of surgical robots. The implementation focuses on the da Vinci research kit (dVRK) and utilizes the capabilities of Microsoft HoloLens 2. The system's effectiveness is evaluated through camera navigation tasks and peg transfer tasks. The results indicate that, in comparison to manipulator-based teleoperation, the system demonstrates comparable viability in endoscope teleoperation. However, it falls short in instrument teleoperation, highlighting the need for further improvements in hand gesture recognition and video display quality.

Healthcare Technology Letters, vol. 11, no. 2-3, pp. 179-188. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12079
Citations: 0
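Hand-based teleoperation of the kind described typically uses an incremental, clutched mapping from tracked hand motion to tool motion with motion scaling. The sketch below shows that generic mapping only; the scale factor and clutch logic are assumptions, not details of the paper's dVRK implementation.

```python
# Generic clutched, scaled teleoperation mapping (an assumed common scheme,
# not the paper's dVRK code).
import numpy as np

def teleop_step(tool_pos, hand_pos, hand_pos_prev, scale=0.3, clutched=True):
    """Move the tool by a scaled copy of the hand's displacement.

    Motion scaling (0.3 here, an assumed value) lets coarse hand motion
    produce fine tool motion; releasing the clutch lets the operator
    reposition the hand without moving the tool.
    """
    if clutched:
        delta = np.asarray(hand_pos, float) - np.asarray(hand_pos_prev, float)
        tool_pos = tool_pos + scale * delta
    return tool_pos

pos = np.zeros(3)
pos = teleop_step(pos, [0.10, 0.0, 0.0], [0.0, 0.0, 0.0])
print(pos)  # [0.03 0. 0.] -> 10 cm of hand motion moves the tool 3 cm
```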
Calibration-free structured-light-based 3D scanning system in laparoscope for robotic surgery
IF 2.1
Healthcare Technology Letters Pub Date: 2024-03-08 DOI: 10.1049/htl2.12083
Ryo Furukawa, Elvis Chen, Ryusuke Sagawa, Shiro Oka, Hiroshi Kawasaki

Abstract: Accurate 3D shape measurement is crucial for surgical support and alignment in robotic surgery systems. Stereo cameras in laparoscopes offer a potential solution; however, their accuracy in stereo image matching diminishes when the target image has few textures. Although stereo matching with deep learning has gained significant attention, supervised learning requires a large dataset of images with depth annotations, which are scarce for laparoscopes. Thus, there is a strong demand for alternative methods for depth reconstruction or annotation for laparoscopes. Active stereo techniques are a promising approach for achieving 3D reconstruction without textures. In this study, a 3D shape reconstruction method using an ultra-small patterned projector attached to a laparoscopic arm is proposed to address these issues. The projector emits structured light with a grid-like pattern that features node-wise modulation for positional encoding. To scan the target object, multiple images are taken while the projector is in motion, and the relative poses of the projector and a camera are auto-calibrated using a differential rendering technique. In the experiment, the proposed method is evaluated by performing 3D reconstruction using images obtained from a surgical robot and comparing the results with a ground-truth shape obtained from X-ray CT.

Healthcare Technology Letters, vol. 11, no. 2-3, pp. 196-205. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12083
Citations: 0
Papers from the 17th Joint Workshop on Augmented Environments for Computer Assisted Interventions at MICCAI 2023: Guest Editors' Foreword
IF 2.1
Healthcare Technology Letters Pub Date: 2024-02-28 DOI: 10.1049/htl2.12082
Cristian A. Linte, Ziv Yaniv, Elvis Chen, Qi Dou, Simon Drouin, Megha Kalia, Marta Kersten-Oertel, Jonathan McLeod, Duygu Sarikaya

Abstract: Welcome to this Special Issue of Wiley's Healthcare Technology Letters (HTL) journal dedicated to the 2023 edition of the Augmented Environments for Computer-Assisted Interventions (AE-CAI), Computer Assisted and Robotic Endoscopy (CARE), and Context-aware Operating Theatres (OR 2.0) joint workshop. We are pleased to present the proceedings of this exciting scientific gathering held in conjunction with the Medical Image Computing and Computer-Assisted Interventions (MICCAI) conference on 8 October 2023 in Vancouver, British Columbia, Canada. Over the past several years, the satellite workshops and tutorials at MICCAI have experienced increased popularity. This year's workshop brings together three communities that first joined forces in February 2020 for a MICCAI 2020 Joint Workshop, in light of our common interests in image guidance, navigation, and visualization for computer-assisted interventions, and have continued this joint-venture legacy every year since. The 2023 edition of AE-CAI | CARE | OR 2.0 was a joint event between the series of MICCAI-affiliated AE-CAI workshops founded in 2006 and now in its 17th edition, the CARE workshop series, now in its 10th edition, and the OR 2.0 workshop, now in its 5th edition. This year's edition featured 20 accepted submissions and reached more than 70 registrants, not including the members of the organizing and program committees, making AE-CAI | CARE | OR 2.0 one of the best-received and best-attended workshops at MICCAI 2023, with a more than decade-long tradition. Computer-Assisted Interventions (CAI) is a field of research and practice where medical interventions are supported by computer-based tools and methodologies. CAI systems enable more precise, safer, and less invasive interventional treatments by providing enhanced planning, real-time visualization, instrument guidance and navigation, as well as situation awareness and cognition. These research domains have been motivated by the development of medical imaging and its evolution from being primarily a diagnostic modality towards its use as a therapeutic and interventional aid, driven by the need to streamline the diagnostic and therapeutic processes via minimally invasive visualization and therapy. To promote this field of research, our workshop seeks to showcase papers that disseminate novel theoretical algorithms, technical implementations, and the development and validation of integrated hardware and software systems in the context of their dedicated clinical applications. The workshop attracts researchers in computer science, biomedical engineering, computer vision, robotics, and medical imaging. The workshop was hosted as a single-track, in-person event, where all accepted papers were featured as podium presentations in three sessions: Endoscopy Applications, AR/VR/MR Applications, and Surgical Data Science. To foster networking and discussion, all authors […]

Healthcare Technology Letters, vol. 11, no. 2-3, pp. 31-32. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12082
Citations: 0
Contactless and short-range vital signs detection with Doppler radar millimetre-wave (76–81 GHz) sensing firmware
IF 2.8
Healthcare Technology Letters Pub Date: 2024-02-27 DOI: 10.1049/htl2.12075
Pi-Yun Chen, Hsu-Yung Lin, Zi-Heng Zhong, Neng-Sheng Pai, Chien-Ming Li, Chia-Hung Lin

Abstract: Vital signs such as heart rate (HR) and respiration rate (RR) are essential physiological parameters that are routinely used to monitor human health and bodily functions. They can be continuously monitored through contact or contactless measurements performed in the home or a hospital. In this study, a Doppler radar W-band sensing system was used for short-range, contactless vital sign estimation. Frequency-modulated continuous wave (FMCW) measurements were performed to reduce the influence of a patient's micromotion. Sensing software was developed to process the received chirps and to filter and extract heartbeat and breathing rhythm signals. The proposed contactless sensing system eliminates the need for the contact electrodes, electric patches, photoelectric sensors, and conductive wires used in typical physiological sensing methods. The system operates at 76–81 GHz in FMCW mode and can detect objects on the basis of changes in frequency and phase. The obtained signals are used to precisely monitor a patient's HR and RR with minimal noise interference. In a laboratory setting, the heartbeat and breathing rhythm signals of healthy young participants were measured, and their HR and RR were estimated through frequency- and time-domain analyses. The experimental results confirmed the feasibility of the proposed W-band mm-wave radar for contactless, short-range, continuous detection of human vital signs.

Healthcare Technology Letters, vol. 11, no. 6, pp. 427-436. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12075
Citations: 0
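The frequency-domain analysis the abstract describes follows a well-known pattern: band-pass the slow-time phase (chest-displacement) signal into separate breathing and heartbeat bands, then read the dominant spectral peak as the rate. The sketch below illustrates that pipeline on synthetic data; the sampling rate, band edges, and signal are common textbook choices, not values from the paper.

```python
# Illustrative vital-sign extraction from a radar phase signal: band-pass
# into breathing and heartbeat bands, then take the dominant FFT peak.
# Sampling rate, band edges, and the synthetic signal are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def dominant_rate_bpm(phase_signal, fs, band_hz):
    b, a = butter(4, band_hz, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, phase_signal)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    peak_hz = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak_hz

fs = 20.0                              # assumed slow-time frame rate (frames/s)
t = np.arange(0, 30, 1.0 / fs)         # 30 s observation window
# Synthetic chest motion: 0.3 Hz breathing plus a weak 1.2 Hz heartbeat.
phase = np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.sin(2 * np.pi * 1.2 * t)
print(dominant_rate_bpm(phase, fs, (0.1, 0.5)))   # ~18 breaths/min
print(dominant_rate_bpm(phase, fs, (0.8, 2.0)))   # ~72 beats/min
```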
Machine learning modelling for predicting the utilization of invasive and non-invasive ventilation throughout the ICU duration
IF 2.8
Healthcare Technology Letters Pub Date: 2024-02-20 DOI: 10.1049/htl2.12081
Emma Schwager, Mohsen Nabian, Xinggang Liu, Ting Feng, Robin French, Pam Amelung, Louis Atallah, Omar Badawi

Abstract: The goal of this work is to develop a machine learning model to predict the need for both invasive and non-invasive mechanical ventilation in intensive care unit (ICU) patients. Using the Philips eICU Research Institute (ERI) database, data from 2.6 million ICU patients from 2010 to 2019 were analyzed. The data were randomly split into training (63%), validation (27%), and test (10%) sets. Additionally, an external test set from a single hospital in the ERI database was employed to assess the model's generalizability. Model performance was determined by comparing the model's probability predictions with the actual incidence of ventilation use, either invasive or non-invasive. The model demonstrated an AUC of 0.921 for overall ventilation, 0.937 for invasive ventilation, and 0.827 for non-invasive ventilation. Factors such as high Glasgow Coma Scores, younger age, lower BMI, and lower PaCO2 were highlighted as indicators of a lower likelihood of needing ventilation. The model can serve as a retrospective benchmarking tool for hospitals to assess ICU performance concerning mechanical ventilation necessity. It also enables analysis of ventilation strategy trends and risk-adjusted comparisons, with potential for future testing as a clinical decision tool for optimizing ICU ventilation management.

Healthcare Technology Letters, vol. 11, no. 4, pp. 252-257. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12081
Citations: 0
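The headline AUC figures compare predicted ventilation probabilities with observed outcomes. As a reminder of what that computation looks like, here is a minimal scikit-learn sketch on toy labels and scores; the data shown are illustrative, not the study's.

```python
# ROC AUC of predicted ventilation probabilities against actual outcomes,
# mirroring the reported evaluation. Labels and scores are toy values.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1]                # 1 = ventilation was required
y_score = [0.1, 0.7, 0.8, 0.9, 0.2, 0.6]   # model-predicted probabilities
print(roc_auc_score(y_true, y_score))      # 0.888..., vs. 0.921 reported overall
```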
Performance evaluation in cataract surgery with an ensemble of 2D–3D convolutional neural networks
IF 2.1
Healthcare Technology Letters Pub Date: 2024-02-17 DOI: 10.1049/htl2.12078
Ummey Tanin, Adrienne Duimering, Christine Law, Jessica Ruzicki, Gabriela Luna, Matthew Holden

Abstract: An important part of surgical training in ophthalmology is learning to proficiently perform cataract surgery. Operating skill in cataract surgery is typically assessed by real-time or video-based expert review using a rating scale, which is time-consuming, subjective, and labour-intensive. A typical trainee graduates with over 100 complete surgeries, each of which requires review by the surgical educators. Owing to the consistently repetitive nature of this task, it lends itself well to machine learning-based evaluation. Recent studies utilize deep learning models trained on tool motion trajectories obtained using additional equipment or robotic systems. However, the pipeline of extracting frames from the videos for tool recognition, performing phase recognition, and then assessing skill is laborious. This project proposes a deep learning model for skill evaluation using raw surgery videos that is cost-effective and end-to-end trainable. An advanced ensemble of convolutional neural network models is leveraged to model technical skill in cataract surgeries and is evaluated using a large dataset comprising almost 200 surgical trials. The highest accuracy, 0.8494, is observed on the phacoemulsification step data. The model yielded an average accuracy of 0.8200 and an average AUC score of 0.8800 across all four phase datasets of cataract surgery, demonstrating its robustness across different data. The proposed ensemble of 2D and 3D convolutional neural networks achieved promising results without using tool motion trajectories to evaluate surgical expertise.

Healthcare Technology Letters, vol. 11, no. 2-3, pp. 189-195. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12078
Citations: 0
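Ensembling 2D and 3D CNNs as described usually comes down to combining the members' per-class probabilities, for instance by soft voting. The sketch below shows that combination step only; the member outputs are made-up numbers and averaging is an assumed (common) choice, not necessarily the paper's exact scheme.

```python
# Soft voting across 2D and 3D CNN ensemble members: average the per-class
# probabilities and pick the winner. Member outputs here are assumptions.
import numpy as np

def soft_vote(prob_list):
    """Average class-probability vectors from each ensemble member and
    return the winning class index plus the averaged distribution."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return int(np.argmax(avg)), avg

# Two members disagree; the averaged probabilities pick class 1.
p_2d = np.array([0.30, 0.70])   # hypothetical 2D CNN output on sampled frames
p_3d = np.array([0.55, 0.45])   # hypothetical 3D CNN output on the clip volume
label, avg = soft_vote([p_2d, p_3d])
print(label, avg)               # 1 [0.425 0.575]
```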