IoT-based remote monitoring system: A new era for patient engagement
Khalad Agali, Maslin Masrom, Fiza Abdul Rahim, Yazriwati Yahya
Healthcare Technology Letters 11(6), 437–446. Published 2024-08-29. DOI: 10.1049/htl2.12089. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665796/pdf/

Abstract: The Internet of Things (IoT) is changing patient engagement in healthcare by shifting from traditional care models to a continuous, technology-driven approach using IoT-based Remote Monitoring Systems (IoT-RMS). This research seeks to redefine patient engagement by examining how IoT technologies can impact healthcare management and patient–provider interactions at different phases. It also maps the relationship between patient engagement stages and IoT-RMS, which promotes patients' active participation through technological health-management tools. The study emphasizes that IoT-RMS improves patient engagement across three main stages: enabling, engaging, and empowering. This framing shows how technological progress encourages patient involvement and empowerment, leading to improved health outcomes and personalized care. A systematic review and narrative analysis of the Web of Science (WOS), Scopus, IEEE, and PubMed databases yielded 1832 studies on patient engagement and technology. Despite the optimistic findings, the article highlights the need for further research to evaluate the durability and long-term effectiveness of technology interventions.

Hybrid brain tumor classification of histopathology hyperspectral images by linear unmixing and an ensemble of deep neural networks
Inés A. Cruz-Guerrero, Daniel Ulises Campos-Delgado, Aldo R. Mejía-Rodríguez, Raquel Leon, Samuel Ortega, Himar Fabelo, Rafael Camacho, Maria de la Luz Plaza, Gustavo Callico
Healthcare Technology Letters 11(4), 240–251. Published 2024-06-18. DOI: 10.1049/htl2.12084. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294933/pdf/

Abstract: Hyperspectral imaging (HSI) has demonstrated its potential to provide correlated spatial and spectral information of a sample through a non-contact, non-invasive technology. In the medical field, and especially in histopathology, HSI has been applied to classify and identify diseased tissue and to characterize its morphological properties. In this work, we propose a hybrid scheme to classify non-tumor and tumor histological brain samples by hyperspectral imaging. The approach is based on identifying the characteristic components of a hyperspectral image by linear unmixing, as a feature-engineering step, followed by classification with a deep learning approach. For this last step, an ensemble of deep neural networks is evaluated with a cross-validation scheme on an augmented dataset and a transfer learning scheme. The proposed method classifies histological brain samples with an average accuracy of 88%, with reduced variability, computational cost, and inference time, which is an advantage over state-of-the-art methods. The work thus demonstrates the potential of hybrid classification methodologies to achieve robust and reliable results by combining linear unmixing for feature extraction with deep learning for classification.

Depression detection with machine learning of structural and non-structural dual languages
Filza Rehmani, Qaisar Shaheen, Muhammad Anwar, Muhammad Faheem, Shahzad Sarwar Bhatti
Healthcare Technology Letters 11(4), 218–226. Published 2024-06-10. DOI: 10.1049/htl2.12088. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12088

Abstract: Depression is a serious mental state that negatively impacts thoughts, feelings, and actions. Social media use is growing rapidly, with people expressing themselves in their regional languages. In Pakistan and India, many people use Roman Urdu on social media, which makes Roman Urdu important for predicting depression in these regions. However, previous studies show no significant contribution to predicting depression from Roman Urdu, alone or in combination with structured languages such as English. This study creates a Roman Urdu dataset to predict depression risk in dual languages: Roman Urdu (a non-structural language) plus English (a structural language). Two datasets were used: Roman Urdu data manually converted from English Facebook posts, and English comments from Kaggle; these were merged for the experiments. Machine learning models including Support Vector Machine (SVM), SVM with Radial Basis Function kernel (SVM-RBF), Random Forest (RF), and Bidirectional Encoder Representations from Transformers (BERT) were tested, with depression risk classified as not depressed, moderate, or severe. The SVM achieved the best result, with an accuracy of 0.84, compared to existing models. The study thereby advances depression prediction in Asian countries.

{"title":"Adaptive non-invasive ventilation treatment for sleep apnea","authors":"Fleur T. Tehrani, James H. Roum","doi":"10.1049/htl2.12087","DOIUrl":"10.1049/htl2.12087","url":null,"abstract":"<p>The purpose of this study was to investigate the effectiveness of two non-invasive mechanical ventilation (NIV) modalities to treat sleep apnea: (1) Average Volume Assured Pressure Support (AVAPS) NIV, and (2) Pressure Support (PS) NIV with Continuously Calculated Average Required Ventilation (CCARV). Two detailed (previously developed and tested) simulation models were used to assess the effectiveness of the NIV modalities. One simulated subjects without chronic obstructive pulmonary disease (COPD), and the other simulated patients with COPD. Sleep apnea was simulated in each model (COPD and Non-COPD), and the ability of each NIV modality to normalize breathing was measured. In both NIV modalities, a low level continuous positive airway pressure was used and a backup respiratory rate was added to the algorithm in order to minimize the respiratory work rate. Both modalities could help normalize breathing in response to an episode of sleep apnea within about 5 min (during which time blood gases were within safe limits). AVAPS NIV and PS NIV with CCARV have potential value to be used for treatment of sleep apnea. Clinical evaluations are needed to fully assess the effectiveness of these NIV modalities.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 5","pages":"283-288"},"PeriodicalIF":2.8,"publicationDate":"2024-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11442129/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142366797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guest Editorial: Big data and artificial intelligence in healthcare","authors":"Tim Hulsen, Francesca Manni","doi":"10.1049/htl2.12086","DOIUrl":"10.1049/htl2.12086","url":null,"abstract":"<p>Big data refers to large datasets that can be mined and analysed using data science, statistics or machine learning (ML), often without defining a hypothesis upfront [<span>1</span>]. Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, which can use these big data to find patterns, to make predictions and even to generate new data or information [<span>2</span>]. Big data has been used to improve healthcare [<span>3</span>] and medicine [<span>1</span>] already for many years, by enabling researchers and medical professionals to draw conclusions from large and rich datasets rather than from clinical trials based on a small number of patients. More recently, AI has been used in healthcare as well, for example by finding and classifying tumours in magnetic resonance images (MRI) [<span>4</span>] or by improving and automating the clinical workflow [<span>5</span>]. This uptake of AI in healthcare is still increasing, as new models and techniques are being introduced. For example, the creation of large language models (LLMs) such as ChatGPT enables the use of generative AI (GenAI) in healthcare [<span>6</span>]. GenAI can be used to create synthetic data (where the original data has privacy issues), generate radiology or pathology reports, or create chatbots to interact with the patient. The expectation is that the application of AI in healthcare will get even more important, as hospitals are suffering from personnel shortages and increasing numbers of elderly people needing care. The rise of AI in healthcare also comes with some challenges. Especially in healthcare, we want to know what the AI algorithm is doing; it should not be a ‘black box’. Explainable AI (XAI) can help the medical professional (or even the patient) to understand why the AI algorithm makes a certain decision, increasing trust in the result or prediction [<span>7</span>]. It is also important that AI works according to privacy laws, is free from bias, and does not produce toxic language (in case of a medical chatbot). Responsible AI (RAI) tries to prevent these issues by providing a framework of ethical principles [<span>8</span>]. By embracing the (current and future) technical possibilities AI has to offer, and at the same time making sure that AI is explainable and responsible, we can make sure that hospitals are able to withstand any future challenges.</p><p>This Special Issue contains six papers, all of which underwent peer review. One paper is about increasing the transparency of machine learning models, one is about cardiac disease risk prediction, and another one is about depression detection in Roman Urdu social media posts. 
The other papers are about autism spectrum disorder detection using facial images, hybrid brain tumour classification of histopathology hyperspectral images, and prediction of the utilization of invasive and non-invasive ventilation throughout the intensive care unit (ICU) duration.</p><p>Lisboa disc","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 4","pages":"207-209"},"PeriodicalIF":2.8,"publicationDate":"2024-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294927/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141890292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electrical impedance tomography image reconstruction for lung monitoring based on ensemble learning algorithms
Areen K. Al-Bashir, Duha H. Al-Bataiha, Mariem Hafsa, Mohammad A. Al-Abed, Olfa Kanoun
Healthcare Technology Letters 11(5), 271–282. Published 2024-04-30. DOI: 10.1049/htl2.12085. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11442128/pdf/

Abstract: Electrical impedance tomography (EIT) is a promising non-invasive imaging technique that visualizes the electrical conductivity distribution of an anatomic structure based on measured boundary voltages. However, the EIT inverse problem of image reconstruction is nonlinear and highly ill-posed. Therefore, in this work, a simulated dataset that mimics the human thorax was generated, with boundary voltages computed from given conductivity distributions. To overcome the challenges of image reconstruction, an ensemble learning method is proposed that combines several convolutional neural network models: a simple Convolutional Neural Network (CNN), AlexNet, AlexNet with a residual block, and a modified AlexNet. The ensemble weights were selected by averaging, choosing the combination with the best coefficient of determination (R² score). Reconstruction quality is evaluated quantitatively by the root mean square error (RMSE), the R² score, and the image correlation coefficient (ICC). The proposed method's best performance is an RMSE of 0.09404, an R² score of 0.926186, and an ICC of 0.95783 using an ensemble model. The method is promising, as it can construct valuable images for clinical EIT applications and measurements compared to previous studies.

{"title":"Mixed reality guided root canal therapy","authors":"Fangjie Li, Qingying Gao, Nengyu Wang, Nicholas Greene, Tianyu Song, Omid Dianat, Ehsan Azimi","doi":"10.1049/htl2.12077","DOIUrl":"https://doi.org/10.1049/htl2.12077","url":null,"abstract":"<p>Root canal therapy (RCT) is a widely performed procedure in dentistry, with over 25 million individuals undergoing it annually. This procedure is carried out to address inflammation or infection within the root canal system of affected teeth. However, accurately aligning CT scan information with the patient's tooth has posed challenges, leading to errors in tool positioning and potential negative outcomes. To overcome these challenges, a mixed reality application is developed using an optical see-through head-mounted display (OST-HMD). The application incorporates visual cues, an augmented mirror, and dynamically updated multi-view CT slices to address depth perception issues and achieve accurate tooth localization, comprehensive canal exploration, and prevention of perforation during RCT. Through the preliminary experimental assessment, significant improvements in the accuracy of the procedure are observed. Specifically, with the system the accuracy in position was improved from 1.4 to 0.4 mm (more than a 70% gain) using an Optical Tracker (NDI) and from 2.8 to 2.4 mm using an HMD, thereby achieving submillimeter accuracy with NDI. 6 participants were enrolled in the user study. The result of the study suggests that the average displacement on the crown plane of 1.27 ± 0.83 cm, an average depth error of 0.90 ± 0.72 cm and an average angular deviation of 1.83 ± 0.83°. Our error analysis further highlights the impact of HMD spatial localization and head motion on the registration and calibration process. Through seamless integration of CT image information with the patient's tooth, our mixed reality application assists dentists in achieving precise tool placement. This advancement in technology has the potential to elevate the quality of root canal procedures, ensuring better accuracy and enhancing overall treatment outcomes.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 2-3","pages":"167-178"},"PeriodicalIF":2.1,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12077","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140559561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mixed reality based teleoperation and visualization of surgical robotics","authors":"Letian Ai, Peter Kazanzides, Ehsan Azimi","doi":"10.1049/htl2.12079","DOIUrl":"10.1049/htl2.12079","url":null,"abstract":"<p>Surgical robotics has revolutionized the field of surgery, facilitating complex procedures in operating rooms. However, the current teleoperation systems often rely on bulky consoles, which limit the mobility of surgeons. This restriction reduces surgeons' awareness of the patient during procedures and narrows the range of implementation scenarios. To address these challenges, an alternative solution is proposed: a mixed reality-based teleoperation system. This system leverages hand gestures, head motion tracking, and speech commands to enable the teleoperation of surgical robots. The implementation focuses on the da Vinci research kit (dVRK) and utilizes the capabilities of Microsoft HoloLens 2. The system's effectiveness is evaluated through camera navigation tasks and peg transfer tasks. The results indicate that, in comparison to manipulator-based teleoperation, the system demonstrates comparable viability in endoscope teleoperation. However, it falls short in instrument teleoperation, highlighting the need for further improvements in hand gesture recognition and video display quality.</p>","PeriodicalId":37474,"journal":{"name":"Healthcare Technology Letters","volume":"11 2-3","pages":"179-188"},"PeriodicalIF":2.1,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12079","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140372859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calibration-free structured-light-based 3D scanning system in laparoscope for robotic surgery
Ryo Furukawa, Elvis Chen, Ryusuke Sagawa, Shiro Oka, Hiroshi Kawasaki
Healthcare Technology Letters 11(2-3), 196–205. Published 2024-03-08. DOI: 10.1049/htl2.12083. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12083

Abstract: Accurate 3D shape measurement is crucial for surgical support and alignment in robotic surgery systems. Stereo cameras in laparoscopes offer a potential solution; however, their accuracy in stereo image matching diminishes when the target image has few textures. Although stereo matching with deep learning has gained significant attention, supervised learning requires a large dataset of images with depth annotations, which are scarce for laparoscopes. Thus, there is a strong demand to explore alternative methods for depth reconstruction or annotation for laparoscopes. Active stereo techniques are a promising approach for achieving 3D reconstruction without textures. In this study, a 3D shape reconstruction method is proposed using an ultra-small patterned projector attached to a laparoscopic arm to address these issues. The pattern projector emits structured light with a grid-like pattern that features node-wise modulation for positional encoding. To scan the target object, multiple images are taken while the projector is in motion, and the relative poses of the projector and a camera are auto-calibrated using a differential rendering technique. In the experiment, the proposed method is evaluated by performing 3D reconstruction using images obtained from a surgical robot and comparing the results with a ground-truth shape obtained from X-ray CT.

Papers from the 17th Joint Workshop on Augmented Environments for Computer Assisted Interventions at MICCAI 2023: Guest Editors' Foreword
Cristian A. Linte, Ziv Yaniv, Elvis Chen, Qi Dou, Simon Drouin, Megha Kalia, Marta Kersten-Oertel, Jonathan McLeod, Duygu Sarikaya
Healthcare Technology Letters 11(2-3), 31–32. Published 2024-02-28. DOI: 10.1049/htl2.12082. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12082

Welcome to this Special Issue of Wiley's Healthcare Technology Letters (HTL) dedicated to the 2023 edition of the Augmented Environments for Computer-Assisted Interventions (AE-CAI), Computer Assisted and Robotic Endoscopy (CARE), and Context-aware Operating Theatres (OR 2.0) joint workshop. We are pleased to present the proceedings of this exciting scientific gathering, held in conjunction with the Medical Image Computing and Computer-Assisted Interventions (MICCAI) conference on 8 October 2023 in Vancouver, British Columbia, Canada.

Over the past several years, the satellite workshops and tutorials at MICCAI have experienced increased popularity. This year's workshop brings together three communities that first joined forces in February 2020 for a MICCAI 2020 Joint Workshop, in light of our common interests in image guidance, navigation, and visualization for computer-assisted interventions, and that have continued this joint venture every year since.

The 2023 edition of AE-CAI | CARE | OR 2.0 was a joint event between the series of MICCAI-affiliated AE-CAI workshops, founded in 2006 and now in its 17th edition; the CARE workshop series, now in its 10th edition; and the OR 2.0 workshop, now in its 5th edition. This year's edition featured 20 accepted submissions and reached more than 70 registrants, not counting the members of the organizing and program committees, making AE-CAI | CARE | OR 2.0 one of the best-received and best-attended workshops at MICCAI 2023, with a tradition standing more than a decade.

Computer-Assisted Interventions (CAI) is a field of research and practice in which medical interventions are supported by computer-based tools and methodologies. CAI systems enable more precise, safer, and less invasive interventional treatments by providing enhanced planning, real-time visualization, instrument guidance and navigation, as well as situation awareness and cognition. These research domains have been motivated by the development of medical imaging and its evolution from a primarily diagnostic modality towards a therapeutic and interventional aid, driven by the need to streamline diagnostic and therapeutic processes via minimally invasive visualization and therapy. To promote this field of research, our workshop seeks to showcase papers that disseminate novel theoretical algorithms, technical implementations, and the development and validation of integrated hardware and software systems in the context of their dedicated clinical applications. The workshop attracts researchers in computer science, biomedical engineering, computer vision, robotics, and medical imaging.

The workshop was hosted as a single-track, in-person event, where all accepted papers were featured as podium presentations in three sessions: Endoscopy Applications, AR/VR/MR Applications, and Surgical Data Science. To foster networking and discussion, all authors…