Healthcare Technology Letters: Latest Articles

Clinical trainee performance on task-based AR/VR-guided surgical simulation is correlated with their 3D image spatial reasoning scores
IF 2.1
Healthcare Technology Letters. Pub Date: 2024-01-08. DOI: 10.1049/htl2.12066
Roy Eagleson, Denis Kikinov, Liam Bilbie, Sandrine de Ribaupierre
Abstract: This paper describes a methodology for assessing computer-assisted intervention skills on a training simulator for an AR/VR-guided neurosurgical procedure, external ventricular drain (EVD) placement, using CT axial slice views. The task requires trainees to scroll through a stack of axial slices and form a mental representation of the anatomical structures in order to subsequently target the ventricles and insert an EVD. The skill being taught is the process of observing 2D CT image slices to build a mental representation of the 3D anatomical structures, together with the cognitive control of the subsequent targeting, by planned motor actions, of the EVD tip to the ventricular system to drain cerebrospinal fluid (CSF). Convergence towards the validity of this assessment methodology is established by examining two objective measures of spatial reasoning, along with one subjective expert-ranking methodology, and comparing these to AR/VR guidance. These measures have two components, the speed and accuracy of the targeting, which are used to derive the performance metric. The resulting correlations are presented for a population of PGY1 residents attending the Canadian Neurosurgical "Rookie Bootcamp" in 2019.
Volume 11, issue 2-3, pages 117-125. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12066
Citations: 0
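A minimal sketch of the kind of analysis the abstract describes: combining targeting speed and accuracy into a single performance score and correlating it with spatial reasoning test scores. The metric formula, cohort data, and variable names below are illustrative assumptions, not the paper's actual definitions.

```python
# Hypothetical example: derive a performance metric from speed and accuracy,
# then correlate it with spatial reasoning scores (all values are made up).
import numpy as np
from scipy.stats import pearsonr

def performance_metric(completion_time_s, target_error_mm):
    # Assumed combination: reward fast completion and small targeting error.
    return 1.0 / (completion_time_s * (1.0 + target_error_mm))

times = np.array([42.0, 55.0, 38.0, 61.0, 47.0])            # seconds to place the EVD
errors = np.array([3.1, 5.4, 2.2, 6.8, 4.0])                # tip-to-target distance (mm)
spatial_scores = np.array([78.0, 62.0, 85.0, 55.0, 70.0])   # spatial reasoning test scores

perf = performance_metric(times, errors)
r, p = pearsonr(perf, spatial_scores)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```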
Autism spectrum disorder detection using facial images: A performance comparison of pretrained convolutional neural networks
IF 2.8
Healthcare Technology Letters. Pub Date: 2024-01-08. DOI: 10.1049/htl2.12073
Israr Ahmad, Javed Rashid, Muhammad Faheem, Arslan Akram, Nafees Ahmad Khan, Riaz ul Amin
Abstract: Autism spectrum disorder (ASD) is a complex psychological syndrome characterized by persistent difficulties in social interaction, restricted behaviours, speech, and nonverbal communication. The impact of the disorder and the severity of symptoms vary from person to person. In most cases, symptoms of ASD appear between the ages of 2 and 5 and continue throughout adolescence and into adulthood. While the disorder cannot be cured completely, studies have shown that early detection can help maintain the behavioural and psychological development of children. Experts are currently studying various machine learning methods, particularly convolutional neural networks, to expedite the screening process, and convolutional neural networks are considered promising frameworks for the diagnosis of ASD. This study employs different pretrained convolutional neural networks, namely ResNet34, ResNet50, AlexNet, MobileNetV2, VGG16, and VGG19, to diagnose ASD and compares their performance. Transfer learning was applied to every model in the study to achieve better results than the base models. The proposed ResNet50 model achieved the highest accuracy, 92%, compared to the other transfer learning models. The proposed method also outperformed state-of-the-art models in terms of accuracy and computational cost.
Volume 11, issue 4, pages 227-239. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12073
Citations: 0
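The transfer-learning setup the abstract compares can be sketched as follows. This is a generic PyTorch/torchvision example with an ImageNet-pretrained ResNet50 and a new two-class head, not the authors' exact pipeline, and all hyperparameters are assumptions.

```python
# Assumed transfer-learning sketch: fine-tune a pretrained ResNet50 head
# for binary ASD / non-ASD classification of face images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1")  # torchvision >= 0.13; older versions use pretrained=True
for p in model.parameters():
    p.requires_grad = False                       # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # new classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of 224x224 face crops
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```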
Breamy: An augmented reality mHealth prototype for surgical decision-making in breast cancer
IF 2.1
Healthcare Technology Letters. Pub Date: 2023-12-27. DOI: 10.1049/htl2.12071
Niki Najafi, Miranda Addie, Sarkis Meterissian, Marta Kersten-Oertel
Abstract: Breast cancer is one of the most prevalent forms of cancer, affecting approximately one in eight women during their lifetime. Deciding on breast cancer treatment, which includes the choice between surgical options, frequently demands prompt decision-making within an 8-week timeframe, yet many women lack the necessary knowledge and preparation for making informed decisions. Inadequate decision-making processes can result in anxiety and unsatisfactory outcomes, leading to decisional regret and revision surgeries. Shared decision-making and personalized decision aids have shown positive effects on patient satisfaction and treatment outcomes. Here, Breamy is introduced, a prototype mobile health application that uses augmented reality technology to help breast cancer patients make more informed decisions. Breamy provides 3D visualizations of different surgical procedures, aiming to improve confidence in surgical decision-making, reduce decisional regret, and enhance patient well-being after surgery. To assess the perceived usefulness of Breamy, data were collected from 166 participants through an online survey. The results suggest that Breamy has the potential to reduce patients' anxiety levels and assist them in decision-making.
Volume 11, issue 2-3, pages 137-145. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12071
Citations: 0
Parameter estimation of a model describing the human fingers
IF 2.1
Healthcare Technology Letters. Pub Date: 2023-12-26. DOI: 10.1049/htl2.12070
Panagiotis Tsakonas, Evans Neil, Joseph Hardwicke, Michael J. Chappell
Abstract: The goal of this paper is twofold: firstly, to provide a novel mathematical model that describes the kinematic chain of motion of the human fingers, based on Lagrangian mechanics with four degrees of freedom, and secondly, to estimate the model parameters using data from able-bodied individuals. A variety of mathematical models have been developed in the literature to describe the motion of the human finger, but these models offer little to no information on the underlying mechanisms or the corresponding equations of motion, nor on how they scale with different anthropometries. The data used here are generated using an experimental procedure that considers the free-response motion of each finger segment, with data captured via a motion capture system. The collected angular data are then filtered and fitted to a linear second-order differential approximation of the equations of motion. The results of the study show that the free-response motion of the segments is underdamped across flexion/extension and ad/abduction.
Volume 11, issue 1, pages 1-15. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12070
Citations: 0
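The abstract's claim that the free-response motion is underdamped can be checked by fitting a linear second-order response to the measured joint angles. The sketch below fits a damped cosine with SciPy on synthetic data; the parameter values and data are illustrative assumptions, not the paper's measurements.

```python
# Assumed fitting sketch for an underdamped second-order free response:
# theta(t) = A * exp(-zeta*wn*t) * cos(wd*t + phi) + offset, with wd = wn*sqrt(1 - zeta^2)
import numpy as np
from scipy.optimize import curve_fit

def free_response(t, A, zeta, wn, phi, offset):
    wd = wn * np.sqrt(1.0 - zeta**2)
    return A * np.exp(-zeta * wn * t) * np.cos(wd * t + phi) + offset

t = np.linspace(0.0, 1.5, 300)
true = free_response(t, 25.0, 0.15, 18.0, 0.0, 5.0)        # synthetic finger-segment angles (deg)
measured = true + np.random.normal(0.0, 0.5, t.size)       # add measurement noise

p0 = [20.0, 0.1, 15.0, 0.0, 0.0]                           # initial guesses
bounds = ([0.0, 0.0, 1.0, -np.pi, -90.0], [90.0, 0.99, 60.0, np.pi, 90.0])
params, _ = curve_fit(free_response, t, measured, p0=p0, bounds=bounds)
print(f"Estimated damping ratio zeta = {params[1]:.3f} (underdamped if 0 < zeta < 1)")
```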
Generalizable stereo depth estimation with masked image modelling
IF 2.1
Healthcare Technology Letters. Pub Date: 2023-12-23. DOI: 10.1049/htl2.12067
Samyakh Tukra, Haozheng Xu, Chi Xu, Stamatia Giannarou
Abstract: Generalizable and accurate stereo depth estimation is vital for 3D reconstruction, especially in surgery. Supervised learning methods obtain the best performance; however, limited ground-truth data for surgical scenes limits their generalizability. Self-supervised methods do not need ground truth, but suffer from scale ambiguity and incorrect disparity prediction due to the inconsistency of the photometric loss. This work proposes a two-phase training procedure that is generalizable and retains the high performance of supervised methods. It entails (1) performing self-supervised representation learning of left and right views via masked image modelling (MIM) to learn generalizable semantic stereo features, and (2) utilizing the MIM pre-trained model to learn a robust depth representation via supervised learning for disparity estimation on synthetic data only. To improve the stereo representations learnt via MIM, perceptual loss terms are introduced, which explicitly encourage the learning of higher scene-level features. Qualitative and quantitative evaluation on surgical and natural scenes shows that the approach achieves sub-millimetre accuracy and the lowest errors respectively, setting a new state of the art despite not training on surgical or natural scene data for disparity estimation.
Volume 11, issue 2-3, pages 108-116. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12067
Citations: 0
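The masked image modelling phase hides most image patches so the network must infer them from context. The patch-masking step could look like the sketch below; the patch size, mask ratio, and function name are assumptions for illustration, not the authors' implementation.

```python
# Assumed MIM masking sketch: zero out a random subset of non-overlapping patches.
import torch

def random_patch_mask(images, patch=16, mask_ratio=0.75):
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    n_masked = int(mask_ratio * gh * gw)
    masked = images.clone()
    for i in range(b):
        idx = torch.randperm(gh * gw)[:n_masked]
        for j in idx.tolist():
            r, col = divmod(j, gw)
            masked[i, :, r*patch:(r+1)*patch, col*patch:(col+1)*patch] = 0.0
    return masked

left_views = torch.randn(2, 3, 224, 224)   # dummy left-camera stereo frames
masked = random_patch_mask(left_views)      # encoder sees only the visible patches
print(masked.shape)
```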
ASSIST-U: A system for segmentation and image style transfer for ureteroscopy
IF 2.1
Healthcare Technology Letters. Pub Date: 2023-12-18. DOI: 10.1049/htl2.12065
Daiwei Lu, Yifan Wu, Ayberk Acar, Xing Yao, Jie Ying Wu, Nicholas Kavoussi, Ipek Oguz
Abstract: Kidney stones require surgical removal when they grow too large to be broken up externally or to pass on their own, and upper tract urothelial carcinoma is also sometimes treated endoscopically in a similar procedure. These surgeries are difficult, particularly for trainees, who often miss tumours, stones, or stone fragments, requiring re-operation. Furthermore, despite the high prevalence of ureteroscopy, there are no patient-specific simulators to facilitate training and no standardized visualization tools. To address these unmet needs, the ASSIST-U system is proposed to create realistic ureteroscopy images and videos solely from preoperative computed tomography (CT) images. A 3D UNet model is trained to automatically segment CT images and construct 3D surfaces; these surfaces are then skeletonized for rendering. Finally, a style transfer model is trained using contrastive unpaired translation (CUT) to synthesize realistic ureteroscopy images. Cross-validation of the CT segmentation model achieved a Dice score of 0.853 ± 0.084. CUT style transfer produced visually plausible images; the kernel inception distance to real ureteroscopy images was reduced from 0.198 (rendered) to 0.089 (synthesized). The entire pipeline from CT to synthesized ureteroscopy is also qualitatively demonstrated. The proposed ASSIST-U system shows promise for aiding surgeons in the visualization of kidney ureteroscopy.
Volume 11, issue 2-3, pages 40-47. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12065
Citations: 0
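The Dice score reported for the CT segmentation model is a standard overlap metric; a minimal NumPy computation is sketched below on dummy binary masks (the helper and data are illustrative, not the authors' evaluation code).

```python
# Assumed Dice-score helper: Dice = 2|A ∩ B| / (|A| + |B|) for binary masks.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((64, 64, 64), dtype=np.uint8); pred[20:40, 20:40, 20:40] = 1
gt = np.zeros((64, 64, 64), dtype=np.uint8); gt[22:42, 20:40, 20:40] = 1
print(f"Dice = {dice_score(pred, gt):.3f}")
```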
Scale-preserving shape reconstruction from monocular endoscope image sequences by supervised depth learning
IF 2.1
Healthcare Technology Letters. Pub Date: 2023-12-15. DOI: 10.1049/htl2.12064
Takeshi Masuda, Ryusuke Sagawa, Ryo Furukawa, Hiroshi Kawasaki
Abstract: Reconstructing 3D shapes from images is becoming popular, but such methods usually estimate relative depth maps with ambiguous scale. A method is proposed for reconstructing a scale-preserving 3D shape from monocular endoscope image sequences by training an absolute depth prediction network. First, a dataset of synchronized sequences of RGB images and depth maps is created using an endoscope simulator. Then, a supervised depth prediction network is trained to estimate a depth map from an RGB image, minimizing the loss with respect to the ground-truth depth map. The predicted depth map sequence is aligned to reconstruct a 3D shape. Finally, the proposed method is applied to a real endoscope image sequence.
Volume 11, issue 2-3, pages 76-84. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12064
Citations: 0
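Because the network is trained against simulator depth maps in absolute units, the supervised loss can be a plain per-pixel error with no scale normalization. The sketch below shows one such loss; the function, units, and tensor shapes are assumptions, not the paper's training code.

```python
# Assumed supervised absolute-depth loss: mean absolute error against
# ground-truth depth (e.g. in millimetres), which preserves scale.
import torch
import torch.nn.functional as F

def absolute_depth_loss(pred_depth, gt_depth):
    valid = gt_depth > 0                       # ignore pixels with no ground truth
    return F.l1_loss(pred_depth[valid], gt_depth[valid])

pred = torch.rand(1, 1, 128, 160) * 100.0      # dummy predicted depth map (mm)
gt = torch.rand(1, 1, 128, 160) * 100.0        # dummy simulator ground truth (mm)
print(absolute_depth_loss(pred, gt).item())
```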
Intraoperative gaze guidance with mixed reality
IF 2.1
Healthcare Technology Letters. Pub Date: 2023-12-13. DOI: 10.1049/htl2.12061
Ayberk Acar, Jumanh Atoum, Amy Reed, Yizhou Li, Nicholas Kavoussi, Jie Ying Wu
Abstract: Efficient communication and collaboration are essential in the operating room for successful and safe surgery. While many technologies are improving various aspects of surgery, communication between attending surgeons, residents, and surgical teams is still limited to verbal interactions that are prone to misunderstanding. Novel modes of communication can increase speed and accuracy, and transform operating rooms. A mixed reality (MR) based gaze-sharing application on the Microsoft HoloLens 2 headset is presented that can help expert surgeons indicate specific regions, communicate with reduced verbal effort, and guide novices throughout an operation. The utility of the application is tested with a user study of endoscopic kidney stone localization completed by urology experts and novice surgeons. Improvement is observed in the NASA task load index surveys (up to 25.23%), in the success rate of the task (a 6.98% increase in the percentage of localized stones), and in gaze analyses (up to 31.99%). The proposed application shows promise both in operating room applications and in surgical training tasks.
Volume 11, issue 2-3, pages 85-92. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12061
Citations: 0
Towards navigation in endoscopic kidney surgery based on preoperative imaging
IF 2.1
Healthcare Technology Letters. Pub Date: 2023-12-13. DOI: 10.1049/htl2.12059
Ayberk Acar, Daiwei Lu, Yifan Wu, Ipek Oguz, Nicholas Kavoussi, Jie Ying Wu
Abstract: Endoscopic renal surgeries have high re-operation rates, particularly for lower-volume surgeons. Due to the limited field and depth of view of current endoscopes, mentally mapping preoperative computed tomography (CT) images of patient anatomy to the surgical field is challenging. The inability to completely navigate the intrarenal collecting system leads to missed kidney stones and tumors, subsequently raising recurrence rates. A guidance system is proposed that estimates the endoscope position within the CT to reduce re-operation rates. A Structure from Motion algorithm is used to reconstruct the kidney collecting system from the endoscope videos. In addition, the kidney collecting system is segmented from CT scans using a 3D U-Net to create a 3D model. The two collecting system representations can then be registered to provide information on the relative endoscope position. Correct reconstruction and localization of intrarenal anatomy and endoscope position are demonstrated. Furthermore, a 3D map supported by the RGB endoscope images is created to reduce the burden of mental mapping during surgery. The proposed reconstruction pipeline has been validated for guidance; it can reduce the mental burden on surgeons and is a step towards the long-term goal of reducing re-operation rates in kidney stone surgery.
Volume 11, issue 2-3, pages 67-75. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12059
Citations: 0
Movement examination of the lumbar spine using a developed wearable motion sensor
IF 2.1
Healthcare Technology Letters. Pub Date: 2023-12-09. DOI: 10.1049/htl2.12063
Reza Abbasi-Kesbi, Mohammad Fathi, Seyed Zaniyar Sajadi
Abstract: A system for monitoring spinal movements based on wearable motion sensors is proposed. A hardware system is first developed that measures the linear acceleration, angular velocity, and magnetic field of the spine. The data from these sensors are then combined in a proposed complementary filter, and the angular variations are estimated. Compared against an accurate reference, the angular variations estimated by this system show a root-mean-squared error of less than 1.61 degrees for the three angles ϕ_r, θ_r, and ψ_r, demonstrating that the system can accurately estimate the angular variation of the spine. The system is then mounted on the lumbar spine of several volunteers; the angles obtained from the patients' spines are compared with those of healthy volunteers, and the patients' spinal performance is observed to improve over time. The results show that this system can be very effective for patients who suffer from back problems and can support their recovery process.
Volume 10, issue 6, pages 122-132. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12063
Citations: 0
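The sensor fusion the abstract describes can be approximated by a complementary filter that blends integrated gyroscope rates with an accelerometer-derived tilt estimate. The sketch below is a generic one-angle version on synthetic data; the blending constant, sampling rate, and signals are assumptions rather than the authors' filter.

```python
# Assumed complementary-filter sketch for one spine angle (degrees).
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
    angle = accel_angle[0]
    estimates = []
    for w, a in zip(gyro_rate, accel_angle):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a   # blend gyro and accelerometer
        estimates.append(angle)
    return np.array(estimates)

t = np.arange(0.0, 5.0, 0.01)
true_angle = 20.0 * np.sin(0.5 * t)                                        # slow flexion/extension
gyro = np.gradient(true_angle, 0.01) + np.random.normal(0, 0.5, t.size)    # noisy angular rate (deg/s)
accel = true_angle + np.random.normal(0, 2.0, t.size)                      # noisy tilt from accelerometer
est = complementary_filter(gyro, accel)
print(f"RMSE = {np.sqrt(np.mean((est - true_angle)**2)):.2f} deg")
```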