Tim Hulsen, Francesca Manni
Healthcare Technology Letters, vol. 11, no. 4, pp. 207-209 (published 26 May 2024). DOI: 10.1049/htl2.12086

Tim Hulsen is a Senior Data & AI Scientist with broad experience in both academia and industry, working on a wide range of projects, mostly in oncology. After receiving his MSc in biology in 2001, he obtained a PhD in bioinformatics in 2007 from a collaboration between Radboud University Nijmegen and the pharma company N.V. Organon. After a two-year postdoc at Radboud University Nijmegen, he moved to Philips Research in 2009, where he worked on biomarker discovery for a year before moving to the data management and data science field, working on big data projects in oncology such as Prostate Cancer Molecular Medicine (PCMM), Translational Research IT (TraIT), Movember Global Action Plan 3 (GAP3), the European Randomized Study of Screening for Prostate Cancer (ERSPC), and Liquid Biopsies and Imaging (LIMA). His most recent projects are ReIMAGINE, on the use of imaging to prevent unnecessary biopsies in prostate cancer, and SMART-BEAR, on the development of an innovative platform to support the healthy and independent living of elderly people. He is the author of several publications on big data, data management, data science, and artificial intelligence in the context of healthcare and medicine.

Francesca Manni is currently a Clinical Scientist at Philips and a guest researcher at Eindhoven University of Technology. She has a background in biomedical engineering and computer vision, with a focus on AI for medical imaging, and completed a PhD at Eindhoven University of Technology in close collaboration with leading EU hospitals. She focused on the development and application of novel imaging/sensing technologies, such as hyperspectral imaging, to build specific solutions for tumour detection and minimally invasive surgery. Her dissertation work resulted in novel algorithms for patient tracking during spinal surgery and for cancer detection. She was an AI & Data Scientist at Philips Research from 2021 to 2023, working in the computer vision field to enable AI solutions in healthcare and the deployment of AI algorithms in many European hospitals. During this period, she led the Healthcare group at the Big Data Value Association (BDVA). Her research is reported in numerous international peer-reviewed scientific journals and top international conference proceedings in the fields of computer vision, image-guided interventions, and privacy-preserving AI techniques.

Tim Hulsen: Writing—original draft; writing—review and editing. Francesca Manni: Writing—review and editing.

The authors declare no conflict of interest.
Guest Editorial: Big data and artificial intelligence in healthcare
Big data refers to large datasets that can be mined and analysed using data science, statistics or machine learning (ML), often without defining a hypothesis upfront [1]. Artificial intelligence (AI) refers to the simulation of human intelligence processes by machines, which can use these big data to find patterns, make predictions and even generate new data or information [2]. Big data has been used for many years to improve healthcare [3] and medicine [1], by enabling researchers and medical professionals to draw conclusions from large and rich datasets rather than from clinical trials based on a small number of patients. More recently, AI has been used in healthcare as well, for example to find and classify tumours in magnetic resonance imaging (MRI) [4] or to improve and automate the clinical workflow [5]. The uptake of AI in healthcare is still increasing as new models and techniques are introduced. For example, large language models (LLMs) such as ChatGPT enable the use of generative AI (GenAI) in healthcare [6]. GenAI can be used to create synthetic data (where the original data has privacy issues), to generate radiology or pathology reports, or to build chatbots that interact with patients. The expectation is that the application of AI in healthcare will become even more important, as hospitals suffer from personnel shortages and growing numbers of elderly people needing care. The rise of AI in healthcare also comes with challenges. Especially in healthcare, we want to know what an AI algorithm is doing; it should not be a ‘black box’. Explainable AI (XAI) can help the medical professional (or even the patient) understand why an AI algorithm makes a certain decision, increasing trust in the result or prediction [7]. It is also important that AI complies with privacy laws, is free from bias, and does not produce toxic language (in the case of a medical chatbot). 
Responsible AI (RAI) aims to prevent these issues by providing a framework of ethical principles [8]. By embracing the (current and future) technical possibilities that AI has to offer, while making sure that AI is explainable and responsible, we can help hospitals withstand future challenges.
This Special Issue contains six papers, all of which underwent peer review. One paper is about increasing the transparency of machine learning models, one about cardiac disease risk prediction, and a third about depression detection in Roman Urdu social media posts. The remaining papers cover autism spectrum disorder detection using facial images, hybrid brain tumour classification of histopathology hyperspectral images, and prediction of the utilization of invasive and non-invasive ventilation throughout the intensive care unit (ICU) stay.
In ‘Open your black box classifier’ [9], Lisboa argues that the transparency of machine learning (ML) models is central to good practice when they are applied in high-risk settings. Recent developments make this feasible for tabular data (Excel, CSV, etc.), which is prevalent in risk modelling and computer-based decision support across multiple domains, including healthcare. The author outlines important motivations for interpretability and summarizes practical approaches, pointing out the main methods available. The main finding is that any black-box classifier making probabilistic predictions of class membership from tabular data can be represented by a globally interpretable model without loss of performance.
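The core idea, that a black-box probabilistic classifier on tabular data can be mirrored by a globally interpretable model, can be illustrated with a toy sketch (the ‘black box’ and data below are hypothetical, not from the paper): we query an opaque classifier for probabilities and fit a transparent logistic surrogate to those outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical opaque classifier over two tabular features; in practice this
# could be any model exposing probabilistic predictions of class membership.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.5)))

X = rng.normal(size=(2000, 2))
p = black_box(X)

# Fit a globally interpretable surrogate (logistic regression) to the black
# box's probability outputs by minimising cross-entropy with soft targets.
Xb = np.hstack([X, np.ones((len(X), 1))])   # add intercept column
w = np.zeros(3)
for _ in range(3000):
    q = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (q - p) / len(X)

# The surrogate's weights now describe the black box's behaviour globally.
print(np.round(w, 2))
```

Because this toy black box happens to be logistic itself, the surrogate recovers it essentially exactly; for genuinely nonlinear boxes, richer but still interpretable model classes are needed, which is the subject of the paper.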
In ‘Cardiac Disease Risk Prediction using Machine Learning Algorithms’ [10], Stonier et al. build an ML system for predicting whether a patient is likely to suffer a heart attack, by analyzing various data sources including electronic health records (EHR) and clinical diagnosis reports from hospital clinics. Various algorithms, such as random forest (RF), regression models, K-nearest neighbour (KNN), and Naïve Bayes, are compared. Their RF algorithm achieves a high accuracy (88.52%) in forecasting heart attack risk, which could herald a revolution in the diagnosis and treatment of cardiovascular illnesses.
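This kind of algorithm comparison can be sketched on synthetic stand-in data (not the study's EHR): two of the simpler methods mentioned, K-nearest neighbour and Gaussian Naïve Bayes, implemented from scratch and scored on a held-out split.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for tabular EHR features (e.g. age, blood pressure,
# cholesterol); label 1 marks elevated heart-attack risk. Illustrative only.
n = 600
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 3)) + y[:, None] * 1.5
Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]

def knn_predict(Xtr, ytr, Xte, k=5):
    # Majority vote among the k nearest training points.
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (ytr[nearest].mean(axis=1) > 0.5).astype(int)

def naive_bayes_predict(Xtr, ytr, Xte):
    # Gaussian Naive Bayes: per-class, per-feature normal likelihoods.
    logp = np.zeros((len(Xte), 2))
    for c in (0, 1):
        mu = Xtr[ytr == c].mean(axis=0)
        var = Xtr[ytr == c].var(axis=0) + 1e-9
        logp[:, c] = (-0.5 * np.log(2 * np.pi * var)
                      - 0.5 * (Xte - mu) ** 2 / var).sum(axis=1)
        logp[:, c] += np.log((ytr == c).mean())  # class prior
    return logp.argmax(axis=1)

acc_knn = (knn_predict(Xtr, ytr, Xte) == yte).mean()
acc_nb = (naive_bayes_predict(Xtr, ytr, Xte) == yte).mean()
print(f"KNN: {acc_knn:.2f}  Naive Bayes: {acc_nb:.2f}")
```

Held-out accuracy, as used here, is the same yardstick the paper applies when reporting 88.52% for its RF model.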
Rehmani et al. argue in ‘Depression Detection with Machine Learning of Structural and Non-Structural Dual Languages’ [11] that depression is a painful and serious mental state, which has an adverse impact on human thoughts, feelings, and actions. Their study aims to create a dataset of social media posts in the Roman Urdu language, to predict the risk of depression in Roman Urdu as well as English. For Roman Urdu, English-language data were obtained from Facebook and manually converted into Roman Urdu; English comments were obtained from Kaggle. Machine learning models, including Support Vector Machine (SVM), Support Vector Machine with Radial Basis Function kernel (SVM RBF), Random Forest (RF), and BERT, were investigated. The risk of depression was classified into three categories: not depressed, moderate depression, and severe depression. Of these four models, SVM achieved the best result, with an accuracy of 84%. Their work advances the area of depression prediction, particularly for Asian countries.
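The best-performing pipeline, an SVM over text features, can be sketched on a tiny invented corpus (the posts below are hypothetical; the study's data are Roman Urdu and English posts with three severity classes, whereas this sketch uses two). A linear SVM is trained here with the Pegasos subgradient method on bag-of-words features.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy corpus standing in for labelled social-media posts (invented examples).
posts = [("i feel hopeless and empty", 1), ("so tired of everything", 1),
         ("nothing matters anymore", 1), ("i cannot sleep and feel alone", 1),
         ("what a great day with friends", 0), ("happy about the results", 0),
         ("enjoying the sunshine today", 0), ("lovely dinner with family", 0)]

vocab = sorted({w for text, _ in posts for w in text.split()})

def vectorise(text):
    # Simple bag-of-words feature vector over the shared vocabulary.
    return np.array([text.split().count(w) for w in vocab], float)

X = np.array([vectorise(t) for t, _ in posts])
y = np.array([1 if label else -1 for _, label in posts])

# Linear SVM trained with the Pegasos stochastic subgradient method.
w, lam = np.zeros(len(vocab)), 0.01
for t in range(1, 2001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * t)
    if y[i] * (w @ X[i]) < 1:                    # margin violation
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:
        w = (1 - eta * lam) * w

acc = (np.sign(X @ w) == y).mean()
print(acc)  # training accuracy on the toy set
```

The study's SVM operates on a far larger labelled dataset and three output classes, but the margin-based training objective is the same.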
‘Autism Spectrum Disorder Detection using Facial Images: A Performance Comparison of Pretrained Convolutional Neural Networks’ [12] by Ahmad et al. notes that studies have shown that early detection of ASD can assist in supporting the behavioural and psychological development of children. Experts are currently studying various ML methods, particularly convolutional neural networks (CNNs), to expedite the screening process, and CNNs are considered promising frameworks for the diagnosis of ASD. Different pre-trained CNNs, such as ResNet34, ResNet50, AlexNet, MobileNetV2, VGG16, and VGG19, were employed to diagnose ASD, and their performance was compared. The authors applied transfer learning to every model in the study to achieve better results than the original models. The proposed ResNet50 model achieved the highest accuracy, 92%, and the proposed method also outperformed state-of-the-art models in terms of accuracy and computational cost.
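The transfer-learning recipe, freeze a pretrained backbone and retrain only a new classification head, can be shown schematically. Everything below is synthetic: a fixed random projection stands in for pretrained CNN features (e.g. a frozen ResNet50 body), and random vectors stand in for facial images.

```python
import numpy as np

rng = np.random.default_rng(3)

# Frozen "backbone": a fixed random projection plus ReLU, a stand-in for
# pretrained CNN features whose weights are kept frozen during fine-tuning.
W_frozen = np.random.default_rng(42).normal(size=(64, 16))

def backbone(images):                      # images: (n, 64) flattened pixels
    return np.maximum(images @ W_frozen, 0)

# Synthetic two-class "image" data (illustrative only).
n = 400
y = rng.integers(0, 2, n)
images = rng.normal(size=(n, 64)) + y[:, None] * 0.8

# Transfer learning: extract frozen features, train only a new linear head.
F = backbone(images)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9)   # standardise features
Fb = np.hstack([F, np.ones((n, 1))])
w = np.zeros(Fb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Fb @ w))
    w -= 0.2 * Fb.T @ (p - y) / n          # logistic-regression head update

acc = (((Fb @ w) > 0).astype(int) == y).mean()
print(f"training accuracy of the new head: {acc:.2f}")
```

Only the head's 17 parameters are trained; this is why transfer learning cuts both training time and data requirements compared with training the backbone from scratch.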
Cruz-Guerrero et al. discuss in ‘Hybrid Brain Tumor Classification of Histopathology Hyperspectral Images by Linear Unmixing and an Ensemble of Deep Neural Networks’ [13] that hyperspectral imaging (HSI) has demonstrated its potential to provide correlated spatial and spectral information about a sample using a non-contact, non-invasive technology. In the medical field, especially in histopathology, HSI has been applied to the classification and identification of diseased tissue and to the characterization of its morphological properties. The authors propose a hybrid scheme to classify non-tumour and tumour histological brain samples by HSI. The proposed approach is based on the identification of characteristic components in a hyperspectral image by linear unmixing, as a feature engineering step, and the subsequent classification by a deep learning approach. For this last step, an ensemble of deep neural networks is evaluated by a cross-validation scheme on an augmented dataset and a transfer learning scheme. The proposed method classifies histological brain samples with an average accuracy of 88%, with reduced variability, computational cost, and inference times, which presents an advantage over state-of-the-art methods. Their work thus demonstrates the potential of hybrid classification methodologies to achieve robust and reliable results by combining linear unmixing for feature extraction and deep learning for classification.
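The linear-unmixing step rests on modelling each pixel's spectrum as a mixture of a few characteristic "endmember" spectra and recovering the mixing weights (abundances). A minimal sketch, with simulated spectra rather than the paper's histopathology data, and using plain unconstrained least squares where real pipelines add non-negativity and sum-to-one constraints:

```python
# Toy sketch of linear unmixing (simulated spectra, not the paper's data):
# a pixel spectrum is modelled as a weighted mix of endmember spectra, and
# the abundances are recovered by least squares.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_endmembers = 50, 3

# Columns of E are endmember spectra (e.g. characteristic tissue signatures).
E = np.abs(rng.normal(size=(n_bands, n_endmembers)))

# Simulate one pixel as a known mixture plus a little measurement noise.
true_abundances = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abundances + 0.01 * rng.normal(size=n_bands)

# Recover the abundances; these per-pixel abundance maps are what the
# downstream deep-learning ensemble would classify.
est, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(est, 2))
```

The recovered abundance vector is close to the true mixture, which is why unmixing works as a compact feature-extraction step ahead of the classifier.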
Finally, in ‘Machine learning modeling for predicting the utilization of invasive and non-invasive ventilation throughout the ICU duration’ [14], Schwager et al. present a machine learning model to predict the need for both invasive and non-invasive mechanical ventilation in ICU patients. Using the Philips eICU Research Institute (ERI) database, data from 2.6 million ICU patients from 2010 to 2019 were analysed. Additionally, an external test set from a single hospital in this database was used to assess the model's generalizability. Model performance was determined by comparing the model's probability predictions with the actual incidence of ventilation use, either invasive or non-invasive. The model demonstrated a prediction performance with an AUC of 0.921 for overall ventilation, 0.937 for invasive, and 0.827 for non-invasive. Factors such as high Glasgow Coma Scores, younger age, lower body mass index (BMI), and lower partial pressure of carbon dioxide (PaCO2) were highlighted as indicators of a lower likelihood of needing ventilation. The model can serve as a retrospective benchmarking tool for hospitals to assess ICU performance concerning mechanical ventilation necessity. It also enables analysis of ventilation strategy trends and risk-adjusted comparisons, with potential for future testing as a clinical decision tool for optimizing ICU ventilation management.
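The AUC figures reported above have a simple probabilistic reading: the AUC is the probability that a randomly chosen positive case (a patient who was ventilated) receives a higher model score than a randomly chosen negative case. A self-contained sketch with invented toy data, not the ERI cohort:

```python
# Sketch of how an AUC like the reported 0.921 is computed (toy data,
# not the ERI cohort): the fraction of positive/negative pairs in which
# the positive case is ranked above the negative one (ties count half).
def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    pairs = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]
    return sum(pairs) / len(pairs)

# Toy example: model probabilities vs. actual ventilation outcomes.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.1, 0.7, 0.6]
print(auc(y_true, y_score))
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which puts the model's 0.921 overall figure in context.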
Tim Hulsen is a Senior Data & AI Scientist with broad experience in both academia and industry, working on a wide range of projects, mostly in oncology. After receiving his MSc in biology in 2001, he obtained a PhD in bioinformatics in 2007 from a collaboration between the Radboud University Nijmegen and the pharma company N.V. Organon. After a two-year post-doc at the Radboud University Nijmegen, he moved to Philips Research in 2009, where he worked on biomarker discovery for one year, before moving to the data management and data science field, working on big data projects in oncology, such as Prostate Cancer Molecular Medicine (PCMM), Translational Research IT (TraIT), Movember Global Action Plan 3 (GAP3), the European Randomized Study of Screening for Prostate Cancer (ERSPC), and Liquid Biopsies and Imaging (LIMA). His most recent projects are ReIMAGINE, which is about the use of imaging to prevent unnecessary biopsies in prostate cancer, and SMART-BEAR, which is about the development of an innovative platform to support the healthy and independent living of elderly people. He is the author of several publications around big data, data management, data science, and artificial intelligence in the context of healthcare and medicine.
Francesca Manni is currently a Clinical Scientist at Philips and a guest researcher at Eindhoven University of Technology. Francesca has a background in biomedical engineering and computer vision with a focus on AI for medical imaging, having completed a PhD at Eindhoven University of Technology, where she worked in close collaboration with leading EU hospitals. She focused on the development and application of novel imaging/sensing technologies, such as hyperspectral imaging, to build specific solutions for tumour detection and minimally invasive surgery. Her dissertation work resulted in the application of novel algorithms for patient tracking during spinal surgery and cancer detection. After that, she was an AI & Data Scientist at Philips Research from 2021 to 2023, working in the AI for vision field, enabling AI solutions in healthcare and the deployment of AI algorithms in many European hospitals. During this period, she led the Healthcare group at the Big Data Value Association (BDVA). Francesca's research is reported in numerous international peer-reviewed scientific journals and top international conference proceedings in the fields of computer vision, image-guided interventions, and privacy-preserving AI techniques.
Tim Hulsen: Writing—original draft; writing—review and editing. Francesca Manni: Writing—review and editing.
Journal information:
Healthcare Technology Letters aims to bring together an audience of biomedical and electrical engineers, physical and computer scientists, and mathematicians to enable the exchange of the latest ideas and advances through rapid online publication of original healthcare technology research. Major themes of the journal include (but are not limited to):

Major technological/methodological areas:
- Biomedical signal processing
- Biomedical imaging and image processing
- Bioinstrumentation (sensors, wearable technologies, etc.)
- Biomedical informatics

Major application areas:
- Cardiovascular and respiratory systems engineering
- Neural engineering, neuromuscular systems
- Rehabilitation engineering
- Bio-robotics, surgical planning and biomechanics
- Therapeutic and diagnostic systems, devices and technologies
- Clinical engineering
- Healthcare information systems, telemedicine, mHealth