{"title":"基于深度学习和心电信号与临床数据融合的智能产前胎儿监测。","authors":"Zhen Cao, Guoqiang Wang, Ling Xu, Chaowei Li, Yuexing Hao, Qinqun Chen, Xia Li, Guiqing Liu, Hang Wei","doi":"10.1007/s13755-023-00219-w","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Cardiotocography (CTG), which measures uterine contraction (UC) and fetal heart rate (FHR), is a crucial tool for assessing fetal health during pregnancy. However, traditional computerized cardiotocography (cCTG) approaches have non-negligible calibration errors in feature extraction and heavily rely on the expertise and prior experience to define diagnostic features from CTG or FHR signals. Although previous works have studied deep learning methods for extracting CTG or FHR features, these methods still neglect the clinical information of pregnant women.</p><p><strong>Methods: </strong>In this paper, we proposed a multimodal deep learning architecture (MMDLA) for intelligent antepartum fetal monitoring that is capable of performing automatic CTG feature extraction, fusion with clinical data and classification. The multimodal feature fusion was achieved by concatenating high-level CTG features, which were extracted from preprocessed CTG signals via a convolution neural network (CNN) with six convolution layers and five fully connected layers, and the clinical data of pregnant women. Eventually, light gradient boosting machine (LGBM) was implemented as fetal status assessment classifier. The effectiveness of MMDLA was evaluated using a dataset of 16,355 cases, each of which includes FHR signal, UC signal and pertinent clinical data like maternal age and gestational age.</p><p><strong>Results: </strong>With an accuracy of 90.77% and an area under the curve (AUC) value of 0.9201, the multimodal features performed admirably. The data imbalance issue was also effectively resolved by the LGBM classifier, with a normal-F1 value of 0.9376 and an abnormal-F1 value of 0.8223.</p><p><strong>Conclusion: </strong>In summary, the proposed MMDLA is conducive to the realization of intelligent antepartum fetal monitoring.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"11 1","pages":"16"},"PeriodicalIF":4.7000,"publicationDate":"2023-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10025176/pdf/","citationCount":"0","resultStr":"{\"title\":\"Intelligent antepartum fetal monitoring via deep learning and fusion of cardiotocographic signals and clinical data.\",\"authors\":\"Zhen Cao, Guoqiang Wang, Ling Xu, Chaowei Li, Yuexing Hao, Qinqun Chen, Xia Li, Guiqing Liu, Hang Wei\",\"doi\":\"10.1007/s13755-023-00219-w\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>Cardiotocography (CTG), which measures uterine contraction (UC) and fetal heart rate (FHR), is a crucial tool for assessing fetal health during pregnancy. However, traditional computerized cardiotocography (cCTG) approaches have non-negligible calibration errors in feature extraction and heavily rely on the expertise and prior experience to define diagnostic features from CTG or FHR signals. 
Although previous works have studied deep learning methods for extracting CTG or FHR features, these methods still neglect the clinical information of pregnant women.</p><p><strong>Methods: </strong>In this paper, we proposed a multimodal deep learning architecture (MMDLA) for intelligent antepartum fetal monitoring that is capable of performing automatic CTG feature extraction, fusion with clinical data and classification. The multimodal feature fusion was achieved by concatenating high-level CTG features, which were extracted from preprocessed CTG signals via a convolution neural network (CNN) with six convolution layers and five fully connected layers, and the clinical data of pregnant women. Eventually, light gradient boosting machine (LGBM) was implemented as fetal status assessment classifier. The effectiveness of MMDLA was evaluated using a dataset of 16,355 cases, each of which includes FHR signal, UC signal and pertinent clinical data like maternal age and gestational age.</p><p><strong>Results: </strong>With an accuracy of 90.77% and an area under the curve (AUC) value of 0.9201, the multimodal features performed admirably. The data imbalance issue was also effectively resolved by the LGBM classifier, with a normal-F1 value of 0.9376 and an abnormal-F1 value of 0.8223.</p><p><strong>Conclusion: </strong>In summary, the proposed MMDLA is conducive to the realization of intelligent antepartum fetal monitoring.</p>\",\"PeriodicalId\":46312,\"journal\":{\"name\":\"Health Information Science and Systems\",\"volume\":\"11 1\",\"pages\":\"16\"},\"PeriodicalIF\":4.7000,\"publicationDate\":\"2023-03-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10025176/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Health Information Science and Systems\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s13755-023-00219-w\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/12/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"MEDICAL INFORMATICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Health Information Science and Systems","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s13755-023-00219-w","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/12/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"MEDICAL INFORMATICS","Score":null,"Total":0}
Intelligent antepartum fetal monitoring via deep learning and fusion of cardiotocographic signals and clinical data.
Purpose: Cardiotocography (CTG), which measures uterine contraction (UC) and fetal heart rate (FHR), is a crucial tool for assessing fetal health during pregnancy. However, traditional computerized cardiotocography (cCTG) approaches introduce non-negligible calibration errors during feature extraction and rely heavily on expertise and prior experience to define diagnostic features from CTG or FHR signals. Although previous work has applied deep learning to extract CTG or FHR features, these methods still neglect the clinical information of pregnant women.
Methods: We propose a multimodal deep learning architecture (MMDLA) for intelligent antepartum fetal monitoring that performs automatic CTG feature extraction, fusion with clinical data, and classification. High-level CTG features are extracted from the preprocessed CTG signals by a convolutional neural network (CNN) with six convolutional layers and five fully connected layers, and multimodal feature fusion is achieved by concatenating these features with the clinical data of the pregnant women. Finally, a light gradient boosting machine (LGBM) serves as the fetal status assessment classifier. The effectiveness of MMDLA was evaluated on a dataset of 16,355 cases, each comprising an FHR signal, a UC signal, and pertinent clinical data such as maternal age and gestational age.
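The abstract does not include an implementation, so the following is a minimal, hypothetical sketch of the described pipeline (CNN feature extraction, concatenation with clinical data, LGBM classification), assuming PyTorch and LightGBM. Layer widths, the signal length, and all variable names are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of the MMDLA data flow described in the Methods.
# Assumptions: PyTorch for the CNN feature extractor, LightGBM for the
# classifier; channel counts, layer widths, and signal length are illustrative.
import numpy as np
import torch
import torch.nn as nn
import lightgbm as lgb

class CTGFeatureExtractor(nn.Module):
    """Six 1-D convolutional layers followed by five fully connected layers,
    mirroring the structure stated in the Methods (exact widths assumed)."""
    def __init__(self, in_channels=2, signal_len=4800, feat_dim=64):
        super().__init__()
        convs, ch = [], in_channels
        for out_ch in (16, 32, 32, 64, 64, 128):            # 6 conv layers
            convs += [nn.Conv1d(ch, out_ch, kernel_size=5, padding=2),
                      nn.ReLU(),
                      nn.MaxPool1d(2)]
            ch = out_ch
        self.conv = nn.Sequential(*convs)
        flat = ch * (signal_len // 2**6)                     # length after 6 poolings
        self.fc = nn.Sequential(                             # 5 fully connected layers
            nn.Linear(flat, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, feat_dim))

    def forward(self, x):                                    # x: (N, 2, L), FHR + UC channels
        h = self.conv(x)
        return self.fc(h.flatten(1))                         # high-level CTG features

def fuse_and_classify(ctg_signals, clinical_data, labels, extractor):
    """Concatenate high-level CTG features with clinical data and train an LGBM classifier."""
    with torch.no_grad():
        feats = extractor(torch.as_tensor(ctg_signals, dtype=torch.float32)).numpy()
    fused = np.hstack([feats, clinical_data])                # multimodal fusion; clinical_data is (N, d)
    clf = lgb.LGBMClassifier(class_weight="balanced")        # weighting helps with class imbalance
    clf.fit(fused, labels)
    return clf
```

In practice the CNN would first be trained (for example, end-to-end with a classification head) before its features are fused; the sketch only illustrates the feature extraction, concatenation, and classification steps.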
Results: With an accuracy of 90.77% and an area under the curve (AUC) value of 0.9201, the multimodal features performed admirably. The data imbalance issue was also effectively resolved by the LGBM classifier, with a normal-F1 value of 0.9376 and an abnormal-F1 value of 0.8223.
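For reference, the reported metrics (accuracy, AUC, and class-wise F1 for the normal and abnormal classes) can be computed as sketched below with scikit-learn; the label encoding (0 = normal, 1 = abnormal) and variable names are assumptions, not taken from the paper.

```python
# Hypothetical evaluation sketch: accuracy, AUC, and per-class F1 scores.
# y_true, y_pred are binary labels; y_score is the predicted probability of the abnormal class.
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score

def evaluate(y_true, y_pred, y_score):
    return {
        "accuracy":    accuracy_score(y_true, y_pred),
        "auc":         roc_auc_score(y_true, y_score),
        "normal_f1":   f1_score(y_true, y_pred, pos_label=0),  # majority (normal) class
        "abnormal_f1": f1_score(y_true, y_pred, pos_label=1),  # minority (abnormal) class
    }
```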
Conclusion: The proposed MMDLA facilitates the realization of intelligent antepartum fetal monitoring.
Journal introduction:
Health Information Science and Systems is a multidisciplinary journal that integrates artificial intelligence, computer science, and information technology with health science and services. It embraces information science research on the modeling, design, development, integration, and management of health information systems, as well as smart health, artificial intelligence in medicine, computer-aided diagnosis, and medical expert systems. The scope includes:
(i) smart health, artificial intelligence in medicine, computer-aided diagnosis, medical image processing, and medical expert systems;
(ii) medical big data and medical/health/biomedicine information resources, such as patient medical records, devices and equipment, and software and tools to capture, store, retrieve, process, analyze, and optimize the use of information in the health domain;
(iii) data management, data mining, and knowledge discovery, all of which play a key role in decision making, management of public health, examination of standards, and privacy and security issues;
(iv) development of new architectures and applications for health information systems.