{"title":"Seizure detection by analyzing the number of channels selected by cross-correlation using TUH EEG seizure corpus","authors":"Ximena Montoya, Frank Díaz, José Félix, Jesus Paucar, J. Ferrer, Pablo Fonseca","doi":"10.1117/12.2670106","DOIUrl":"https://doi.org/10.1117/12.2670106","url":null,"abstract":"Status epilepticus is defined as a single seizure lasting more than 5 minutes or several seizures within that period. For seizure detection, electroencephalograms are visually analyzed by physicians, but this has certain limitations, which can be reduced using algorithms that identify seizure patterns. Usually, such algorithms use all the electroencephalography channels, which increases computational time. Therefore, this paper proposes an algorithm that verifies whether using fewer channels, chosen for having the least cross-correlation, can lead to better seizure detection metrics. Of the classification algorithms used, XGBoost shows the most noticeable difference in sensitivity between 3 channels (80.64%) and 22 channels (78.19%). Also, ”FP1-F7”, ”A1-T3”, ”P3-O1”, and ”FP1-F3” are the best channels for seizure detection. The research showed that using fewer channels selected by cross-correlation can improve seizure detection.","PeriodicalId":147201,"journal":{"name":"Symposium on Medical Information Processing and Analysis","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124508866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
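The channel-selection idea in this abstract — keep the channels that are least cross-correlated with the rest — can be sketched as follows. This is a minimal illustration on synthetic data; the function name and the exact ranking rule are our assumptions, not the authors' implementation:

```python
import numpy as np

def select_channels(eeg, k=3):
    """Rank channels by mean absolute correlation with all other
    channels and keep the k least-correlated (least redundant) ones."""
    corr = np.corrcoef(eeg)              # (n_channels, n_channels)
    np.fill_diagonal(corr, 0.0)          # ignore self-correlation
    redundancy = np.abs(corr).mean(axis=1)
    return np.argsort(redundancy)[:k]

rng = np.random.default_rng(0)
base = rng.standard_normal(500)
eeg = np.vstack([
    base + 0.1 * rng.standard_normal(500),   # two highly correlated channels
    base + 0.1 * rng.standard_normal(500),
    rng.standard_normal(500),                # one independent channel
])
print(select_channels(eeg, k=1))  # → [2], the least-correlated channel
```

The selected channel indices would then feed the classifier (XGBoost in the paper) instead of the full 22-channel montage.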
{"title":"Assessing phoneme distribution for speech modeling","authors":"J. A. Parra, C. Calvache, M. Zañartu","doi":"10.1117/12.2670042","DOIUrl":"https://doi.org/10.1117/12.2670042","url":null,"abstract":"Phonetically balanced texts are used to study different voice and speech characteristics. In the context of clinical work and research, these texts provide a standard for quantifying perceptual, acoustic, or aerodynamic assessments. Recent modeling efforts are being devoted to describing long-term speech behaviors based on a collection of sustained phonemes. However, comprehensive descriptions of phoneme distributions representative of connected speech are not readily available. Thus, the present study introduces a method to estimate phoneme distributions using text data mining, as an alternative to existing power law methods. The procedure used for the decomposition of texts into phonemes, the estimation of the phonetic distributions, and the comparisons between different texts, conversational speech, and standard reading passages are discussed. The results are presented using histograms and R-squared determination coefficients for the case of the English language, although the approach can be easily applied to other languages. A discussion of the proposed method, results, and limitations is presented.","PeriodicalId":147201,"journal":{"name":"Symposium on Medical Information Processing and Analysis","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125423208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
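The core comparison — estimating phoneme relative frequencies from a transcription and comparing two distributions with an R-squared coefficient — can be sketched like this. The phoneme inventory and token lists are invented toy data, not the study's corpus:

```python
from collections import Counter
import numpy as np

def phoneme_distribution(phonemes, inventory):
    """Relative frequency of each inventory phoneme in a transcription."""
    counts = Counter(phonemes)
    total = len(phonemes)
    return np.array([counts.get(p, 0) / total for p in inventory])

def r_squared(p, q):
    """Coefficient of determination of distribution q against p."""
    ss_res = np.sum((p - q) ** 2)
    ss_tot = np.sum((p - p.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

inventory = ["AA", "IY", "S", "T", "N"]
passage = ["S", "T", "N", "IY", "S", "AA", "T", "S", "N", "IY"] * 2
speech = passage[:-1] + ["S"]          # conversational sample, one token off
p = phoneme_distribution(passage, inventory)
q = phoneme_distribution(speech, inventory)
print(round(r_squared(p, q), 2))  # → 0.75
```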
{"title":"Superficial white matter shape characterization using hierarchical clustering and a multi-subject bundle atlas","authors":"C. Mendoza, C. Román, Joaquín Molina, C. Poupon, J. F. Mangin, C. Hernández, P. Guevara","doi":"10.1117/12.2669738","DOIUrl":"https://doi.org/10.1117/12.2669738","url":null,"abstract":"The functional and structural organization of the superficial white matter (SWM) has yet to be fully described. In particular, the shape of its short-range connections has not been assessed in detail using diffusion Magnetic Resonance Imaging (dMRI) tractography. This work aims to characterize the different shapes of the short-range association connections present in an SWM multi-subject bundle atlas derived from probabilistic dMRI tractography datasets. First, we calculated a representative centroid shape for each atlas bundle. Next, we computed a distance matrix that encodes the similarity between every pair of centroids. For the distance matrix computation, centroids were first aligned using a streamline-based registration, reducing the effect of 3D spatial separation and allowing us to focus only on shape differences. Then, we applied a hierarchical clustering algorithm over the affinity graph derived from the distance matrix. As a result, we obtained ten classes with distinctive shapes, ranging from straight-line forms to U and C arrangements. The most predominant shapes were: (i) short open U, (ii) short closed U, and (iii) short C. Moreover, we used the shape information to filter out noisy streamlines in the atlas bundles and applied an automatic segmentation algorithm to 25 subjects of the HCP database. Our results show that the filtering steps help to segment denser bundles with fewer outliers, improving the identification of the brain’s short fibers.","PeriodicalId":147201,"journal":{"name":"Symposium on Medical Information Processing and Analysis","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128729126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
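The clustering step — hierarchical clustering over a pairwise shape-distance matrix between bundle centroids — can be sketched with SciPy on a toy matrix. The distances below are invented; the paper's streamline-based registration and its distance measure are not reproduced:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy symmetric shape-distance matrix for 4 centroids:
# centroids 0-1 and 2-3 are nearly identical in shape.
D = np.array([[0.0, 0.1, 2.0, 2.1],
              [0.1, 0.0, 2.2, 2.0],
              [2.0, 2.2, 0.0, 0.1],
              [2.1, 2.0, 0.1, 0.0]])

Z = linkage(squareform(D), method="average")  # linkage takes condensed form
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # two shape classes: 0-1 together, 2-3 together
```

In the paper the same idea yields ten shape classes; here `t=2` simply cuts the toy dendrogram into two.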
{"title":"The transfer learning gap: quantifying transfer learning in a medical image case","authors":"Javier Guerra-Librero, M. Bento, R. Frayne","doi":"10.1117/12.2670071","DOIUrl":"https://doi.org/10.1117/12.2670071","url":null,"abstract":"Transfer learning is a widely used technique in medical imaging and other research fields where a scarcity of available data limits the training of machine learning algorithms. Despite its widespread use and an extensive supporting body of research, the specific mechanisms behind transfer learning are not completely understood. In this work, we quantify the effectiveness of transfer learning in medical image classification scenarios for different numbers of training set images. We trained ResNet50, a popular deep learning model used in medical image classification, under two scenarios: 1) applying transfer learning to a pre-trained network and 2) training the same model from scratch (i.e., starting with randomly initialized weights). We analyzed the performance of the model under both scenarios as the number of training set images increased from 5,000 to 160,000 medical images. We introduced and evaluated a metric, the transfer learning gap (TLG), to quantify the differences between the two scenarios. The TLG measures the difference between the areas under the loss curves (AULCs) when transfer learning is applied and when the model is trained from scratch. Our experiments show that as the training set size increases, the TLG trends to zero, suggesting that the advantage of using transfer learning decreases. The trend in the AULC suggests a training set size at which the two scenarios would have equal losses; at this point, the model reaches the same performance regardless of whether transfer learning or training from scratch was used. This study is important because it provides a novel metric to understand and quantify the effect of transfer learning.","PeriodicalId":147201,"journal":{"name":"Symposium on Medical Information Processing and Analysis","volume":"208 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116213843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
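The TLG as described — the difference between the areas under the two loss curves — is simple to compute once the curves are recorded. A sketch with synthetic loss curves (the curve shapes are our assumption, standing in for real training logs):

```python
import numpy as np

def aulc(losses):
    """Area under a loss curve, trapezoidal rule over epochs."""
    return np.trapz(losses)

def transfer_learning_gap(loss_scratch, loss_pretrained):
    """TLG: AULC when training from scratch minus AULC with transfer."""
    return aulc(loss_scratch) - aulc(loss_pretrained)

epochs = np.arange(10)
loss_scratch = np.exp(-0.2 * epochs)            # slower convergence
loss_pretrained = 0.6 * np.exp(-0.4 * epochs)   # head start from pretraining
gap = transfer_learning_gap(loss_scratch, loss_pretrained)
print(gap > 0)  # → True: transfer learning still pays off here
```

As the paper reports, recomputing this gap at growing training-set sizes and watching it trend to zero locates the point where pretraining stops helping.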
{"title":"Towards breastfeeding self-efficacy and postpartum depression estimation based on analysis of free-speech interviews through natural language processing","authors":"Luz Itzel Valdeolivar-Hernandez, M. E. Flores Quijano, Juan Carlos Echeverría-Arjonilla, J. Perez-Gonzalez, O. Piña-Ramírez","doi":"10.1117/12.2669883","DOIUrl":"https://doi.org/10.1117/12.2669883","url":null,"abstract":"The Edinburgh Postnatal Depression Scale (EPDS) and the Breastfeeding Self-Efficacy Scale (BSES) are standardized questionnaires used to screen for postpartum depression and self-perceived breastfeeding performance. Natural Language Processing (NLP), in turn, comprises machine learning techniques that analyze human language to extract relevant, computer-interpretable information. In this work we applied an NLP toolchain that includes a typical preprocessing stage and probabilistic topic modeling through Latent Dirichlet Allocation (LDA) to find the two most relevant topics within each of six study groups (low, medium, and high scores on the BSES and EPDS). Each LDA-modeled topic consisted of 30 word terms (tokens), which were organized in Venn diagrams contrasting the mutually exclusive tokens of the low and high scores on each scale. Coherence and log-perplexity topic modeling performance metrics were computed. We found that the LDA models have distinguishable tokens between low and high scores of the BSES and EPDS. However, the most remarkable findings were two subsets of tokens, one related to newborn care and another to newborn intake, correlated with low and high postpartum depression risk according to the EPDS, respectively.","PeriodicalId":147201,"journal":{"name":"Symposium on Medical Information Processing and Analysis","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126366902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contrast-Enhanced UltraSound (CEUS)-based characterization of solid renal masses: a role for quantitative imaging approaches","authors":"B. Varghese, Marielena Rivas, S. Cen, X. Lei, Michael Chang, K. Lee, Jamie Gunter, Renata L. Amoedo, Mario Franco, D. Hwang, B. Desai, Kevin G. King, P. Cheng, V. Duddalwar","doi":"10.1117/12.2670366","DOIUrl":"https://doi.org/10.1117/12.2670366","url":null,"abstract":"In this prospective study, forty patients with solid renal masses who underwent contrast-enhanced ultrasound (CEUS) examinations were selected. Using the ImageJ software, renal masses and adjacent normal tissue were manually segmented from CEUS cine exams obtained using the built-in software of an RS85 Samsung scanner. For the radiomics analysis, one frame each representing the precontrast, early, peak, and delayed enhancement phases was selected from each CEUS clip after segmentation. From each region of interest (ROI) within a tissue-normalized renal mass, 112 radiomic metrics were extracted using custom Matlab® code. For the time-intensity curve (TIC) analysis, the segmented ROIs were plotted as a function of time, and the data were fit to a washout curve. From these time-signal intensity curves, quantitative perfusion parameters were generated. The Wilcoxon rank-sum test or the univariate independent t-test, depending on data normality, was used for descriptive analyses. Agreement was analyzed using the Kappa statistic. Of the 40 solid masses, 31 (77.5%) were malignant and 9 (22.5%) were benign based on histopathology. Excellent agreement was found between histopathological confirmation and CEUS-based visual assessment in discriminating solid renal masses into benign vs. malignant categories (κ=0.89, 95% confidence interval (CI): (0.77, 1)). The total agreement between the two was 92.5%. The sensitivity and specificity of CEUS-based visual assessment were 100% and 66.7%, respectively. Quantitative analysis of TIC metrics revealed statistically significant differences between the malignant and benign groups and between clear cell renal cell carcinoma (ccRCC) and papillary renal cell carcinoma (pRCC) subtypes. The study shows excellent agreement between visual assessment and histopathology, but with room for improvement in specificity.","PeriodicalId":147201,"journal":{"name":"Symposium on Medical Information Processing and Analysis","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134308422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
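The TIC step — fitting ROI intensity over time to a wash-in/wash-out curve and deriving perfusion parameters such as time-to-peak — can be sketched as below. The curve model, its parameter names, and the synthetic data are our assumptions; the abstract does not specify the exact washout model used:

```python
import numpy as np
from scipy.optimize import curve_fit

def wash_in_out(t, a, k_in, k_out):
    """Assumed wash-in/wash-out model for a CEUS time-intensity curve."""
    return a * (1 - np.exp(-k_in * t)) * np.exp(-k_out * t)

# Synthetic mean-ROI intensity over 60 s, with light noise.
t = np.linspace(0, 60, 121)
rng = np.random.default_rng(1)
signal = wash_in_out(t, 5.0, 0.5, 0.05) + 0.05 * rng.standard_normal(t.size)

params, _ = curve_fit(wash_in_out, t, signal, p0=[4.0, 0.4, 0.1], maxfev=5000)
time_to_peak = t[np.argmax(wash_in_out(t, *params))]
print(f"fitted amplitude={params[0]:.2f}, time to peak={time_to_peak:.1f} s")
```

Parameters like these, computed per mass, are what the study compares between benign and malignant groups.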
{"title":"Unsupervised white blood cell characterization in the latent space of a variational autoencoder","authors":"J. Tarquino, E. Romero","doi":"10.1117/12.2669746","DOIUrl":"https://doi.org/10.1117/12.2669746","url":null,"abstract":"Leukemia diagnosis and therapy planning are both based on classifying peripheral blood images under a scenario of high inter-/intra-observer variability. In such applications, automatic image processing and classification strategies have achieved outstanding recognition results; however, they fully depend on the quality of the annotated data. Unlike supervised classification approaches, which build upon label transformations, the methodology presented herein introduces an unsupervised white blood cell characterization in the latent space of a Variational Autoencoder (VAE). The latent space is constructed from 128 parameters of 64 Gaussian distributions, and k-means clustering can then retrieve cell groups that are meaningful for leukemia diagnosis. The whole procedure is assessed in two ways: 1) evaluation of the 128-dimensional VAE latent space for differentiating cells with higher diagnostic value (blast cells) from other peripheral blood components under multiple supervised classification strategies, and 2) quantification of the capacity of VAE-parameter clustering for unsupervised separation of blast and non-blast cells. The accuracies obtained in each experiment, 0.888 and 0.757 respectively, suggest that the presented strategy successfully characterizes white blood cells and provides a representation space where subtle cell differences can be objectively measured.","PeriodicalId":147201,"journal":{"name":"Symposium on Medical Information Processing and Analysis","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127806049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
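The unsupervised part — k-means on VAE latent codes to separate blast from non-blast cells — can be sketched with synthetic latent vectors standing in for the encoder output. The separation below is idealized; real latent codes overlap far more:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in 128-D latent codes, as a VAE encoder might produce.
blast = rng.normal(loc=2.0, size=(50, 128))
non_blast = rng.normal(loc=-2.0, size=(50, 128))
latents = np.vstack([blast, non_blast])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(latents)
labels = km.labels_
# Well-separated latent groups fall into different clusters:
print(set(labels[:50].tolist()), set(labels[50:].tolist()))
```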
{"title":"Monitoring of lung ultrasound acquisition using volume sweep imaging protocol","authors":"Naomi Guevara, Ximena Montoya, Rodrigo Alarcón, B. Castañeda, Stefano Enrique Romero","doi":"10.1117/12.2670172","DOIUrl":"https://doi.org/10.1117/12.2670172","url":null,"abstract":"The lung is vulnerable to different diseases, and ultrasound is one of the imaging technologies used for their diagnosis; however, it requires an expert to perform both the acquisition and its interpretation. In Peru, many rural areas have little technology and untrained personnel, so one solution is to split acquisition from diagnosis: local personnel are trained to perform the Volume Sweep Imaging protocol, and the radiologist receives only the video and makes the diagnosis from it. However, some mistakes made during the acquisition are detected late. To address this problem, this paper proposes three different algorithms: Adaptive Threshold, Threshold + non-consecutive minimum-distance analysis, and Derivative Analysis + Peak Detection, which achieve their best accuracy, 77.96%, in the longitudinal direction; nevertheless, their working principle needs to be reformulated for transverse acquisitions, where accuracy is below 50%. These results show it is possible to validate the protocol for lung ultrasound and give feedback to the locally trained physicians.","PeriodicalId":147201,"journal":{"name":"Symposium on Medical Information Processing and Analysis","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116878213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
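The Derivative Analysis + Peak Detection idea — counting sweep passes as peaks in a motion-derived trace — can be sketched with SciPy. The synthetic trace below is our stand-in; the real signal is derived from the ultrasound video:

```python
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 6, 600)
trace = np.sin(2 * np.pi * 0.25 * t) ** 2    # one bump per sweep pass

# Require a minimum height and spacing so small wiggles are not counted.
peaks, _ = find_peaks(trace, height=0.5, distance=50)
print(len(peaks))  # → 3 sweep passes detected
```

Comparing the detected pass count and timing against the protocol's expected pattern is what lets a monitor flag a faulty acquisition early.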
{"title":"Qualitative and quantitative comparison of simulated ultrasound images generated by delay-and-sum and minimum variance techniques","authors":"M. Kohler, L. Neves, A. Zimbico, J. Maia, A. Assef, E. Costa","doi":"10.1117/12.2669929","DOIUrl":"https://doi.org/10.1117/12.2669929","url":null,"abstract":"Beamforming has been used successfully for real-time ultrasound imaging and its applications. Most commercially available ultrasound systems still implement standard Delay-and-Sum (DAS) beamforming for B-mode imaging. This technique applies time delays to the ultrasonic radiofrequency (RF) echoes received by the individual transducer elements and coherently sums them to align the backscattered signals at the focal point. However, the transducer aperture size and the system operating frequency limit the image resolution and contrast achievable with DAS. For this reason, new methods based on adaptive beamforming algorithms, such as Minimum Variance (MV), have been studied to improve the quality of the signal received by the transducer and reduce the effects of noise and interference. This work compares B-mode ultrasound images generated by the DAS technique and by MV combined with DAS beamforming, using the Field II acoustic field simulation software. A simulated phantom with 18 targets, separated into three groups and surrounded by a uniform background, was created. For the qualitative analysis, two-dimensional and three-dimensional images simulated with the DAS and MV beamformers are presented. The quantitative analyses compared the performance of MV against DAS beamforming using axial and lateral full width at half maximum (FWHM) and geometric distortion ratio (GDR) measurements of the central target group. According to those metrics, no significant changes were observed in the axial FWHM. However, the MV method considerably reduced the lateral FWHM, by more than 40%, with a minimum GDR of 37%.","PeriodicalId":147201,"journal":{"name":"Symposium on Medical Information Processing and Analysis","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117342216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
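The DAS principle the comparison rests on — delay each element's RF trace so echoes from the focal point align, then sum coherently — can be sketched as follows. Integer-sample delays and a toy echo are used for clarity; Field II and the MV beamformer are not reproduced:

```python
import numpy as np

def delay_and_sum(rf, delays):
    """Advance each element trace by its focusing delay (in samples)
    and sum coherently across elements."""
    n_el, n_s = rf.shape
    out = np.zeros(n_s)
    for i in range(n_el):
        d = delays[i]
        out[:n_s - d] += rf[i, d:]
    return out

# The same 9-sample echo arrives d samples later on each element.
n_s, pulse = 200, np.hanning(9)
delays = np.array([0, 3, 6, 3, 0])
rf = np.zeros((5, n_s))
for i, d in enumerate(delays):
    rf[i, 50 + d:59 + d] = pulse

beamformed = delay_and_sum(rf, delays)
print(np.argmax(beamformed))  # → 54: all five echoes sum in phase
```

MV differs from this in one step: instead of a plain sum, it weights the aligned element signals with data-adaptive apodization weights, which is what narrows the lateral FWHM.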
{"title":"Imputing missing electroencephalography data using graph signal processing","authors":"Alejandro J. Weinstein","doi":"10.1117/12.2669735","DOIUrl":"https://doi.org/10.1117/12.2669735","url":null,"abstract":"Graph Signal Processing (GSP) is a framework for analyzing signals defined over a graph. Considering the electrodes used to record the electroencephalogram (EEG) as a sensor network makes it possible to use GSP to analyze EEG signals. Using the graph over which the signal is defined allows one to take advantage of a signal structure that is ignored by classic signal processing approaches. However, many details of how to use GSP to analyze the EEG have not been studied in the literature. Here we show an example of how to impute missing EEG data using GSP. We show that GSP reconstructs missing EEG data with a lower error than a classic approach based on radial basis functions, confirming that the underlying graph over which the signal is defined contains relevant information that can be exploited to improve a given signal processing task. By studying two approaches for building the graph (k-nearest neighbors and a thresholded Gaussian kernel) and the effect of their parameters, we highlight the importance of building the graph appropriately. These results show the potential of incorporating GSP techniques into the EEG processing pipeline.","PeriodicalId":147201,"journal":{"name":"Symposium on Medical Information Processing and Analysis","volume":"1928 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126890204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
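One way to impute a missing electrode sample with GSP, in the spirit described, is Tikhonov (Laplacian) regularization over the electrode graph: fill missing vertices so the signal varies smoothly across graph edges. A small sketch; the 4-node graph and weights are invented, whereas the paper builds its graph from electrode positions with k-nearest neighbors or a thresholded Gaussian kernel:

```python
import numpy as np

def impute_graph_signal(x, mask, W, alpha=0.1):
    """Solve min_z ||S(z - x)||^2 + alpha * z^T L z, where S selects
    the observed vertices and L is the combinatorial graph Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    S = np.diag(mask.astype(float))
    return np.linalg.solve(S + alpha * L, S @ x)

# Path graph over 4 electrodes; the sample at electrode 2 is missing.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 2.0, 0.0, 4.0])   # stored value at node 2 is ignored
mask = np.array([True, True, False, True])
x_hat = impute_graph_signal(x, mask, W)
print(round(x_hat[2], 2))  # smooth estimate, near the neighbor average of 3
```

The quality of the estimate hinges on `W`, which is exactly the paper's point about building the graph appropriately.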