Explaining Deep Classification of Time-Series Data with Learned Prototypes
Alan H Gee, Diego Garcia-Olano, Joydeep Ghosh, David Paydarfar
CEUR Workshop Proceedings, vol. 2429, pp. 15-22, August 2019. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8050893/pdf/nihms-1668684.pdf

Abstract: The emergence of deep learning networks raises a need for explainable AI so that users and domain experts can be confident applying them to high-risk decisions. In this paper, we leverage the latent space induced by deep learning models to learn stereotypical representations, or "prototypes", during training to elucidate the algorithmic decision-making process. We study how prototypes affect classification decisions for two-dimensional time-series data in three settings: (1) electrocardiogram (ECG) waveforms to detect clinical bradycardia, a slowing of heart rate, in preterm infants; (2) respiration waveforms to detect apnea of prematurity; and (3) audio waveforms to classify spoken digits. We improve upon existing models by optimizing for increased prototype diversity and robustness, visualize how the model uses these prototypes in the latent space to distinguish classes, and show that the prototypes learn real-world features - bradycardia in ECG, apnea in respiration, and articulation in speech - as well as features within sub-classes, yielding explainable insights during classification.

{"title":"The New SNOMED CT International Medicinal Product Model.","authors":"Olivier Bodenreider, Julie James","doi":"","DOIUrl":"","url":null,"abstract":"<p><strong>Objectives: </strong>To present the new SNOMED CT international medicinal product model.</p><p><strong>Methods: </strong>We present the main elements of the model, with focus on types of entities and their interrelations, definitional attributes for clinical drugs, and categories of groupers.</p><p><strong>Results: </strong>We present the status of implementation as of July 2018 and illustrate differences between the original and new models through an example.</p><p><strong>Conclusions: </strong>Benefits of the new medicinal product model include comprehensive representation of clinical drugs, logical definitions with necessary and sufficient conditions for all medicinal product entities, better high-level organization through distinct categories of groupers, and compliance with international standards.</p>","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":"2285 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9584358/pdf/nihms-1840460.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40665758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personalized Health Knowledge Graph
Amelie Gyrard, Manas Gaur, Saeedeh Shekarpour, Krishnaprasad Thirunarayan, Amit Sheth
CEUR Workshop Proceedings, vol. 2317, October 2018. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8532078/pdf/nihms-1743812.pdf

Abstract: Current health applications do not adequately take into account contextual and personalized knowledge about patients. To design "Personalized Coach for Healthcare" applications that manage chronic diseases, there is a need to create a Personalized Healthcare Knowledge Graph (PHKG) that takes into consideration a patient's health condition (personalized knowledge) and enriches it with contextualized knowledge from environmental sensors and the Web of Data (e.g., symptoms and treatments for diseases). Developing a PHKG requires aggregating knowledge from heterogeneous sources such as Internet of Things (IoT) devices, clinical notes, and Electronic Medical Records (EMRs). In this paper, we explain the challenges of collecting, managing, analyzing, and integrating patients' health data from these sources in order to synthesize and deduce meaningful information embodying the vision of the Data, Information, Knowledge, and Wisdom (DIKW) pyramid. Furthermore, we sketch a solution that combines (1) IoT data analytics and (2) explicit knowledge, and illustrate it using three chronic disease use cases: asthma, obesity, and Parkinson's.

Automated Pain Detection in Facial Videos of Children using Human-Assisted Transfer Learning
Xiaojing Xu, Kenneth D Craig, Damaris Diaz, Matthew S Goodwin, Murat Akcakaya, Büşra Tuğçe Susam, Jeannie S Huang, Virginia R de Sa
CEUR Workshop Proceedings, vol. 2142, pp. 10-21, July 2018. DOI: 10.1007/978-3-030-12738-1_12. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6352979/pdf/nihms-1001649.pdf

Abstract: Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity provides sensitive and specific information about pain, and computer vision algorithms have been developed to automatically detect Facial Action Units (AUs) defined by the Facial Action Coding System (FACS). Our prior work utilized information from computer vision, i.e., automatically detected facial AUs, to develop classifiers that distinguish between pain and no-pain conditions. However, applying pain/no-pain classifiers based on automated AU codings across different environmental domains results in diminished performance, whereas classifiers based on manually coded AUs show less environment-driven variability. In this paper, we train a machine learning model to recognize pain using AUs coded by a computer vision system embedded in a software package called iMotions. We also study the relationship between iMotions (automatic) and human (manual) AU codings. We find that automatically coded AUs differ from those coded by a human trained in the FACS system, and that the human coder is less sensitive to environmental changes. To improve classification performance, we apply transfer learning by training another machine learning model to map automated AU codings to a subspace of manual AU codings, enabling more robust pain recognition when only automatically coded AUs are available for the test data. With this transfer learning method, we improved the Area Under the ROC Curve (AUC) on independent data from new participants in our target domain from 0.67 to 0.72.

Towards Automated Pain Detection in Children using Facial and Electrodermal Activity
Xiaojing Xu, Büşra Tuğçe Susam, Hooman Nezamfar, Damaris Diaz, Kenneth D Craig, Matthew S Goodwin, Murat Akcakaya, Jeannie S Huang, Virginia R de Sa
CEUR Workshop Proceedings, vol. 2142, pp. 208-211, July 2018. DOI: 10.1007/978-3-030-12738-1_13. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6352962/pdf/nihms-1001656.pdf

Abstract: Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity and electrodermal activity (EDA) provide rich information about pain, and both have been used in automated pain detection. In this paper, we discuss preliminary steps towards fusing models trained on video and EDA features, respectively. We compare fusion models using the original video features with those using transferred video features, which are less sensitive to environmental changes. We demonstrate the benefit of the fusion and of the transferred video features in a special test case involving domain adaptation, with improved performance relative to using EDA or video features alone.

Personalizing Mobile Fitness Apps using Reinforcement Learning
Mo Zhou, Yonatan Mintz, Yoshimi Fukuoka, Ken Goldberg, Elena Flowers, Philip Kaminsky, Alejandro Castillejo, Anil Aswani
CEUR Workshop Proceedings, vol. 2068, March 2018. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7220419/pdf/nihms966774.pdf

Abstract: Despite the vast number of mobile fitness applications (apps) and their potential advantages in promoting physical activity, many existing apps lack behavior-change features and are not able to maintain behavior-change motivation. This paper describes a novel fitness app called CalFit, which implements important behavior-change features such as dynamic goal setting and self-monitoring. CalFit uses a reinforcement learning algorithm to generate personalized daily step goals that are challenging but attainable. We conducted the Mobile Student Activity Reinforcement (mSTAR) study with 13 college students to evaluate the efficacy of the CalFit app. The control group (receiving goals of 10,000 steps/day) had a decrease in daily step count of 1,520 (SD ± 740) between baseline and 10 weeks, compared to an increase of 700 (SD ± 830) in the intervention group (receiving personalized step goals). The difference in daily steps between the two groups was 2,220 and statistically significant (p = 0.039).

Towards Automatic Generation of Portions of Scientific Papers for Large Multi-Institutional Collaborations Based on Semantic Metadata
MiHyun Jang, Tejal Patted, Yolanda Gil, Daniel Garijo, Varun Ratnakar, Jie Ji, Prince Wang, Aggie McMahon, Paul M Thompson, Neda Jahanshad
CEUR Workshop Proceedings, vol. 1931, pp. 63-70, October 2017. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6053267/pdf/nihms980712.pdf

Abstract: Scientific collaborations involving multiple institutions are increasingly commonplace. It is not unusual for publications to have dozens or hundreds of authors, and in some cases even a few thousand. Gathering the information for such papers can be very time-consuming, since the author list must include authors who made different kinds of contributions and whose affiliations are hard to track. Similarly, when datasets are contributed by multiple institutions, the collection and processing details may be hard to assemble because of the many individuals involved. We present our work to date on automatically generating author lists and other portions of scientific papers for multi-institutional collaborations, based on the metadata created to represent the people, data, and activities involved. Our initial focus is ENIGMA, a large international collaboration for neuroimaging genetics.

{"title":"UArizona at the CLEF eRisk 2017 Pilot Task: Linear and Recurrent Models for Early Depression Detection.","authors":"Farig Sadeque, Dongfang Xu, Steven Bethard","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The 2017 CLEF eRisk pilot task focuses on automatically detecting depression as early as possible from a users' posts to Reddit. In this paper we present the techniques employed for the University of Arizona team's participation in this early risk detection shared task. We leveraged external information beyond the small training set, including a preexisting depression lexicon and concepts from the Unified Medical Language System as features. For prediction, we used both sequential (recurrent neural network) and non-sequential (support vector machine) models. Our models perform decently on the test data, and the recurrent neural models perform better than the non-sequential support vector machines while using the same feature sets.</p>","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":"1866 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5654552/pdf/nihms912392.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35552112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}