{"title":"Enhanced recognition of adolescents with schizophrenia and a computational contrast of their neuroanatomy with healthy patients using brainwave signals","authors":"Ejay Nsugbe","doi":"10.1002/ail2.79","DOIUrl":"10.1002/ail2.79","url":null,"abstract":"<p>Schizophrenia is a psychiatric disorder prevalent in individuals around the world. It is typically diagnosed through a combination of interview-style questioning of the patient and a review of their medical record, but these methods have been widely criticised as subjective across psychiatrists and largely unreplicable. Schizophrenia also occurs in adolescents, who are said to be even more challenging to diagnose, largely because delusions can be mistaken for childhood fantasies and because methods established for adult patients are applied to diagnose adolescents. This work investigates the use of electroencephalography (EEG) signals acquired from adolescent patients aged 10–14 years, alongside signal processing methods and machine learning modelling, towards the diagnosis of adolescent schizophrenia. The machine learning results showed that linear discriminant analysis (LDA) and fine K-nearest neighbour (KNN) produced the best recognition results among models with easy and hard interpretability, respectively. Additionally, a computational method was applied to contrast the neuroanatomical activation patterns in the brains of the schizophrenic and healthy adolescents, where the neural activation patterns of the healthy adolescents showed greater consistency than those of the schizophrenic adolescents.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.79","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42576192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
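The pipeline this abstract describes (feature extraction from EEG signals followed by LDA and KNN classification) can be sketched as below. This is a toy illustration on synthetic signals, not the paper's data or exact feature set, and "fine KNN" is approximated here as 1-nearest-neighbour (its meaning in the MATLAB classifier taxonomy).

```python
# Hedged sketch: summary features from EEG-like signals, then LDA / 1-NN
# classification with cross-validation. All signals here are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

def features(sig):
    """Toy per-channel summary features: mean absolute value and variance."""
    return np.hstack([np.abs(sig).mean(axis=1), sig.var(axis=1)])

# 60 synthetic "recordings" (4 channels x 256 samples); the two classes
# differ slightly in signal amplitude, standing in for patient vs control.
X, y = [], []
for label in (0, 1):
    for _ in range(30):
        scale = 1.0 + 0.5 * label
        X.append(features(rng.normal(0, scale, size=(4, 256))))
        y.append(label)
X, y = np.array(X), np.array(y)

lda_acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
knn_acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=5).mean()
```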
{"title":"Twin neural network regression","authors":"Sebastian Johann Wetzel, Kevin Ryczko, Roger Gordon Melko, Isaac Tamblyn","doi":"10.1002/ail2.78","DOIUrl":"10.1002/ail2.78","url":null,"abstract":"<p>We introduce twin neural network regression (TNNR). This method predicts differences between the target values of two different data points rather than the targets themselves. The solution of a traditional regression problem is then obtained by averaging over an ensemble of all predicted differences between the targets of an unseen data point and all training data points. Whereas ensembles are normally costly to produce, TNNR intrinsically creates an ensemble of predictions twice the size of the training set while training only a single neural network. Since ensembles have been shown to be more accurate than single models, this property naturally transfers to TNNR. We show that TNNR is able to compete with, or yield more accurate predictions than, other state-of-the-art methods across different data sets. Furthermore, TNNR is constrained by self-consistency conditions. We find that the violation of these conditions provides a signal for the prediction uncertainty.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.78","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77494627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
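The TNNR scheme the abstract describes can be sketched in a few lines: a single model F(x_i, x_j) is trained to predict the difference y_i - y_j, and a prediction for an unseen point is the average of F(x, x_j) + y_j over all training anchors x_j. This is an illustrative toy (an sklearn MLP on synthetic data), not the authors' architecture or training setup.

```python
# Hedged sketch of twin neural network regression (TNNR) on a toy problem.
import numpy as np
from itertools import product
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 2))
y = X[:, 0] ** 2 + X[:, 1]                       # toy regression target

# Build all ordered training pairs (x_i, x_j) labelled with y_i - y_j.
idx = np.array(list(product(range(len(X)), repeat=2)))
pairs = np.hstack([X[idx[:, 0]], X[idx[:, 1]]])
diffs = y[idx[:, 0]] - y[idx[:, 1]]

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                     random_state=0).fit(pairs, diffs)

def tnnr_predict(x_new):
    """Ensemble prediction: average F(x_new, x_j) + y_j over all anchors x_j."""
    queries = np.hstack([np.tile(x_new, (len(X), 1)), X])
    return float(np.mean(model.predict(queries) + y))

estimate = tnnr_predict(np.array([0.3, -0.2]))   # true value is 0.3**2 - 0.2
```

One trained network thus yields as many difference-based estimates as there are training points; the paper's self-consistency conditions (e.g. F(x, x) should be zero) can be checked on the same model.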
{"title":"Evaluating perceptual and semantic interpretability of saliency methods: A case study of melanoma","authors":"Harshit Bokadia, Scott Cheng-Hsin Yang, Zhaobin Li, Tomas Folke, Patrick Shafto","doi":"10.1002/ail2.77","DOIUrl":"10.1002/ail2.77","url":null,"abstract":"<p>In order to be useful, XAI explanations have to be faithful to the AI system they seek to elucidate and also interpretable to the people who engage with them. There exist multiple algorithmic methods for assessing faithfulness, but this is not so for interpretability, which is typically assessed only through expensive user studies. Here we propose two complementary metrics to algorithmically evaluate the interpretability of saliency map explanations. One metric assesses perceptual interpretability by quantifying the visual coherence of the saliency map. The second metric assesses semantic interpretability by capturing the degree of overlap between the saliency map and textbook features, that is, the features human experts use to make a classification. We use a melanoma dataset and a deep neural network classifier as a case study to explore how our two interpretability metrics relate to each other and to a faithfulness metric. Across six commonly used saliency methods, we find that none achieves high scores on all three metrics for all test images, but that different methods perform well in different regions of the data distribution. This variation between methods can be leveraged to consistently achieve high interpretability and faithfulness by using our metrics to inform saliency mask selection on a case-by-case basis. Our interpretability metrics provide a new way to evaluate saliency-based explanations and allow for the adaptive combination of saliency-based explanation methods.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.77","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45070752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
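The semantic-interpretability idea above (overlap between a saliency map and expert-annotated "textbook" features) can be illustrated as below. The paper's exact metric definitions are not reproduced here; intersection-over-union of a thresholded saliency map against an expert mask is used as a stand-in overlap score.

```python
# Hedged sketch: overlap between a thresholded saliency map and an expert mask.
import numpy as np

def semantic_overlap(saliency, expert_mask, q=0.8):
    """IoU between the top-(1-q)-quantile region of a saliency map and an
    expert-annotated boolean mask of textbook features."""
    sal_mask = saliency >= np.quantile(saliency, q)
    inter = np.logical_and(sal_mask, expert_mask).sum()
    union = np.logical_or(sal_mask, expert_mask).sum()
    return inter / union if union else 0.0

# Toy example: saliency concentrated in the same corner the expert marked.
sal = np.zeros((8, 8)); sal[:4, :4] = 1.0
expert = np.zeros((8, 8), dtype=bool); expert[:4, :4] = True
score = semantic_overlap(sal, expert)
```

Computing such a score per image for each saliency method is what enables the case-by-case mask selection the abstract describes.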
{"title":"Applying machine learning for large scale field calibration of low-cost PM2.5 and PM10 air pollution sensors","authors":"Priscilla Adong, Engineer Bainomugisha, Deo Okure, Richard Sserunjogi","doi":"10.1002/ail2.76","DOIUrl":"10.1002/ail2.76","url":null,"abstract":"<p>Low-cost air quality monitoring networks can potentially increase the availability of high-resolution monitoring to inform analytic and evidence-informed approaches to better manage air quality. This is particularly relevant in low- and middle-income settings where access to traditional reference-grade monitoring networks remains a challenge. However, low-cost air quality sensors are affected by ambient conditions, which can lead to over- or underestimation of pollution concentrations, and thus require field calibration to improve their accuracy and reliability. In this paper, we demonstrate the feasibility of using machine learning methods for large-scale calibration of AirQo sensors, low-cost PM sensors custom-designed for and deployed in Sub-Saharan urban settings. The performance of various machine learning methods is assessed by comparing model-corrected PM from <i>k</i>-nearest neighbours, support vector regression, multivariate linear regression, ridge regression, lasso regression, elastic net regression, XGBoost, multilayer perceptron, random forest and gradient boosting with collocated reference PM concentrations from a Beta Attenuation Monitor (BAM). Overall, the random forest and lasso regression models were superior for PM<sub>2.5</sub> and PM<sub>10</sub> calibration, respectively. Employing the random forest model decreased the RMSE of raw data from 18.6 μg/m<sup>3</sup> to 7.2 μg/m<sup>3</sup> at an average BAM PM<sub>2.5</sub> concentration of 37.8 μg/m<sup>3</sup>, while the lasso regression model decreased the RMSE from 13.4 μg/m<sup>3</sup> to 7.9 μg/m<sup>3</sup> at an average BAM PM<sub>10</sub> concentration of 51.1 μg/m<sup>3</sup>. We validate our models through cross-unit and cross-site validation, allowing analysis of the consistency of AirQo devices. The resulting calibration models were deployed across the entire large-scale air quality monitoring network of over 120 AirQo devices, demonstrating the use of machine learning systems to address practical challenges in a developing-world setting.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.76","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48427411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
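The calibration setup described above amounts to fitting a regression from raw low-cost readings (plus ambient covariates) to collocated reference concentrations. The sketch below uses a random forest, as the abstract reports was best for PM2.5; the synthetic data and the humidity-dependent bias model are illustrative assumptions, not AirQo/BAM measurements.

```python
# Hedged sketch of low-cost sensor field calibration against a reference.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 2000
humidity = rng.uniform(30, 95, n)                    # % relative humidity
temp = rng.uniform(15, 35, n)                        # deg C
true_pm25 = rng.gamma(shape=4.0, scale=9.0, size=n)  # "BAM" reference, ug/m3

# Simulated low-cost sensor: humidity-dependent overestimation plus noise.
raw_pm25 = true_pm25 * (1 + 0.01 * (humidity - 30)) + rng.normal(0, 3, n)

X = np.column_stack([raw_pm25, humidity, temp])
train, test = slice(0, 1500), slice(1500, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[train], true_pm25[train])

rmse_raw = mean_squared_error(true_pm25[test], raw_pm25[test]) ** 0.5
rmse_cal = mean_squared_error(true_pm25[test], rf.predict(X[test])) ** 0.5
```

The calibrated RMSE should come in well below the raw RMSE, mirroring the 18.6 to 7.2 μg/m³ improvement reported in the abstract.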
{"title":"Deep learning to predict power output from respiratory inductive plethysmography data","authors":"Erik Johannes B. L. G Husom, Pierre Bernabé, Sagar Sen","doi":"10.1002/ail2.65","DOIUrl":"10.1002/ail2.65","url":null,"abstract":"<p>Power output is one of the most accurate methods for measuring exercise intensity during outdoor endurance sports, since it records the actual effect of the work performed by the muscles over time. However, power meters are expensive and are limited to activity forms where it is possible to embed sensors in the propulsion system, such as in cycling. We investigate using breathing to estimate power output during exercise, in order to create a portable method for tracking physical effort that is applicable across many activity forms. Breathing can be quantified through respiratory inductive plethysmography (RIP), which entails recording the movement of the rib cage and abdomen caused by breathing, providing a portable, non-invasive device for measuring breathing. RIP signals, heart rate and power output were recorded during an N-of-1 study of a person performing a set of workouts on a stationary bike. The recorded data were used to build predictive models through deep learning algorithms. A convolutional neural network (CNN) trained on features derived from RIP signals and heart rate obtained a mean absolute percentage error (MAPE) of 0.20 (i.e., a 20% average error). The model showed a promising capability to estimate correct power levels and react to changes in power output, but its accuracy is significantly lower than that of cycling power meters.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.65","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46623297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
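The error metric reported above is MAPE; a quick sketch shows how a MAPE of 0.20 corresponds to a 20% average error on power predictions. The wattage values are synthetic, chosen only to make the arithmetic transparent.

```python
# Mean absolute percentage error, the metric quoted in the abstract above.
import numpy as np

def mape(y_true, y_pred):
    """MAPE as a fraction: mean of |true - pred| / |true|."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

actual = [200.0, 250.0, 300.0]     # watts (synthetic)
predicted = [160.0, 300.0, 240.0]  # each prediction off by exactly 20%
error = mape(actual, predicted)    # -> 0.2
```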
{"title":"Issue Information","authors":"","doi":"10.1002/ail2.25","DOIUrl":"https://doi.org/10.1002/ail2.25","url":null,"abstract":"","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41467571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Qualitative Investigation in Explainable Artificial Intelligence: Further Insight from Social Science","authors":"Adam J. Johs, Denise E. Agosto, Rosina O. Weber","doi":"10.1002/ail2.64","DOIUrl":"https://doi.org/10.1002/ail2.64","url":null,"abstract":"","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44715694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generative model-enhanced human motion prediction","authors":"Anthony Bourached, Ryan-Rhys Griffiths, Robert Gray, Ashwani Jha, Parashkev Nachev","doi":"10.1002/ail2.63","DOIUrl":"10.1002/ail2.63","url":null,"abstract":"<p>The task of predicting human motion is complicated by the natural heterogeneity and compositionality of actions, necessitating robustness to distributional shifts up to and including out-of-distribution (OoD) data. Here, we formulate a new OoD benchmark based on the Human3.6M and Carnegie Mellon University (CMU) motion capture datasets, and introduce a hybrid framework for hardening discriminative architectures to OoD failure by augmenting them with a generative model. When applied to current state-of-the-art discriminative models, we show that the proposed approach improves OoD robustness without sacrificing in-distribution performance, and can theoretically facilitate model interpretability. We suggest human motion predictors ought to be constructed with OoD challenges in mind, and provide an extensible general framework for hardening diverse discriminative architectures to extreme distributional shift. The code is available at: https://github.com/bouracha/OoDMotion.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.63","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41505789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning does not Replace Bayesian Modeling: Comparing research use via citation counting","authors":"B. Baldwin","doi":"10.1002/ail2.62","DOIUrl":"https://doi.org/10.1002/ail2.62","url":null,"abstract":"","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47852004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DARPA's explainable AI (XAI) program: A retrospective","authors":"David Gunning, Eric Vorm, Jennifer Yunyan Wang, Matt Turek","doi":"10.1002/ail2.61","DOIUrl":"10.1002/ail2.61","url":null,"abstract":"<p>A summary of the Defense Advanced Research Projects Agency's (DARPA) explainable artificial intelligence (XAI) program from the program managers' and evaluator's perspective.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.61","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48909197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}