Title: Comparing methods for assessing operator functional state
Authors: Olivier Gagnon, M. Parizeau, D. Lafond, J. Gagnon
DOI: 10.1109/COGSIMA.2016.7497792
Published in: 2016 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA)
Publication date: 2016-03-21
Citations: 4
Abstract
The assessment of an operator's functional state (i.e., the multidimensional pattern of human psychophysiological conditions that mediates performance) has great potential for increasing the safety and reliability of critical systems. However, live monitoring of functional state using physiological and behavioral data still faces several challenges before it can achieve the level of precision required in many operational contexts. One open question is the appropriate level of granularity for the models: is a general model sufficient, or should subject-specific models be trained to ensure high accuracy? Another challenge concerns the formalization of a valid ground truth for training classifiers, which is critical for producing models that are operationally relevant. This paper introduces the Decontextualized Dynamic Performance (DDP) metric, which allows models to be trained simultaneously on different tasks using machine learning algorithms. We report the performance of various classification algorithms at different levels of granularity, comparing a general model, task-specific models, and subject-specific models. Results show that the classification methods do not differ significantly in performance, and that the predictive accuracy of subject-specific and task-specific models was comparable to that of a general model. We also compared various time-window sizes for the new DDP metric and found that performance degraded as the time-window size increased.