Improving Pain Recognition Through Better Utilisation of Temporal Information.
Patrick Lucey, Jessica Howlett, Jeff Cohn, Simon Lucey, Sridha Sridharan, Zara Ambadar
International Conference on Auditory-Visual Speech Processing, 2008, pp. 167-172. Full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4180942/pdf/nihms99686.pdf
Automatically recognizing pain from video is a very useful application, as it has the potential to alert carers to patients who are in discomfort but would otherwise be unable to communicate that distress (e.g. young children, patients in postoperative care, etc.). In previous work [1], a "pain/no-pain" system was developed which used an AAM-SVM approach to good effect. However, as with any task involving a large amount of video data, memory constraints need to be respected; in the previous work this was done by compressing the temporal signal using K-means clustering in the training phase. In visual speech recognition, it is well known that the dynamics of the signal play a vital role in recognition. As pain recognition is very similar to visual speech recognition (i.e. recognising visual facial actions), it is our belief that compressing the temporal signal reduces the likelihood of accurately recognising pain. In this paper, we show that by compressing the spatial signal instead of the temporal signal, we achieve better pain recognition. Our results show the importance of the temporal signal in recognizing pain; however, we also highlight some problems associated with this approach due to the randomness of a patient's facial actions.
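To make the contrast concrete, the following is a minimal sketch (not taken from the paper) of the two compression strategies the abstract distinguishes, assuming each video is represented as a (frames x features) array of AAM parameters. The function names, the choice of PCA as the spatial-compression method, and all parameter values are illustrative assumptions only.

```python
# Hypothetical sketch: temporal vs. spatial compression of per-frame AAM features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def compress_temporal(video_feats: np.ndarray, n_clusters: int = 20) -> np.ndarray:
    """Reduce the number of frames by K-means clustering and keep only the
    cluster centres; the temporal ordering (dynamics) of the signal is lost."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(video_feats)
    return km.cluster_centers_              # shape: (n_clusters, n_features)

def compress_spatial(video_feats: np.ndarray, n_components: int = 10) -> np.ndarray:
    """Reduce the feature dimension of every frame (here via PCA, as an
    illustrative choice) while keeping all frames, preserving the dynamics."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(video_feats)   # shape: (n_frames, n_components)

# Toy example: a 300-frame "video", each frame described by 68 AAM parameters.
video = np.random.rand(300, 68)
print(compress_temporal(video).shape)  # (20, 68)  -- dynamics discarded
print(compress_spatial(video).shape)   # (300, 10) -- dynamics retained
```

The sketch only illustrates the trade-off the abstract argues about: temporal compression shrinks memory by discarding frame ordering, whereas spatial compression keeps every frame (and hence the dynamics) but with fewer features per frame.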