Tapestry of Time and Actions: Modeling Human Activity Sequences using Temporal Point Process Flows
Vinayak Gupta, Srikanta Bedathur
ACM Transactions on Intelligent Systems and Technology · DOI: 10.1145/3650045 · Published 2024-02-29 (Journal Article)
Impact Factor: 7.2 · JCR Q1 (Computer Science, Artificial Intelligence)
Human beings engage in a vast range of activities and tasks that demonstrate their ability to adapt to different scenarios. These activities range from the simplest daily routines, such as walking and sitting, to complex multi-step endeavors such as cooking a four-course meal. Any human activity can be represented as a temporal sequence of actions performed to achieve a certain goal. Unlike time-series datasets extracted from electronics or machines, these action sequences are highly disparate in nature: the time needed to finish a sequence of actions can vary across persons. Understanding the dynamics of these sequences is therefore essential for many downstream tasks such as activity-length prediction, goal prediction, and next-action recommendation. Existing neural network-based approaches that learn a continuous-time activity sequence (CTAS) either require visual data or are designed for a single task, i.e., they are limited to next-action or goal prediction. In this paper, we present ProActive, a neural marked temporal point process (MTPP) framework for modeling the continuous-time distribution of actions in an activity sequence while simultaneously addressing three high-impact problems: next-action prediction, sequence-goal prediction, and end-to-end sequence generation. Specifically, we use a self-attention module with temporal normalizing flows to model the influence of, and the inter-arrival times between, actions in a sequence. Moreover, for time-sensitive prediction, we perform early detection of the sequence goal via a constrained margin-based optimization procedure, which in turn allows ProActive to predict the sequence goal from a limited number of observed actions. In addition, we propose ProActive++, an extension of the ProActive model that can handle variations in the order of actions, i.e., different ways of achieving a given goal. We demonstrate that this variant can learn the order in which an actor prefers to perform their actions. Extensive experiments on sequences derived from three activity recognition datasets show that ProActive and ProActive++ significantly outperform the state of the art in action and goal prediction, and present the first-ever application of end-to-end action sequence generation.
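The abstract describes modeling the inter-arrival times between actions with temporal normalizing flows. As a rough illustration of that underlying idea only (not the paper's actual architecture, which couples the flow with a self-attention encoder), the sketch below computes the log-likelihood of inter-arrival times under a single affine flow in log-space with a standard-normal base density; the parameter names `mu` and `sigma` are illustrative assumptions, not from the paper.

```python
import numpy as np

def log_likelihood(taus, mu=0.0, sigma=1.0):
    """Log-density of inter-arrival times tau under the flow
    tau = exp(sigma * z + mu), with base density z ~ N(0, 1),
    computed via the change-of-variables formula."""
    taus = np.asarray(taus, dtype=float)
    z = (np.log(taus) - mu) / sigma                 # inverse flow
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))    # N(0, 1) log-density at z
    log_det = -np.log(sigma) - np.log(taus)         # log |dz / dtau|
    return log_base + log_det

# Example: score three observed gaps (in arbitrary time units)
taus = np.array([0.5, 1.2, 2.0])
scores = log_likelihood(taus)
```

In a full MTPP model, `mu` and `sigma` would be produced per event by the sequence encoder (conditioning the flow on history), and training would maximize the summed log-likelihood over observed sequences.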
Journal description:
ACM Transactions on Intelligent Systems and Technology is a scholarly journal that publishes the highest quality papers on intelligent systems, applicable algorithms and technology with a multi-disciplinary perspective. An intelligent system is one that uses artificial intelligence (AI) techniques to offer important services (e.g., as a component of a larger system) to allow integrated systems to perceive, reason, learn, and act intelligently in the real world.
ACM TIST is published bimonthly (six issues a year). Each issue has 8-11 regular papers, with around 20 published journal pages or 10,000 words per paper. Additional references, proofs, graphs, or detailed experimental results can be submitted as a separate appendix, while excessively lengthy papers will be rejected automatically. Authors can include online-only appendices for additional content of their published papers and are encouraged to share their code and/or data with other readers.