A novel user scenario and behavior sequence recognition approach based on vision-context fusion architecture
Wenyu Yuan, Danni Chang, Chenlu Mao, Luyao Wang, Ke Ren, Ting Han
Advanced Engineering Informatics, Volume 65, Article 103161 (published 2025-02-06)
DOI: 10.1016/j.aei.2025.103161
URL: https://www.sciencedirect.com/science/article/pii/S1474034625000540
Citations: 0
Abstract
Understanding user scenarios and behaviors is essential for the development of human-centered intelligent service systems. However, the presence of cluttered objects, uncertain human behaviors, and overlapping timelines in daily-life scenarios complicates the problem of scenario understanding. This paper addresses the challenges of identifying and predicting user scenario and behavior sequences through a multimodal data fusion approach, focusing on the integration of visual and environmental data to capture subtle scenario and behavioral features.
For this purpose, a novel Vision-Context Fusion Scenario Recognition (VCFSR) approach was proposed, encompassing three stages. First, four categories of context data related to home scenarios were acquired: physical context, time context, user context, and inferred context. Second, scenarios were represented as multidimensional data relationships through modeling technologies. Third, a scenario recognition model was developed, comprising context feature processing, visual feature processing, and multimodal feature fusion. For illustration, a smart home environment was built, and twenty-six participants were recruited to perform various home activities. Integrated sensors were used to collect environmental context data while video data were captured simultaneously; together these form a multimodal dataset. Results demonstrated that the VCFSR model achieved an average accuracy of 98.1%, outperforming traditional machine learning models such as decision trees and support vector machines. The method was then employed for fine-grained human behavior sequence prediction tasks, showing good performance in predicting behavior sequences across all scenarios constructed in this study. Furthermore, ablation experiments revealed that the multimodal feature fusion method increased the average accuracy by at least 1.8% compared with single-modality data-driven methods.
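The abstract describes the recognition model only at the block level: a context branch, a visual branch, and a multimodal fusion step. The sketch below shows one plausible wiring of such a late-fusion classifier in PyTorch, encoding a video frame with a ResNet-18 backbone and a flat context vector (physical, time, user, and inferred context features concatenated) with a small MLP, then classifying the fused embedding into scenario labels. It is not the authors' implementation; the backbone choice, layer sizes, 16-dimensional context vector, and seven scenario classes are all illustrative assumptions.

# A minimal sketch of a vision-context late-fusion classifier, not the
# paper's implementation: the ResNet-18 backbone, layer sizes, 16-d
# context vector, and 7 scenario classes are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class VisionContextFusionNet(nn.Module):
    """Encodes a video frame and a sensor-context vector separately,
    then concatenates both embeddings and classifies the scenario."""

    def __init__(self, num_context_features: int, num_scenarios: int):
        super().__init__()
        # Visual branch: ResNet-18 with its classification head removed,
        # leaving 512-d frame embeddings (use pretrained weights in practice).
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()
        self.visual_branch = backbone

        # Context branch: MLP over the concatenated physical, time, user,
        # and inferred context features named in the abstract.
        self.context_branch = nn.Sequential(
            nn.Linear(num_context_features, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
        )

        # Fusion head: concatenate the two embeddings, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(512 + 64, 128),
            nn.ReLU(),
            nn.Linear(128, num_scenarios),
        )

    def forward(self, frame: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        v = self.visual_branch(frame)     # (B, 512) visual embedding
        c = self.context_branch(context)  # (B, 64) context embedding
        return self.classifier(torch.cat([v, c], dim=1))


# Usage: a batch of two 224x224 RGB frames plus 16-d context vectors.
model = VisionContextFusionNet(num_context_features=16, num_scenarios=7)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 7])

Concatenation-based late fusion is only one reading of the abstract's "multimodal feature fusion" stage; attention-based or intermediate fusion layers are common alternatives, and the reported 1.8% ablation gain refers to the paper's own fusion design.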
This novel approach to user behavior modeling simultaneously handles the relationship threads across scenarios and the rich details provided by visual data, paving the way for advanced intelligent services in complex interactive environments such as smart homes and hospitals.
Journal Introduction:
Advanced Engineering Informatics is an international journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific base for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitative and quantitative. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus, and INSPEC.