A novel user scenario and behavior sequence recognition approach based on vision-context fusion architecture

IF: 8.0 · CAS Tier 1 (Engineering & Technology) · JCR Q1 (Computer Science, Artificial Intelligence)
Wenyu Yuan, Danni Chang, Chenlu Mao, Luyao Wang, Ke Ren, Ting Han
DOI: 10.1016/j.aei.2025.103161
Journal: Advanced Engineering Informatics, Vol. 65, Article 103161
Published: 2025-02-06
Citations: 0

Abstract

Understanding user scenario and behavior is essential for the development of human-centered intelligent service systems. However, the presence of cluttered objects, uncertain human behaviors, and overlapping timelines in daily life scenarios complicates the problem of scenario understanding. This paper aims to address the challenges of identifying and predicting user scenario and behavior sequences through a multimodal data fusion approach, focusing on the integration of visual and environmental data to capture subtle scenario and behavioral features.
For this purpose, a novel Vision-Context Fusion Scenario Recognition (VCFSR) approach was proposed, encompassing three stages. First, four categories of context data related to home scenarios were acquired: physical context, time context, user context, and inferred context. Second, scenarios were represented as multidimensional data relationships through modeling technologies. Third, a scenario recognition model was developed, comprising context feature processing, visual feature handling, and multimodal feature fusion. For illustration, a smart home environment was built, and twenty-six participants were recruited to perform various home activities. Integrated sensors were used to collect environmental context data, and video data was captured simultaneously; together these form a multimodal dataset. Results demonstrated that the VCFSR model achieved an average accuracy of 98.1%, outperforming traditional machine learning models such as decision trees and support vector machines. This method was then employed for fine-grained human behavior sequence prediction tasks, showing good performance in predicting behavior sequences across all scenarios constructed in this study. Furthermore, ablation experiments revealed that the multimodal feature fusion method increased average accuracy by at least 1.8% compared with single-modality data-driven methods.
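The four context categories named above (physical, time, user, and inferred context) suggest one multimodal record per sampling instant. The sketch below is purely illustrative — the field names and types are assumptions, not the paper's actual data schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ContextRecord:
    """One context sample per time step; all fields are hypothetical examples."""
    # Physical context: ambient readings from integrated home sensors (assumed)
    temperature_c: float
    illuminance_lux: float
    # Time context: when the sample was taken
    timestamp: datetime
    # User context: e.g. coarse location of the occupant in the home
    user_room: str
    # Inferred context: a derived attribute, e.g. an activity label
    # estimated from prior observations
    inferred_activity: str

    def as_feature_vector(self) -> list:
        """Flatten the record into a mixed feature list for a downstream model."""
        return [self.temperature_c, self.illuminance_lux,
                self.timestamp.hour, self.user_room, self.inferred_activity]
```

A record like `ContextRecord(21.5, 300.0, datetime(2025, 2, 6, 8, 30), "kitchen", "cooking")` would then be flattened and encoded before entering the context-feature branch of a recognition model.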
This novel approach to user behavior modeling simultaneously handles the relationship threads across scenarios and the rich details provided by visual data, paving the way for advanced intelligent services in complex interactive environments such as smart homes and hospitals.
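The abstract does not specify the VCFSR fusion architecture itself. As a minimal illustration of feature-level fusion — concatenating a context feature vector with a visual embedding and scoring scenario classes with a toy linear layer — the sketch below uses invented weights and dimensions; it is not the paper's model:

```python
import math
from typing import List

def concat_fusion(context_feat: List[float], visual_feat: List[float]) -> List[float]:
    """Feature-level fusion: simple concatenation of the two modality vectors."""
    return list(context_feat) + list(visual_feat)

def softmax(logits: List[float]) -> List[float]:
    """Numerically stable softmax over class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(fused: List[float],
             weights: List[List[float]],
             bias: List[float]) -> int:
    """Toy linear classifier over the fused vector; returns the argmax class."""
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, bias)]
    probs = softmax(logits)
    return probs.index(max(probs))
```

The ablation comparison reported above maps naturally onto this sketch: a single-modality baseline simply feeds `context_feat` or `visual_feat` alone to the classifier instead of the concatenated vector.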
Source journal
Advanced Engineering Informatics
Category: Engineering & Technology – Engineering, Multidisciplinary
CiteScore: 12.40
Self-citation rate: 18.20%
Annual publications: 292
Review time: 45 days
Journal description: Advanced Engineering Informatics is an international journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific base for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitative and quantitative. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus, and INSPEC.