Modelling object mask interaction for compositional action recognition

IF 4.6 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Xinya Li, Zhongwei Shen, Benlian Xu, Rongchang Li, Mingli Lu, Jinliang Cong, Longxin Zhang
{"title":"Modelling object mask interaction for compositional action recognition","authors":"Xinya Li, Zhongwei Shen, Benlian Xu, Rongchang Li, Mingli Lu, Jinliang Cong, Longxin Zhang","doi":"10.1007/s40747-025-01823-x","DOIUrl":null,"url":null,"abstract":"<p>Human actions can be abstracted as interactions between humans and objects. The recently proposed task of compositional action recognition emphasizes the independence and combinability of verbs (actions) and nouns (humans or objects) constituting human actions. Nonetheless, most traditional appearance-based action recognition methods usually extract spatial-temporal features from input videos concurrently to understand actions. This approach tends to excessively rely on overall appearance features and lacks precise modelling of interactions between objects, often leading to the neglect of the actions themselves. Consequently, the biases introduced by the appearance prevent the model from effectively generalizing to unseen combinations of actions and objects. To address this issue, we propose a method that explicitly models the object interaction path, aiming to capture interactions between humans and objects. The advantage of this approach is that these interactions are not affected by the object or environmental appearance bias, providing additional clues for appearance-based action recognition methods. Our method can easily be combined with any appearance-based visual encoder, significantly improving the compositional generalization ability of action recognition algorithms. Extensive experimental results on the Something-Else dataset and the IKEA-Assembly dataset demonstrate the effectiveness of our approach.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"91 1","pages":""},"PeriodicalIF":4.6000,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-025-01823-x","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Human actions can be abstracted as interactions between humans and objects. The recently proposed task of compositional action recognition emphasizes the independence and combinability of the verbs (actions) and nouns (humans or objects) that constitute human actions. However, most traditional appearance-based action recognition methods extract spatio-temporal features from input videos as a whole to understand actions. This tends to over-rely on global appearance features and lacks precise modelling of the interactions between objects, often causing the actions themselves to be neglected. Consequently, the biases introduced by appearance prevent the model from generalizing effectively to unseen combinations of actions and objects. To address this issue, we propose a method that explicitly models the object interaction path, aiming to capture the interactions between humans and objects. Because these interactions are unaffected by object or environmental appearance bias, they provide additional cues for appearance-based action recognition methods. Our method can easily be combined with any appearance-based visual encoder, significantly improving the compositional generalization ability of action recognition algorithms. Extensive experiments on the Something-Else and IKEA-Assembly datasets demonstrate the effectiveness of our approach.
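The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the general idea it describes — an interaction path built purely from object geometry (e.g., box or mask coordinates) rather than pixels, so it carries no appearance bias — and not the authors' actual architecture. The class name `InteractionPathEncoder`, the use of per-frame bounding boxes, the small transformer encoder, and the 174-class head (matching Something-Something) are all our own assumptions.

```python
# Hypothetical sketch (not the paper's implementation): an appearance-free
# "interaction path" encoder. Each object in each frame is described only by
# its normalized box coordinates plus an identity embedding, so the features
# encode geometry and motion but contain no appearance information.
import torch
import torch.nn as nn

class InteractionPathEncoder(nn.Module):  # name is an assumption
    def __init__(self, num_objects=4, dim=128, num_classes=174):
        super().__init__()
        self.coord_embed = nn.Linear(4, dim)             # (x1, y1, x2, y2) -> feature
        self.obj_embed = nn.Embedding(num_objects, dim)  # which object a token belongs to
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, boxes):
        # boxes: (B, T, N, 4) normalized coordinates of N objects over T frames
        B, T, N, _ = boxes.shape
        obj_ids = torch.arange(N, device=boxes.device)
        tokens = self.coord_embed(boxes) + self.obj_embed(obj_ids)  # (B, T, N, dim)
        tokens = tokens.reshape(B, T * N, -1)   # one token per object-frame pair
        feats = self.encoder(tokens)            # self-attention models pairwise interactions
        clip_feat = feats.mean(dim=1)           # pool over time and objects
        return self.head(clip_feat)             # action logits

# Toy usage: 2 clips, 8 frames, 4 tracked objects each.
model = InteractionPathEncoder()
logits = model(torch.rand(2, 8, 4, 4))
print(logits.shape)  # torch.Size([2, 174])
```

In a design like this, the geometry-only logits could be late-fused with those of any appearance-based backbone, which is consistent with the abstract's claim that the interaction cues complement, rather than replace, appearance features.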

Source journal: Complex & Intelligent Systems (Computer Science, Artificial Intelligence)
CiteScore: 9.60
Self-citation rate: 10.30%
Articles per year: 297
Journal description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.