Online Fall Detection Using Attended Memory Reference Network

Sunah Min, Jinyoung Moon
{"title":"Online Fall Detection Using Attended Memory Reference Network","authors":"Sunah Min, Jinyoung Moon","doi":"10.1109/ICAIIC51459.2021.9415258","DOIUrl":null,"url":null,"abstract":"Falls cause serious injuries that make daily activities difficult; therefore, they are a common target action for intelligent monitoring systems. Existing vision-based methods for fall actions classify well-trimmed short videos as either fall or non-fall actions. However, critical limitations exist when applying these methods to untrimmed videos including fall and non-fall actions as well as background. These methods can determine whether there is a fall or not for an input video with many frames related to either fall or non-fall actions. In addition, these methods require offline processing for a whole video as input, while there is strong demand for quicker responses to fall injuries provided by online fall detection. To this end, we introduce an attended memory reference network that detects a current action online for a given video segment consisting of past and current frames. To integrate contextual information used for detecting a current action, we propose a new recurrent unit, called an attended memory reference unit, which accumulates input information based on visual memory attended by current information. In an experiment using a fall detection dataset obtained from the abnormal event detection dataset for CCTV videos publicized by AI Hub, the proposed method outperforms state-of-the-art online action detection methods. By conducting ablation studies, we also demonstrate the effectiveness of the proposed modules related to the attended visual memory.","PeriodicalId":432977,"journal":{"name":"2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAIIC51459.2021.9415258","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Falls cause serious injuries that make daily activities difficult; therefore, they are a common target action for intelligent monitoring systems. Existing vision-based methods for fall detection classify well-trimmed short videos as either fall or non-fall actions. However, these methods face critical limitations when applied to untrimmed videos that contain fall and non-fall actions as well as background. They can only determine whether or not a fall occurs in an input video whose frames are largely related to either fall or non-fall actions. In addition, they require offline processing of the whole video as input, whereas online fall detection is in strong demand to enable quicker responses to fall injuries. To this end, we introduce an attended memory reference network that detects the current action online from a given video segment consisting of past and current frames. To integrate the contextual information used for detecting the current action, we propose a new recurrent unit, called the attended memory reference unit, which accumulates input information based on visual memory attended by the current information. In an experiment using a fall detection dataset derived from the abnormal event detection dataset for CCTV videos released by AI Hub, the proposed method outperforms state-of-the-art online action detection methods. Through ablation studies, we also demonstrate the effectiveness of the proposed modules related to the attended visual memory.
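The abstract describes the attended memory reference unit only at a high level: a recurrent unit that accumulates input information based on visual memory attended by the current information. Below is a minimal sketch, in PyTorch, of one plausible reading of that description: the current frame feature queries an attention over a memory of past frame features, and the attended context is folded into the hidden state. The class names, dimensions, and GRU-based state update are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of an attention-over-visual-memory recurrent cell (assumed design,
# not the paper's exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttendedMemoryCell(nn.Module):
    """Recurrent cell that attends over a bank of past frame features (the
    "visual memory") with the current frame feature as the query, then folds
    the attended context into its hidden state."""

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(feat_dim, hidden_dim)   # project current frame to a query
        self.key = nn.Linear(feat_dim, hidden_dim)     # project memory entries to keys
        self.value = nn.Linear(feat_dim, hidden_dim)   # project memory entries to values
        self.rnn = nn.GRUCell(feat_dim + hidden_dim, hidden_dim)  # state update (assumed GRU)

    def forward(self, x_t, memory, h_prev):
        # x_t:    (B, feat_dim)     current frame feature
        # memory: (B, T, feat_dim)  features of past frames in the segment
        # h_prev: (B, hidden_dim)   previous hidden state
        q = self.query(x_t).unsqueeze(1)                       # (B, 1, H)
        k = self.key(memory)                                   # (B, T, H)
        v = self.value(memory)                                 # (B, T, H)
        scores = torch.bmm(q, k.transpose(1, 2)) / k.size(-1) ** 0.5
        attn = F.softmax(scores, dim=-1)                       # (B, 1, T) weights over memory
        context = torch.bmm(attn, v).squeeze(1)                # (B, H) attended memory summary
        h_t = self.rnn(torch.cat([x_t, context], dim=-1), h_prev)
        return h_t


# Usage sketch: run the cell frame by frame over a segment of past and current
# frames, then classify the final state (e.g., fall / non-fall / background).
if __name__ == "__main__":
    B, T, D, H = 2, 16, 512, 256
    cell, head = AttendedMemoryCell(D, H), nn.Linear(H, 3)
    frames = torch.randn(B, T, D)                      # per-frame features (assumed CNN output)
    h = torch.zeros(B, H)
    for t in range(T):
        h = cell(frames[:, t], frames[:, : t + 1], h)  # memory = frames seen so far
    print(head(h).softmax(dim=-1))                     # class probabilities at the current frame
```

In this reading, the online setting is reflected by updating the state one frame at a time and attending only over frames already observed, so a prediction is available at every step without seeing the rest of the video.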