Deep learning semantic segmentation for indoor terrain extraction: Toward better informing free-living wearable gait assessment

Jason Moore, S. Stuart, R. Walker, Peter McMeekin, F. Young, A. Godfrey

2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)
Published: 2022-09-27
DOI: 10.1109/BSN56160.2022.9928505
Citations: 3
Abstract
Contemporary approaches to gait assessment use wearables within free-living environments to capture habitual information, which is more informative than data captured in the lab. Wearables range from inertial to camera-based technologies, but pragmatic challenges exist, such as the analysis of big data from heterogeneous environments. For example, wearable camera data often requires manual, time-consuming, and subjective contextualisation, such as labelling of terrain type. There is a need for automated approaches such as those offered by artificial intelligence (AI) based methods. This pilot study investigates multiple segmentation models and proposes use of the PSPNet deep learning network to automate a binary indoor floor segmentation mask for use with wearable camera-based data (i.e., video frames). To inform the development of the AI method, a unique approach of mining heterogeneous data from a video-sharing platform (YouTube) was adopted to provide independent training data. The dataset contains 1973 image frames and accompanying segmentation masks. When trained on this dataset, the proposed model achieved an Intersection over Union (IoU) score of 0.73 over 25 epochs in complex environments. The proposed method will inform future work within the field of habitual free-living gait assessment, providing automated contextual information when used in conjunction with wearable inertial-derived gait characteristics.

Clinical Relevance: Processes developed here will aid automated video-based free-living gait assessment.
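The evaluation metric reported above, Intersection over Union, compares a predicted binary floor mask against a ground-truth mask: the number of pixels where both masks are positive, divided by the number of pixels where either is. A minimal sketch of that computation is below; the toy 4x4 masks are illustrative only and are not drawn from the paper's dataset.

```python
def binary_iou(pred, target):
    """Intersection over Union for two binary masks (nested lists of 0/1)."""
    inter = 0  # pixels positive in both masks
    union = 0  # pixels positive in either mask
    for prow, trow in zip(pred, target):
        for p, t in zip(prow, trow):
            inter += 1 if (p and t) else 0
            union += 1 if (p or t) else 0
    return inter / union if union else 1.0  # two empty masks agree perfectly


# Toy 4x4 masks: 1 = floor pixel, 0 = background.
pred = [[0, 1, 1, 1],
        [0, 1, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 0, 0]]
target = [[1, 1, 1, 1],
          [0, 1, 1, 1],
          [0, 1, 1, 1],
          [0, 0, 0, 0]]

print(binary_iou(pred, target))  # intersection 8 / union 10 -> 0.8
```

A score of 0.73, as reported for the proposed PSPNet model, means roughly three quarters of the combined predicted-plus-true floor area was correctly overlapped, which is a reasonable result for heterogeneous indoor scenes.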