Investigating Measurement Equivalence of Smartphone Sensor-Based Assessments: Remote, Digital, Bring-Your-Own-Device Study.

IF 5.8 | CAS Tier 2 (Medicine) | JCR Q1 | Health Care Sciences & Services
Lito Kriara, Frank Dondelinger, Luca Capezzuto, Corrado Bernasconi, Florian Lipsmeier, Adriano Galati, Michael Lindemann
Journal of Medical Internet Research, vol. 27, e63090. DOI: 10.2196/63090. Published 2025-04-03 (Journal Article). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12006779/pdf/
Citations: 0

Abstract

Investigating Measurement Equivalence of Smartphone Sensor-Based Assessments: Remote, Digital, Bring-Your-Own-Device Study.

Background: Floodlight Open is a global, open-access, fully remote, digital-only study designed to understand the drivers and barriers in deployment and persistence of use of a smartphone app for measuring functional impairment in a naturalistic setting and broad study population.

Objective: This study aims to assess measurement equivalence properties of the Floodlight Open app across operating system (OS) platforms, OS versions, and smartphone device models.

Methods: Floodlight Open enrolled adult participants with and without self-declared multiple sclerosis (MS). The study used the Floodlight Open app, a "bring-your-own-device" (BYOD) solution that remotely measured MS-related functional ability via smartphone sensor-based active tests. Measurement equivalence was assessed in all evaluable participants by comparing the performance on the 6 active tests (ie, tests requiring active input from the user) included in the app across OS platforms (iOS vs Android), OS versions (iOS versions 11-15 and separately Android versions 8-10; comparing each OS version with the other OS versions pooled together), and device models (comparing each device model with all remaining device models pooled together). The tests in scope were Information Processing Speed, Information Processing Speed Digit-Digit (measuring reaction speed), Pinching Test (PT), Static Balance Test, U-Turn Test, and 2-Minute Walk Test. Group differences were assessed by permutation test for the mean difference after adjusting for age, sex, and self-declared MS disease status.
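The abstract does not spell out the permutation procedure; a minimal sketch of one common variant, assuming the outcome is first residualized on the covariates (age, sex, disease status) by ordinary least squares and the group labels are then permuted to build the null distribution (the function name and residualization step are illustrative assumptions, not the study's documented code):

```python
import numpy as np

def permutation_test_adjusted(y, group, covariates, n_perm=5000, seed=0):
    """Two-sided permutation test for the covariate-adjusted mean
    difference between two groups (labels 0 and 1).

    Hypothetical sketch: residualize y on an intercept plus the
    covariates, then permute group labels over the residuals.
    """
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta  # covariate-adjusted outcome
    observed = resid[group == 1].mean() - resid[group == 0].mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(group)
        null[i] = resid[perm == 1].mean() - resid[perm == 0].mean()
    # +1 correction so the p value is never exactly zero
    p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
    return observed, p
```

Because the null distribution is built from the data themselves, this test makes no normality assumption about the adjusted test scores.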

Results: Overall, 1976 participants using 206 different device models were included in the analysis. Differences in test performance between subgroups were small to very small, with percent differences generally being ≤5% on the Information Processing Speed, Information Processing Speed Digit-Digit, U-Turn Test, and 2-Minute Walk Test; <20% on the PT; and <30% on the Static Balance Test. No statistically significant differences were observed between OS platforms other than on the PT (P<.001). Similarly, differences across iOS or Android versions were nonsignificant after correcting for multiple comparisons using false discovery rate correction (all adjusted P>.05). Comparing the different device models revealed a statistically significant difference only on the PT for 4 out of 17 models (adjusted P≤.001-.03).
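The false discovery rate correction mentioned above is conventionally the Benjamini-Hochberg step-up procedure; a self-contained sketch of how such adjusted p values are computed (equivalent in effect to statsmodels' `multipletests(..., method="fdr_bh")`, though the study's exact tooling is not stated):

```python
import numpy as np

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg FDR-adjusted p values.

    Each sorted p value p_(k) is scaled by m/k, then adjusted values
    are made monotone from the largest p value downwards and capped
    at 1, and finally mapped back to the original order.
    """
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity: an adjusted p may not exceed any adjusted
    # p of a larger raw p value
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out
```

A comparison is declared nonsignificant, as in the abstract, when its adjusted p value exceeds .05.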

Conclusions: Consistent with the hypothesis that smartphone sensor-based measurements obtained with different devices are equivalent, this study showed no evidence of a systematic lack of measurement equivalence across OS platforms, OS versions, and device models on 6 active tests included in the Floodlight Open app. These results are compatible with the use of smartphone-based tests in a bring-your-own-device setting, but more formal tests of equivalence would be needed.

Source journal: Journal of Medical Internet Research. CiteScore: 14.40; self-citation rate: 5.40%; publication volume: 654; review time: 1 month.
Journal overview: The Journal of Medical Internet Research (JMIR) is a highly respected publication in the field of health informatics and health services. Founded in 1999, JMIR has been a pioneer in the field for over two decades. The journal focuses on digital health, data science, health informatics, and emerging technologies for health, medicine, and biomedical research. It ranks in the first quartile (Q1) by Impact Factor and holds the #1 position on Google Scholar within the "Medical Informatics" discipline.