{"title":"人群感应评估中模拟参与者行为的挑战","authors":"Christine Bassem","doi":"10.1109/CCNC51664.2024.10454892","DOIUrl":null,"url":null,"abstract":"In crowdsensing platforms, algorithms and models for task allocation play a critical role in shaping user behaviors, engagement levels, the quality of the collected data, and the performance of the platform as a whole. Regardless of the sensing model, task allocation mechanisms are difficult to evaluate and benchmark. In contrast to evaluating deployments of crowd-sensing platforms with real crowds, they are often evaluated via simulators that are incapable of modeling the complexities of human behavior, specifically in terms of their commitment to the platform and quality of sensing, but their strength is the ability to rapidly experiment with multiple algorithms. In this paper, we abstract the general characteristics of participant behaviors in crowdsensing, and implement these characteristics within the TACSim simulation framework. Further exemplifying the extendability power of that simulation framework, and the benefits it can offer the crowdsensing community.","PeriodicalId":518411,"journal":{"name":"2024 IEEE 21st Consumer Communications & Networking Conference (CCNC)","volume":"106 4","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Challenges of Modeling Participant Behavior in CrowdSensing Evaluation\",\"authors\":\"Christine Bassem\",\"doi\":\"10.1109/CCNC51664.2024.10454892\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In crowdsensing platforms, algorithms and models for task allocation play a critical role in shaping user behaviors, engagement levels, the quality of the collected data, and the performance of the platform as a whole. Regardless of the sensing model, task allocation mechanisms are difficult to evaluate and benchmark. In contrast to evaluating deployments of crowd-sensing platforms with real crowds, they are often evaluated via simulators that are incapable of modeling the complexities of human behavior, specifically in terms of their commitment to the platform and quality of sensing, but their strength is the ability to rapidly experiment with multiple algorithms. In this paper, we abstract the general characteristics of participant behaviors in crowdsensing, and implement these characteristics within the TACSim simulation framework. 
Further exemplifying the extendability power of that simulation framework, and the benefits it can offer the crowdsensing community.\",\"PeriodicalId\":518411,\"journal\":{\"name\":\"2024 IEEE 21st Consumer Communications & Networking Conference (CCNC)\",\"volume\":\"106 4\",\"pages\":\"1-6\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2024 IEEE 21st Consumer Communications & Networking Conference (CCNC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCNC51664.2024.10454892\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 IEEE 21st Consumer Communications & Networking Conference (CCNC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCNC51664.2024.10454892","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Challenges of Modeling Participant Behavior in CrowdSensing Evaluation
In crowdsensing platforms, algorithms and models for task allocation play a critical role in shaping user behaviors, engagement levels, the quality of the collected data, and the performance of the platform as a whole. Regardless of the sensing model, task allocation mechanisms are difficult to evaluate and benchmark. Unlike deployments evaluated with real crowds, crowdsensing platforms are often evaluated via simulators, whose strength is the ability to rapidly experiment with multiple algorithms but which cannot capture the complexities of human behavior, specifically participants' commitment to the platform and the quality of their sensing. In this paper, we abstract the general characteristics of participant behaviors in crowdsensing and implement these characteristics within the TACSim simulation framework, further exemplifying the extensibility of that framework and the benefits it can offer the crowdsensing community.
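To make the two behavioral dimensions named above concrete, the following is a minimal, hypothetical Python sketch of a participant model with a commitment level (probability of completing an assigned task) and a sensing-quality parameter (noise on reported readings), driven by a toy allocation loop. All names here (Participant, commitment, sensing_quality, simulate) are illustrative assumptions for exposition and are not drawn from TACSim's actual API.

    # Hypothetical sketch: participant commitment and sensing quality in a
    # crowdsensing simulation. Not TACSim code; names are illustrative only.
    import random
    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class Participant:
        """A simulated crowdsensing participant with probabilistic behavior."""
        participant_id: int
        commitment: float       # probability of completing an accepted task (0..1)
        sensing_quality: float  # std. dev. of noise added to the true reading

        def execute_task(self, ground_truth: float) -> Optional[float]:
            """Return a noisy reading, or None if the participant drops the task."""
            if random.random() > self.commitment:
                return None  # participant abandons the task
            return random.gauss(ground_truth, self.sensing_quality)


    def simulate(participants, ground_truth: float, rounds: int) -> None:
        """Assign one task per participant per round; report completion and error."""
        for r in range(rounds):
            readings = [p.execute_task(ground_truth) for p in participants]
            completed = [x for x in readings if x is not None]
            completion_rate = len(completed) / len(participants)
            mean_error = (
                sum(abs(x - ground_truth) for x in completed) / len(completed)
                if completed else float("nan")
            )
            print(f"round {r}: completion={completion_rate:.2f}, "
                  f"mean abs error={mean_error:.3f}")


    if __name__ == "__main__":
        random.seed(0)
        crowd = [Participant(i,
                             commitment=random.uniform(0.5, 1.0),
                             sensing_quality=random.uniform(0.1, 1.0))
                 for i in range(20)]
        simulate(crowd, ground_truth=25.0, rounds=3)

In a fuller behavior model, commitment and sensing quality would likely be abstracted further (e.g., varying with incentives, task load, or time), which is the kind of extensibility the abstract attributes to the simulation framework.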