Problem with Cross-Cultural Comparison of User-Generated Ratings on Mechanical Turk
Hao-Chuan Wang, Tau-Heng Yeo, Syavash Nobarany, Gary Hsieh
Proceedings of the Third International Symposium of Chinese CHI, published 2015-04-18. DOI: 10.1145/2739999.2740001
Citations: 5
Abstract
Many online services serve diverse populations spanning many countries and cultures. Some of these services rely on user-generated ratings to curate and filter information, or to inform other users. However, little is known about how cultural biases and cross-cultural differences affect such ratings. We studied how Indian and American workers on Mechanical Turk differ in their response styles by asking them to rate three products. We also explored several dimensions of cultural difference, including social orientation (individualism vs. collectivism), social desirability, and thinking style (holistic vs. analytic). We found that Indian workers tended to give higher ratings on all items, spanning both the product ratings and the various survey instruments. We discuss the implications for collecting ratings from culturally diverse populations, and for cross-cultural studies on Mechanical Turk.