Could vote buying be socially desirable? Exploratory analyses of a ‘failed’ list experiment

Authors: Sophia Hatz, Hanne Fjelde, David Randahl
Journal: Quality & Quantity
DOI: 10.1007/s11135-023-01740-6
Publication date: 2023-09-27
Publication type: Journal Article

Abstract: List experiments encourage survey respondents to report sensitive opinions they may prefer not to reveal. But studies sometimes find that respondents admit to sensitive opinions more readily when asked directly. This over-reporting is often viewed as a design failure, attributable to inattentiveness or other nonstrategic error. This paper conducts an exploratory analysis of one such ‘failed’ list experiment, which measured vote buying in the 2019 Nigerian presidential election. We take this opportunity to examine our assumptions about vote buying. Although vote buying is illegal and stigmatized in many countries, a significant literature links such exchanges to patron-client networks imbued with trust, reciprocity, and long-standing benefits, which might give individuals an incentive to claim that they were offered a vote-buying exchange. Submitting our data to a series of design tests, we find that over-reporting is strategic: respondents intentionally reveal vote buying, and those who do so were likely in fact offered a vote-buying exchange. Considering reasons for over-reporting such as social desirability and network benefits, and the strategic nature of over-reporting, we suggest that “design failure” is not the only possible conclusion to draw from unexpected list experiment results. With this paper we show that our theoretical assumptions about sensitivity bias affect the conclusions we can draw from a list experiment.
Journal description:
Quality and Quantity constitutes a point of reference for European and non-European scholars discussing methodological instruments for more rigorous scientific results in the social sciences. In the era of big data, the journal also provides a publication venue for data scientists interested in proposing new indicators to measure latent aspects of social, cultural, and political events. Rather than leaning towards one specific methodological school, the journal publishes papers that combine quantitative and qualitative methods. A key aim of the journal is to engage with methodological pluralism across research cultures. In this context, the journal is open to papers addressing the general logic of empirical research and the analysis of the validity and verification of social laws. The journal therefore accepts papers on science metrics, publication ethics, and related issues affecting methodological practices among researchers.
Quality and Quantity is an interdisciplinary journal that systematically connects disciplines such as the data and information sciences with the humanities and social sciences. The journal brings interesting methodological contributions to scholars worldwide in order to promote the scientific development of social research.