Jean-Charles Pillet, Kevin D. Carillo, Claudio Vitari, Federico Pigni
Journal: Information Systems Journal, Vol. 33, Issue 4, pp. 842-889
DOI: 10.1111/isj.12428
Publication date: 2023-02-09 (Journal Article)
Journal impact factor: 6.5 (JCR Q1, Information Science & Library Science)
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/isj.12428
Citation count: 3
Improving scale adaptation practices in information systems research: Development and validation of a cognitive validity assessment method
Scale adaptation, where authors alter the wording of an already published scale, is a deeply rooted social practice in IS research. This paper argues that the time is ripe to question this activity as well as the beliefs that have progressively formed around it. We identify and challenge five fallacious scale adaptation beliefs that hinder the development of more robust measure development norms. Contributing to this area of research, this paper offers a conceptual definition of the cognitive validity concept, defined as the extent to which a scale is free of problematic item characteristics (PICs) that bias the survey response process and subsequent empirical results. Building on this conceptualization effort, a new methodological process for assessing the cognitive validity of adapted IS measures is introduced. Through a series of three programmatic studies, we find converging evidence that the method can benefit the IS field by making the scale adaptation process more robust, transparent, and consistent. Along with the method, we introduce a new index that IS scholars can use to benchmark the cognitive quality of their scales against venerable IS measures. We discuss the implications of our work for IS research (including detailed implementation guidelines) and provide directions for future research on measurement in IS.
About the journal:
The Information Systems Journal (ISJ) is an international journal promoting the study of, and interest in, information systems. Articles are welcome on research, practice, experience, current issues and debates. The ISJ encourages submissions that reflect the wide and interdisciplinary nature of the subject, and articles that integrate technological disciplines with social, contextual and management issues, based on research using appropriate research methods. The ISJ has particularly built its reputation by publishing qualitative research, and it continues to welcome such papers. Quantitative research papers are also welcome, but they need to emphasise the context of the research and the theoretical and practical implications of their findings. The ISJ does not publish purely technical papers.