{"title":"Welcome to the Brave New World: Lay Definitions of AI at Work and in Daily Life","authors":"Wenbo Li, Shuning Lu, Shan Xu, Xia Zheng","doi":"10.1177/08944393251382233","DOIUrl":"https://doi.org/10.1177/08944393251382233","url":null,"abstract":"This study investigates individuals’ lay definitions—naïve mental representations—of artificial intelligence (AI). Two national surveys in the United States explored lay definitions of AI in the workplace (Study 1) and in everyday life (Study 2) using both open- and closed-ended questions. Open-ended responses were analyzed with natural language processing, and quantitative survey data identified factors associated with these definitions. Results show that conceptions of AI differed by context: workers emphasized efficiency and automation in the workplace, while the general public linked AI to diverse everyday technologies. Across both groups, conceptions remained nuanced yet limited. Sociodemographic factors and personality traits were related to sentiments expressed in definitions, and greater trust in AI predicted more positive sentiments. These findings underscore the need for targeted training and education to foster a more comprehensive public understanding of what AI is and what it can do across different contexts.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"42 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145154093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on False Information Detection Based on Herd Behavior From a Social Network Perspective","authors":"Tianya Cao, Shuang Li, Junjie Jia","doi":"10.1177/08944393251381801","DOIUrl":"https://doi.org/10.1177/08944393251381801","url":null,"abstract":"As social networks become ubiquitous, the rapid dissemination of false information poses a substantial threat to societal stability and public welfare. Although sociological and psychological studies have confirmed the significant role of herd behavior in the spread of false information, traditional detection methods struggle to address the dual challenges posed by decentralized communication modes and artificial intelligence-generated content, as they often overlook the psychological mechanisms at play within groups. This study proposes a multidimensional false information detection model, termed HBD-Net, based on herd behavior, to explore innovative methods for false information detection through the lens of herd behavior propagation mechanisms in social networks. By integrating multidimensional information such as the influence of opinion leaders, popular comments, and friends’ experiences, we construct a robust false information detection model. Experimental results demonstrate its superior performance on both the PolitiFact and GossipCop datasets, particularly excelling on the GossipCop dataset with an accuracy of 93.11%, significantly outperforming other baseline models.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"29 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145141517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conceptualizing, Assessing, and Improving the Quality of Digital Behavioral Data","authors":"Bernd Weiß, Heinz Leitgöb, Claudia Wagner","doi":"10.1177/08944393251367041","DOIUrl":"https://doi.org/10.1177/08944393251367041","url":null,"abstract":"The spread of modern digital technologies, such as social media online platforms, digital marketplaces, smartphones, and wearables, is increasingly shifting social, political, economic, cultural, and physiological processes into the digital space. Social actors using these technologies (directly and indirectly) leave a multitude of digital traces in many areas of life that sum up an enormous amount of data about human behavior and attitudes. This new data type, which we refer to as “digital behavioral data” (DBD), encompasses digital observations of human and algorithmic behavior, which are, amongst others, recorded by online platforms (e.g., Google, Facebook, or the World Wide Web) or sensors (e.g., smartphones, RFID sensors, satellites, or street view cameras). However, studying these social phenomena requires data that meets specific quality standards. While data quality frameworks—such as the Total Survey Error framework—have a long-standing tradition survey research, the scientific use of DBD introduces several entirely new challenges related to data quality. For example, most DBD are not generated for research purposes but are a side product of our daily activities. Hence, the data generation process is not based on elaborate research designs, which in turn may have profound implications for the validity of the conclusions drawn from the analysis of DBD. Furthermore, many forms of DBD lack well-established data models, measurement (error) theories, quality standards, and evaluation criteria. Therefore, this special issue addresses (i) the conceptualization of DBD quality, methodological innovations for its (ii) assessment, and (iii) improvement as well as their sophisticated empirical application.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"1 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145116340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging VLLMs for Visual Clustering: Image-to-Text Mapping Shows Increased Semantic Capabilities and Interpretability","authors":"Luigi Arminio, Matteo Magnani, Matías Piqueras, Luca Rossi, Alexandra Segerberg","doi":"10.1177/08944393251376703","DOIUrl":"https://doi.org/10.1177/08944393251376703","url":null,"abstract":"As visual content becomes increasingly prominent on social media, automated image categorization is vital for computational social science efforts to identify emerging visual themes and narratives in online debates. However, the methods based on convolutional neural networks (CNNs) currently used in the field are unable to fully capture the connotative meaning of images, and struggle to produce easily interpretable clusters. In response to these challenges, we test an approach that leverages the ability of Vision-and-Large-Language-Models (VLLMs) to generate image descriptions that incorporate connotative interpretations of the input images. In particular, we use a VLLM to generate connotative textual descriptions of a set of images related to climate debate, and cluster the images based on these textual descriptions. In parallel, we cluster the same images using a more traditional approach based on CNNs. In doing so, we compare the connotative semantic validity of clusters generated using VLLMs with those produced using CNNs, and assess their interpretability. The results show that the approach based on VLLMs greatly improves the quality score for connotative clustering. Moreover, VLLM-based approaches, leveraging textual information as a step towards clustering, offer a high level of interpretability of the results.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"88 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145089650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demystifying Misconceptions in Social Bots Research","authors":"Stefano Cresci, Kai-Cheng Yang, Angelo Spognardi, Roberto Di Pietro, Filippo Menczer, Marinella Petrocchi","doi":"10.1177/08944393251376707","DOIUrl":"https://doi.org/10.1177/08944393251376707","url":null,"abstract":"Research on social bots aims at advancing knowledge and providing solutions to one of the most debated forms of online manipulation. Yet, social bot research is plagued by widespread biases, hyped results, and misconceptions that set the stage for ambiguities, unrealistic expectations, and seemingly irreconcilable findings. Overcoming such issues is instrumental toward ensuring reliable solutions and reaffirming the validity of the scientific method. Here, we discuss a broad set of consequential methodological and conceptual issues that affect current social bots research, illustrating each with examples drawn from recent studies. More importantly, we demystify common misconceptions, addressing fundamental points on how social bots research is discussed. Our analysis surfaces the need to discuss research about online disinformation and manipulation in a rigorous, unbiased, and responsible way. This article bolsters such effort by identifying and refuting common fallacious arguments used by both proponents and opponents of social bots research, as well as providing directions toward sound methodologies for future research.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"74 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145072511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Dark Tetrad in Human–GenAI Relationships: A Multi-Source Evaluation of GenAI Abuse","authors":"Cheng-Yen Wang","doi":"10.1177/08944393251378800","DOIUrl":"https://doi.org/10.1177/08944393251378800","url":null,"abstract":"As generative artificial intelligence (GenAI) companions become increasingly integrated into users’ social lives, concerns have arisen regarding the potential for abuse of these artificial agents. Some scholars have further suggested that such abusive behaviors toward GenAI may eventually spill over into human interpersonal contexts. Guided by the Realistic Accuracy Model (RAM), this study investigated how Machiavellianism, narcissism, psychopathy, and sadism predict emotionally abusive behavior toward GenAI companions. A dyadic design was employed, collecting parallel reports from both human users (self-reports) and their GenAI companions (GenAI assessments) among 1041 participants (632 females; average age = 25.10 years) recruited from an online human–GenAI relationship community. Results demonstrated that psychopathy and sadism were consistent predictors of GenAI abuse across both reporting perspectives, whereas narcissism exhibited a stable negative association with abuse. In contrast, Machiavellianism predicted GenAI abuse only through GenAI assessments, but not self-reports. Theoretically, our findings extend RAM to human–AI relationships, demonstrating that personality traits vary in how accurately they can be judged in GenAI contexts. Practically, the results highlight that individuals high in certain Dark Tetrad traits—specifically psychopathy and sadism—represent personality-driven high-risk groups, providing insights for practitioners in education and technology to develop interventions or safeguards aimed at mitigating abusive behavior toward GenAI companions.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"36 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145056755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social Media Influencers as CSR Advocates: The Role of Credibility, Normative Legitimacy, and Public-Serving Motives","authors":"Jun Zhang, Li Chen, Dongqing Xu","doi":"10.1177/08944393251376702","DOIUrl":"https://doi.org/10.1177/08944393251376702","url":null,"abstract":"This study investigates the role of social media influencers (SMIs) in shaping public perceptions of corporate social responsibility (CSR) initiatives. It specifically examines how perceptions of CSR normative legitimacy interact with SMI credibility to influence public support for CSR efforts through public-serving motives and positive moral emotions. An online survey of 491 U.S. participants measured the impact of CSR normative legitimacy on public-serving motives and positive moral emotions, which subsequently influence CSR-supportive behaviors. SMI credibility, assessed through trustworthiness, attractiveness, and expertise, was examined as a potential moderator in this relationship. The results show that CSR normative legitimacy significantly enhances public-serving motives and positive moral emotions, leading to greater public support for CSR initiatives. SMI credibility, particularly trustworthiness and attractiveness, moderates this relationship, amplifying the positive effects of CSR normative legitimacy.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"38 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144995413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Sociology of Technical Choices in Predictive AI","authors":"Michael Zanger-Tishler, Simone Zhang","doi":"10.1177/08944393251367045","DOIUrl":"https://doi.org/10.1177/08944393251367045","url":null,"abstract":"Predictive AI models increasingly guide high-stakes institutional decisions across domains from criminal justice to education to finance. A rich body of interdisciplinary scholarship has emerged examining the technical choices made during the creation of these systems. This article synthesizes this emerging literature for a sociology audience, mapping key decision points in predictive AI development where diverse forms of sociological expertise can contribute meaningful insights. From how social problems are translated into prediction problems, to how models are developed and evaluated, to how their outputs are presented to decision-makers and subjects, we outline various ways sociologists across subfields and methodological specialities can engage with the technical aspects of predictive AI. We discuss how this engagement can strengthen theoretical frameworks, expose embedded policy choices, and bridge the gap between model development and use. By examining technical choices and design processes, this agenda can deepen understanding of the reciprocal relationship between AI and society while advancing broader sociological theory and research.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"38 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144905762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to Live with COVID-19: Informative Fictions of TikTok Misinformation and Multimodal Video Analysis","authors":"Zituo Wang, Lingtong Hu, Jiayi Zhu, Donggyu Kim, Xiaojing Bo","doi":"10.1177/08944393251366232","DOIUrl":"https://doi.org/10.1177/08944393251366232","url":null,"abstract":"The spread of misinformation has historically been attributed to emotions, thinking styles, biases, and predispositions, but only a few studies have explored the conditions influencing its prevalence. The Theory of Informative Fictions (TIF) addresses this gap by presenting propositions that predict the conditions under which misinformation is tolerated and promoted. Building on the literature on TIF and deep learning, we uncover how property messages and character messages differ in veracity and explore the relationship between visual misinformation and user engagement. By constructing a short video dataset Tikcron ( <jats:italic>N</jats:italic> = 42,201) and a multimodal video analysis framework KILL, we classify TikTok videos as misinformation or not, and property messages or character messages. Our results indicate that character messages are more likely to convey misinformation than property messages, and character messages with misinformation are more likely to get tolerated and promoted by social media users than property messages with misinformation. This study extends the current methodological advancement of image-as-data to misinformation videos and proposes a multimodal video analysis framework to develop communication-centered theories. The broader practical implications of this study on the detection, countering, and governance of visual misinformation are also discussed.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"14 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144898970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linguistic Affordances Framework: A Linguistic-Sociological Approach for the Social Study of Language Technology","authors":"Haley Lepp, AJ Alvero","doi":"10.1177/08944393251366242","DOIUrl":"https://doi.org/10.1177/08944393251366242","url":null,"abstract":"This paper describes a three-part framework to study how language technologies elucidate and shape linguistic relations in society. Reframing a mountain of evidence about language bias in LLMs, we introduce the concept of <jats:italic>linguistic affordances</jats:italic> to attend to how an object can shape social relations through language. First, we contextualize how language ideologies inform social relations in a particular setting. Next, we examine how language ideologies shape the construction of the linguistic affordances of a language technology. Finally, we examine how the linguistic affordances of language technologies lead to new associations that link language and social worth. We describe how this framework can inform both the study of language technologies and the use of language technologies in social science. We demonstrate the framework with two examples: the use of LLMs in college admissions and the adoption of LLMs in scientific publishing.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"51 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144898966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}