Title: The Unrealized Potential of Audits: Applicant-Side Inequalities in Effort, Opportunities, and Certainty
Authors: Mike Vuolo, Sadé L. Lindsay, Vincent J. Roscigno, Shawn D. Bushway
Journal: Sociological Methods & Research (impact factor 6.3). DOI: 10.1177/00491241251338240. Published: 2025-05-22.
Abstract: Randomized audits and correspondence studies are widely regarded as a “gold standard” for capturing discrimination and bias. However, gatekeepers (e.g., employers) are the analytic unit even though stated implications often center on group-level inequalities. Employing simple rules, we show that audits have the potential to uncover applicant-side inequalities and burdens beyond the gatekeeper biases standardly reported. Specifically, applicants from groups facing lower callback rates must submit more applications to ensure an eventual callback, have fewer opportunities to choose from, and face higher uncertainty regarding how many applications to submit. These results reflect several sequential and cumulative stratification processes “real-world” applicants face that warrant attention in conventional audit reporting. Our approach can be straightforwardly applied and, we show, is particularly pertinent for employment relative to other institutional domains (e.g., education, religion). We discuss the methodological and theoretical relevance of our suggested extensions and the implications for the study of inequality, discrimination, and social closure.

Title: Updating “The Future of Coding”: Qualitative Coding with Generative Large Language Models
Authors: Nga Than, Leanne Fan, Tina Law, Laura K. Nelson, Leslie McCall
Journal: Sociological Methods & Research. DOI: 10.1177/00491241251339188. Published: 2025-05-21.
Abstract: Over the past decade, social scientists have adapted computational methods for qualitative text analysis, with the hope that they can match the accuracy and reliability of hand coding. The emergence of GPT and open-source generative large language models (LLMs) has transformed this process by shifting from programming to engaging with models using natural language, potentially mimicking the in-depth, inductive, and/or iterative process of qualitative analysis. We test the ability of generative LLMs to replicate and augment traditional qualitative coding, experimenting with multiple prompt structures across four closed- and open-source generative LLMs and proposing a workflow for conducting qualitative coding with generative LLMs. We find that LLMs can perform nearly as well as prior supervised machine learning models in accurately matching hand-coding output. Moreover, using generative LLMs as a natural language interlocutor closely replicates traditional qualitative methods, indicating their potential to transform the qualitative research process, despite ongoing challenges.

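A standard way to check whether LLM codes “match hand-coding output,” as this article tests, is a chance-corrected agreement statistic such as Cohen’s kappa. A self-contained sketch, with invented theme labels for illustration (not the authors’ data or exact metric):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa between two coders' labels over the same documents."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement under independent coding, from each coder's marginal label rates.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    labels = set(codes_a) | set(codes_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical human vs. LLM codes for six open-ended responses.
human = ["theme_work", "theme_family", "theme_work", "theme_work", "theme_family", "theme_health"]
llm   = ["theme_work", "theme_family", "theme_work", "theme_family", "theme_family", "theme_health"]
kappa = cohens_kappa(human, llm)
```

Kappa near 1 indicates the LLM reproduces the hand coding beyond what matching marginal label frequencies alone would produce.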
Title: Balancing Large Language Model Alignment and Algorithmic Fidelity in Social Science Research
Authors: Alex Lyman, Bryce Hepner, Lisa P. Argyle, Ethan C. Busby, Joshua R. Gubler, David Wingate
Journal: Sociological Methods & Research. DOI: 10.1177/00491241251342008. Published: 2025-05-21.
Abstract: Generative artificial intelligence (AI) has the potential to revolutionize social science research. However, researchers face the difficult challenge of choosing a specific AI model, often without social science-specific guidance. To demonstrate the importance of this choice, we present an evaluation of the effect of alignment, or human-driven modification, on the ability of large language models (LLMs) to simulate the attitudes of human populations (sometimes called “silicon sampling”). We benchmark aligned and unaligned versions of six open-source LLMs against each other and compare them to similar responses by humans. Our results suggest that model alignment impacts output in predictable ways, with implications for prompting, task completion, and the substantive content of LLM-based results. We conclude that researchers must be aware of the complex ways in which model training affects their research and carefully consider model choice for each project. We discuss future steps to improve how social scientists work with generative AI tools.

Title: Generative AI Meets Open-Ended Survey Responses: Research Participant Use of AI and Homogenization
Authors: Simone Zhang, Janet Xu, AJ Alvero
Journal: Sociological Methods & Research. DOI: 10.1177/00491241251327130. Published: 2025-05-07.
Abstract: The growing popularity of generative artificial intelligence (AI) tools presents new challenges for data quality in online surveys and experiments. This study examines participants’ use of large language models to answer open-ended survey questions and describes empirical tendencies in human versus large language model (LLM)-generated text responses. In an original survey of research participants recruited from a popular online platform for sourcing social science research subjects, 34 percent reported using LLMs to help them answer open-ended survey questions. Simulations comparing human-written responses from three pre-ChatGPT studies with LLM-generated text reveal that LLM responses are more homogeneous and positive, particularly when they describe social groups in sensitive questions. These homogenization patterns may mask important underlying social variation in attitudes and beliefs among human subjects, raising concerns about data validity. Our findings shed light on the scope and potential consequences of participants’ LLM use in online research.

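Response homogeneity of the kind described here can be quantified in several ways; one simple proxy (not necessarily the authors’ measure) is the average pairwise cosine similarity of bag-of-words vectors across responses, where higher values indicate more uniform text:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def mean_pairwise_similarity(texts):
    """Average cosine similarity over all response pairs; higher = more homogeneous."""
    bags = [Counter(t.lower().split()) for t in texts]
    pairs = [(i, j) for i in range(len(bags)) for j in range(i + 1, len(bags))]
    return sum(cosine(bags[i], bags[j]) for i, j in pairs) / len(pairs)
```

Comparing this statistic for human-written versus LLM-generated response sets is one crude way to detect the flattening of variation the abstract warns about.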
Title: Integrating Generative Artificial Intelligence into Social Science Research: Measurement, Prompting, and Simulation
Authors: Thomas Davidson, Daniel Karell
Journal: Sociological Methods & Research. DOI: 10.1177/00491241251339184. Published: 2025-05-07.
Abstract: Generative artificial intelligence (AI) offers new capabilities for analyzing data, creating synthetic media, and simulating realistic social interactions. This essay introduces a special issue that examines how these and other affordances of generative AI can advance social science research. We discuss three core themes that appear across the contributed articles: rigorous measurement and validation of AI-generated outputs, optimizing model performance and reproducibility via prompting, and novel uses of AI for the simulation of attitudes and behaviors. We highlight how generative AI enables new methodological innovations that complement and augment existing approaches. This essay and the special issue’s ten articles collectively provide a detailed roadmap for integrating generative AI into social science research in theoretically informed and methodologically rigorous ways. We conclude by reflecting on the implications of the ongoing advances in AI.

Title: Large Language Models for Text Classification: From Zero-Shot Learning to Instruction-Tuning
Authors: Youngjin Chae, Thomas Davidson
Journal: Sociological Methods & Research. DOI: 10.1177/00491241251325243. Published: 2025-04-24.
Abstract: Large language models (LLMs) have tremendous potential for social science research as they are trained on vast amounts of text and can generalize to many tasks. We explore the use of LLMs for supervised text classification, specifically the application to stance detection, which involves detecting attitudes and opinions in texts. We examine the performance of these models across different architectures, training regimes, and task specifications. We compare 10 models ranging in size from tens of millions to hundreds of billions of parameters and test four distinct training regimes: prompt-based zero-shot learning and few-shot learning, fine-tuning, and instruction-tuning, which combines prompting and fine-tuning. The largest, most powerful models generally offer the best predictive performance even with little or no training examples, but fine-tuning smaller models is a competitive solution due to their relatively high accuracy and low cost. Instruction-tuning the latest generative LLMs expands the scope of text classification, enabling applications to more complex tasks than previously feasible. We offer practical recommendations on the use of LLMs for text classification in sociological research and discuss their limitations and challenges. Ultimately, LLMs can make text classification and other text analysis methods more accurate, accessible, and adaptable, opening new possibilities for computational social science.

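The practical difference between the zero-shot and few-shot prompting regimes compared in this article is whether labeled demonstrations are included in the prompt. A hypothetical stance-detection prompt builder (the template, labels, and example texts are illustrative, not the authors’ materials):

```python
STANCE_LABELS = ["favor", "against", "neutral"]  # illustrative label set

def build_prompt(text, target, examples=None):
    """Build a zero-shot (no examples) or few-shot (with examples) stance prompt."""
    lines = [
        f"Classify the stance of the text toward '{target}'.",
        f"Answer with one of: {', '.join(STANCE_LABELS)}.",
    ]
    # Few-shot regime: prepend labeled demonstrations before the query.
    for ex_text, ex_label in (examples or []):
        lines += [f"Text: {ex_text}", f"Stance: {ex_label}"]
    lines += [f"Text: {text}", "Stance:"]
    return "\n".join(lines)

zero_shot = build_prompt("Mandates protect everyone.", "vaccine mandates")
few_shot = build_prompt(
    "Mandates protect everyone.", "vaccine mandates",
    examples=[("Nobody should be forced.", "against")],
)
```

Fine-tuning and instruction-tuning, the other two regimes tested, instead update model weights on labeled data, so the prompt alone no longer carries all the task information.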
Title: Conceptualizing Job and Employment Concepts for Earnings Inequality Estimands With Linked Employer-Employee Data
Authors: Donald Tomaskovic-Devey, Chen-Shuo Hong
Journal: Sociological Methods & Research. DOI: 10.1177/00491241251334124. Published: 2025-04-24.
Abstract: We examine variations in pay gap estimates and inferences associated with distinct conceptualizations of jobs and employment contexts under legal and comparable worth theories of pay bias. We find that job titles produce smaller estimates of within job pay gaps than job groups, but the inferential importance of job concepts differs across organizational, workplace, and job groups within workplace units of observation. Moving from more to less job concept detail, we find almost no inference differences when pay gaps are estimated at the organizational level. Tradeoffs at the workplace and job groups within workplace levels are more common, comprising around 10 percent to 20 percent of observations. A legal theoretical framework leads to fewer empirical estimates of significant pay disparities, while comparable worth estimates suggest higher levels of gender and racial bias at the job and workplace levels. This research has implications for future analyses of linked employer-employee data and for both scientific research and regulatory enforcement of equal opportunity law.

Title: The Target Study: A Conceptual Model and Framework for Measuring Disparity
Authors: John W. Jackson, Yea-Jen Hsu, Raquel C. Greer, Romsai T. Boonyasai, Chanelle J. Howe
Journal: Sociological Methods & Research. DOI: 10.1177/00491241251314037. Published: 2025-04-22.
Abstract: We present a conceptual model to measure disparity—the target study—where social groups may be similarly situated (i.e., balanced) on allowable covariates. Our model, based on a sampling design, does not intervene to assign social group membership or alter allowable covariates. To address nonrandom sample selection, we extend our model to generalize or transport disparity or to assess disparity after an intervention on eligibility-related variables that eliminates forms of collider-stratification. To avoid bias from differential timing of enrollment, we aggregate time-specific study results by balancing calendar time of enrollment across social groups. To provide a framework for emulating our model, we discuss study designs, data structures, and G-computation and weighting estimators. We compare our sampling-based model to prominent decomposition-based models used in healthcare and algorithmic fairness. We provide R code for all estimators and apply our methods to measure health system disparities in hypertension control using electronic medical records.

Title: Networks Beyond Categories: A Computational Approach to Examining Gender Homophily
Authors: Chen-Shuo Hong
Journal: Sociological Methods & Research. DOI: 10.1177/00491241251321152. Published: 2025-04-22.
Abstract: Social networks literature has explored homophily, the tendency to associate with similar others, as a critical boundary-making process contributing to segregated networks along the lines of identities. Yet, social network research generally conceptualizes identities as sociodemographic categories and seldom considers the inherently continuous and heterogeneous nature of differences. Drawing upon the infracategorical model of inequality, this study demonstrates that a computational approach, combining machine learning and exponential random graph models (ERGMs), can capture the role of categorical conformity in network structures. Through a case study of gender segregation in friendships, this study presents a workflow for developing a machine-learning-based gender conformity measure and applying it to guide the social network analysis of cultural matching. Results show that adolescents with similar gender conformity are more likely to form friendships, net of homophily based on categorical gender and other controls, and homophily by gender conformity mediates homophily by categorical gender. The study concludes by discussing the limitations of this computational approach and its unique strengths in enhancing theories on categories, boundaries, and stratification.

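Homophily on a continuous attribute such as a conformity score typically enters an ERGM as a dyadic covariate, for example the absolute difference in scores (analogous to the absdiff term in the statnet ergm package). A small illustration with made-up conformity scores, not the article’s data:

```python
def absdiff_covariate(scores):
    """Dyadic |score_i - score_j| matrix for homophily on a continuous attribute."""
    n = len(scores)
    return [[abs(scores[i] - scores[j]) for j in range(n)] for i in range(n)]

# Hypothetical gender-conformity scores for four adolescents.
conformity = [0.9, 0.85, 0.2, 0.15]
cov = absdiff_covariate(conformity)
# A negative ERGM coefficient on this covariate would indicate that friendship
# ties are more likely between adolescents with similar conformity scores,
# over and above any homophily term for the categorical gender attribute.
```

This is only the covariate-construction step; estimating the ERGM itself requires specialized software such as statnet in R.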
Title: The Mixed Subjects Design: Treating Large Language Models as Potentially Informative Observations
Authors: David Broska, Michael Howes, Austin van Loon
Journal: Sociological Methods & Research. DOI: 10.1177/00491241251326865. Published: 2025-04-22.
Abstract: Large language models (LLMs) provide cost-effective but possibly inaccurate predictions of human behavior. Despite growing evidence that predicted and observed behavior are often not interchangeable, there is limited guidance on using LLMs to obtain valid estimates of causal effects and other parameters. We argue that LLM predictions should be treated as potentially informative observations, while human subjects serve as a gold standard in a mixed subjects design. This paradigm preserves validity and offers more precise estimates at a lower cost than experiments relying exclusively on human subjects. We demonstrate—and extend—prediction-powered inference (PPI), a method that combines predictions and observations. We define the PPI correlation as a measure of interchangeability and derive the effective sample size for PPI. We also introduce a power analysis to optimally choose between informative but costly human subjects and less informative but cheap predictions of human behavior. Mixed subjects designs could enhance scientific productivity and reduce inequality in access to costly evidence.
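For a simple target such as a population mean, the PPI point estimate combines the mean of model predictions over a large unlabeled sample with a rectifier, the mean prediction error estimated on the small gold-standard human sample. A minimal sketch with invented numbers (not the article’s data):

```python
def ppi_mean(preds_unlabeled, y_labeled, preds_labeled):
    """Prediction-powered estimate of E[Y]: mean prediction on the large
    unlabeled sample, plus a bias correction ('rectifier') from the small
    gold-standard sample where both Y and the prediction are observed."""
    mean_pred = sum(preds_unlabeled) / len(preds_unlabeled)
    rectifier = sum(y - f for y, f in zip(y_labeled, preds_labeled)) / len(y_labeled)
    return mean_pred + rectifier

# Synthetic example: LLM predictions systematically overshoot by about 0.1.
preds_big = [0.6, 0.7, 0.8, 0.7, 0.6]  # LLM predictions for unlabeled units
y_small = [0.5, 0.7]                   # human observations, gold-standard units
f_small = [0.6, 0.8]                   # LLM predictions for those same units
estimate = ppi_mean(preds_big, y_small, f_small)
```

The rectifier is what lets cheap but biased predictions contribute precision without compromising validity: if the predictions were perfect, the correction would be zero and the estimate would reduce to the prediction mean.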