{"title":"Supplemental Material for Quick and Dirty: An Evaluation of Plea Colloquy Validity in the Virtual Courtroom","authors":"","doi":"10.1037/lhb0000619.supp","DOIUrl":"https://doi.org/10.1037/lhb0000619.supp","url":null,"abstract":"","PeriodicalId":48230,"journal":{"name":"Law and Human Behavior","volume":"18 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144229350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supplemental Material for What Do People Want From Algorithms? Public Perceptions of Algorithms in Government","authors":"","doi":"10.1037/lhb0000614.supp","DOIUrl":"https://doi.org/10.1037/lhb0000614.supp","url":null,"abstract":"","PeriodicalId":48230,"journal":{"name":"Law and Human Behavior","volume":"51 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144229449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effects of implicit bias interventions on mock jurors' civil trial decisions and perceptions of the courts.","authors":"Megan L Lawrence, Kristen L Gittings, Valerie P Hans, John C Campbell, Jessica M Salerno","doi":"10.1037/lhb0000610","DOIUrl":"10.1037/lhb0000610","url":null,"abstract":"<p><strong>Objective: </strong>In an attempt to reduce juror bias, courts across the United States are educating jurors about how implicit bias impacts decision making. We tested whether novel implicit bias interventions-in the form of educational videos or judicial instructions-reduce the relationship between mock jurors' explicit racial biases and their case decisions for Black plaintiffs and/or increase mock jurors' trust in the courts to deliver fair outcomes.</p><p><strong>Hypotheses: </strong>We predicted that mock jurors' increased explicit racial biases would predict less favorable case outcomes for Black plaintiffs but not for White plaintiffs (Studies 1 and 2). We presented competing hypotheses about whether an implicit bias intervention would mitigate, exacerbate, or have no effect on this relationship and explored whether they improved mock jurors' trust in the courts' ability to produce fair outcomes (Study 2).</p><p><strong>Method: </strong>In Study 1 (<i>N</i> = 407) and Study 2 (<i>N</i> = 1,016), White mock jurors were randomly assigned to judge a civil case with a Black or White plaintiff and then completed measures capturing their implicit and explicit racial biases. In Study 2, mock jurors were also randomly assigned to watch an implicit bias educational video, watch a video of a judge delivering implicit bias instructions, or neither (i.e., control condition).</p><p><strong>Results: </strong>As hypothesized, mock jurors' increased explicit racial biases predicted less favorable verdicts for Black plaintiffs but not for White plaintiffs. Implicit bias judicial instructions increased pro-plaintiff verdicts and mock jurors' trust in the courts in cases with Black plaintiffs. However, we did not find evidence that educational videos impacted these outcomes, which warrants further study. Neither intervention reduced the relationship between explicit racial bias and verdicts for Black plaintiffs.</p><p><strong>Conclusions: </strong>Anti-bias judicial instructions might hold some promise but need further testing; implicit bias videos had no impact. In the meantime, court systems must explore additional remedies to achieve an impartial jury. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":48230,"journal":{"name":"Law and Human Behavior","volume":" ","pages":"186-205"},"PeriodicalIF":2.4,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144182199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quick and dirty: An evaluation of plea colloquy validity in the virtual courtroom.","authors":"Miko M Wilford, Annabelle Frazier, Ariana Lowe, Peyton Newsome, Hannah V Strong","doi":"10.1037/lhb0000619","DOIUrl":"10.1037/lhb0000619","url":null,"abstract":"<p><strong>Objective: </strong>Court proceedings, particularly after the COVID-19 pandemic, have increasingly occurred outside the courtroom. Yet little research has examined the format and content of virtual hearings, particularly those that result in a criminal conviction. We compiled a sample of recorded plea hearings (colloquies) to examine how this virtual format might impact the validity of defendant decisions.</p><p><strong>Hypotheses: </strong>Given the exploratory nature of this research, we had no a priori hypotheses.</p><p><strong>Method: </strong>We searched YouTube for judicial channels to secure recordings of virtual hearings. An initial sample of 340 recordings was obtained; upon further review, 106 recordings were excluded because the most serious initial and final charges were noncriminal civil infractions (providing a study sample of 234). Each hearing was reviewed for variables relating to the characteristics of the hearing (e.g., duration, crime type) and content included (e.g., plea validity assessments-knowingness, intelligence, and voluntariness).</p><p><strong>Results: </strong>Virtual plea colloquies averaged only 3.88 min in length and were often characterized by few efforts to assess their validity. Judges explicitly inquired about the knowingness, intelligence, and voluntariness of each plea relatively infrequently. We also observed great variability in the frequency with which prosecutors, defense attorneys, and even defendants were visible during the proceedings (i.e., had their cameras on). Further, online-related difficulties (e.g., audio disruptions) occurred regularly, yet these disruptions were not associated with longer hearings.</p><p><strong>Conclusions: </strong>The current research indicates that online plea colloquies are at least as efficient as their in-person counterparts (in terms of average duration), despite added obstacles to their flow (e.g., technological issues). In addition, our findings indicate little consistency in how plea knowingness, intelligence, and voluntariness are ensured virtually, with significant variation observed across judges. Further research is needed to determine the generalizability of these findings and to examine guidelines that could reduce the costs associated with virtual hearings (e.g., soundchecks). (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":48230,"journal":{"name":"Law and Human Behavior","volume":" ","pages":"311-322"},"PeriodicalIF":2.4,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What do people want from algorithms? Public perceptions of algorithms in government.","authors":"Amit Haim, Dvir Yogev","doi":"10.1037/lhb0000614","DOIUrl":"10.1037/lhb0000614","url":null,"abstract":"<p><strong>Objective: </strong>This study examined how specific attributes of algorithmic decision-making tools (ADTs), related to algorithm design and institutional governance, affect the public's perceptions of implementing ADTs in government programs.</p><p><strong>Hypotheses: </strong>We hypothesized that acceptability varies systematically by policy domain. Regarding algorithm design, we predicted that higher accuracy, transparency, and government in-house development will enhance acceptability. Institutional features were also expected to shape perceptions: Explanations, stakeholder engagement, oversight mechanisms, and human involvement were anticipated to increase public acceptance.</p><p><strong>Method: </strong>This study employed a conjoint experimental design with 1,213 U.S. adults. Participants evaluated five policy proposals, each featuring a proposal to implement an ADT. Each proposal included randomly generated attributes across nine dimensions. Participants decided on the ADT's acceptability, fairness, and efficiency for each proposal. The analysis focused on the average marginal component effects of ADT attributes.</p><p><strong>Results: </strong>A combination of attributes related to process individualization significantly enhanced the perceived acceptability of the use of algorithms by government. Participants preferred ADTs that elevate the agency of the stakeholder (decision explanations, hearing options, notices, and human involvement in the decision-making process). The policy domain mattered most for fairness and acceptability, whereas accuracy mattered most for efficiency perceptions.</p><p><strong>Conclusion: </strong>Explaining decisions made using an algorithm, giving appropriate notice, providing a hearing option, and maintaining the supervision of a human agent are key components for public support when algorithmic systems are being implemented. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":48230,"journal":{"name":"Law and Human Behavior","volume":" ","pages":"263-280"},"PeriodicalIF":2.4,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Special issue on justice, legitimacy, and technology.","authors":"Brandon Garrett, Christopher M King, David DeMatteo","doi":"10.1037/lhb0000623","DOIUrl":"https://doi.org/10.1037/lhb0000623","url":null,"abstract":"<p><p>This special issue explores the intersection of justice, legitimacy, and technology to illuminate connections among these inter-related concepts and provide much-needed data that have the potential to inform governmental actors and institutions. This Introduction begins with a discussion of the motivating influences and goals for the special issue, followed by a summary of the articles we selected for inclusion. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":48230,"journal":{"name":"Law and Human Behavior","volume":"49 3","pages":"183-185"},"PeriodicalIF":2.4,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144508890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supplemental Material for The Effects of Implicit Bias Interventions on Mock Jurors’ Civil Trial Decisions and Perceptions of the Courts","authors":"","doi":"10.1037/lhb0000610.supp","DOIUrl":"https://doi.org/10.1037/lhb0000610.supp","url":null,"abstract":"","PeriodicalId":48230,"journal":{"name":"Law and Human Behavior","volume":"26 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144229346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How chatbot communication styles impact citizen reports to police: Testing procedural justice and overaccommodation approaches in a survey experiment.","authors":"Callie Vitro, Erin M Kearns, Joel S Elson","doi":"10.1037/lhb0000613","DOIUrl":"https://doi.org/10.1037/lhb0000613","url":null,"abstract":"<p><strong>Objective: </strong>We developed and tested a chatbot for reporting information to police. We examined how chatbot communication styles impacted three outcomes: (a) report accuracy, (b) willingness to provide contact information, and (c) user trust in the chatbot system.</p><p><strong>Hypotheses: </strong>In police-citizen interactions, people respond more positively when police officers use a combination of power and solidarity in their communication. We expected that this would hold for citizen-reporting chatbot interactions.</p><p><strong>Method: </strong>We conducted an online survey experiment with 950 U.S. adults who approximated the population on key demographics. Participants watched a video of a suspicious scenario and reported the incident to a chatbot. We manipulated and programmed the communication style of a generative pre-trained transformer chatbot to include elements of the power-solidarity framework from linguistics to create a 2 (power: low vs. high) × 2 (solidarity: low vs. high) design. We then compared three outcomes across conditions.</p><p><strong>Results: </strong>The high power-high solidarity condition yielded the most positive responses. Relative to high power-high solidarity reports, low power-low solidarity reports were less accurate about the individual involved. Trust in the chatbot and willingness to provide contact information did not vary across conditions.</p><p><strong>Conclusion: </strong>Findings contributed to criminological, linguistic, and information technology literatures to show how communication styles impact user responses to and perceptions of a chatbot for reporting to police. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":48230,"journal":{"name":"Law and Human Behavior","volume":"121 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144087832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supplemental Material for How Chatbot Communication Styles Impact Citizen Reports to Police: Testing Procedural Justice and Overaccommodation Approaches in a Survey Experiment","authors":"","doi":"10.1037/lhb0000613.supp","DOIUrl":"https://doi.org/10.1037/lhb0000613.supp","url":null,"abstract":"","PeriodicalId":48230,"journal":{"name":"Law and Human Behavior","volume":"36 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144229345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"\"I do not have an opinion about that yet\": Qualitative research on perceived procedural justice of self-represented litigants in early stages of small claims procedures in the Netherlands.","authors":"Anne A A Janssen, Kees van den Bos, Kim G F van der Kraats","doi":"10.1037/lhb0000612","DOIUrl":"https://doi.org/10.1037/lhb0000612","url":null,"abstract":"<p><strong>Objective: </strong>Building on recent suggestions that there are, thus far, unnoticed levels of increased polarization and decreased perceived legitimacy of the judiciary within the Netherlands, we studied the experiences of self-represented litigants in early stages of Dutch small claims procedures. Our objective was to assess by means of qualitative interviews (a) whether litigants would mention experiences of perceived procedural justice during these court procedures and, (b) if so, what elements of perceived procedural justice they would mention, (c) how they form judgments of trust in judges, and (d) whether interviewees would mention spontaneously that in these early stages of court procedures, with limited information available, they do not know (yet) whether they perceive a judge as fair or can trust a judge handling their case.</p><p><strong>Research Question: </strong>What role, if any, do judgments of procedural justice, trust in judges, and informational uncertainty play in early stages of civil procedures?</p><p><strong>Method: </strong>We held 115 interviews with self-represented litigants about their experiences with prehearings in Dutch small claims procedures. We asked respondents in various ways about procedural justice and trust in judges. We coded whether litigants mentioned spontaneously that they did not have enough information to answer these questions.</p><p><strong>Results: </strong>Respondents mentioned procedural fairness perceptions spontaneously when asked directly about fair treatment and when interviewed about specific procedural justice components. Interestingly, almost half of the respondents indicated that they did not have an opinion about at least one procedural justice component. When asked about trust in judges, various respondents also indicated that they did not have an opinion yet.</p><p><strong>Conclusions: </strong>These results suggest that (a) perceived procedural justice matters to self-represented litigants in civil procedures, and (b) in early stages of court procedures, people may not know whether they perceive a judge as fair or can trust judges and may indicate this spontaneously in interviews. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":48230,"journal":{"name":"Law and Human Behavior","volume":"52 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144087833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}