Katie Corbitt, Karen Hiltbrand, Madison Coursen, Soren Rodning, W. Brandon Smith, Don Mulvaney
{"title":"Credibility Judgments in Higher Education: A Mixed-Methods Approach to Detecting Misinformation from University Instructors","authors":"Katie Corbitt, Karen Hiltbrand, Madison Coursen, Soren Rodning, W. Brandon Smith, Don Mulvaney","doi":"10.3390/educsci14080852","DOIUrl":null,"url":null,"abstract":"Given the convenience with which information can now be acquired, it is crucial to analyze cases of potential misinformation and disinformation in postsecondary education. Instructor credibility judgments were measured using descriptive survey research, and the main objective was to investigate trends related to misinformation, credibility, trust, bias, and others in graduate students and on a graduate program basis. Participants were surveyed from a land grant institution in the southeast United States where 186 graduate students completed an electronic survey on the detection of misinformation and similar experiences. Graduate students were divided based on graduate program into STEM (sciences, technology, engineering, and mathematics) and non-STEM groups. Quantitative methodologies included validated questionnaires developed by researchers containing Likert-type scale questions. Chi-square tests of independence and frequencies served as primary analyses. Participants in both STEM and non-STEM groups detected the following: misinformation, bias, challenges, intimidation, risk of measurable consequences, pressure to conform, and skepticism from post-secondary instructors. There were significant differences between the type of student for trust in claims (p < 0.05), while the perception of potential consequences tended to be different between the types of graduate students (0.05 < p < 0.10). Participants in both STEM and non-STEM groups reported perception bias in science material presentation, with STEM students reporting less bias. Qualitative methodologies included optional open response boxes to provide supporting details or narratives. 
Reliable and validated thematic coding following served as the primary analysis. Students disciplined in STEM and non-STEM faced misinformation, bias, challenges, intimidation, risk of measurable consequences, pressure to conform, and skepticism from post-secondary instructors. Graduate students reported consistent instances of misinformation and bias about science and agriculture topics in both science and non-science-focused classrooms.","PeriodicalId":11472,"journal":{"name":"Education Sciences","volume":"41 1","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Education Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/educsci14080852","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0
Abstract
Given the convenience with which information can now be acquired, it is crucial to analyze cases of potential misinformation and disinformation in postsecondary education. Instructor credibility judgments were measured using descriptive survey research, and the main objective was to investigate trends related to misinformation, credibility, trust, bias, and related constructs among graduate students and on a graduate-program basis. Participants were recruited from a land-grant institution in the southeastern United States, where 186 graduate students completed an electronic survey on the detection of misinformation and similar experiences. Graduate students were divided by graduate program into STEM (science, technology, engineering, and mathematics) and non-STEM groups. Quantitative methodologies included validated, researcher-developed questionnaires containing Likert-type scale questions. Chi-square tests of independence and frequency analyses served as the primary analyses. Participants in both STEM and non-STEM groups detected the following: misinformation, bias, challenges, intimidation, risk of measurable consequences, pressure to conform, and skepticism from postsecondary instructors. There were significant differences between student types in trust in claims (p < 0.05), while perception of potential consequences tended to differ between the types of graduate students (0.05 < p < 0.10). Participants in both STEM and non-STEM groups reported perceived bias in the presentation of science material, with STEM students reporting less bias. Qualitative methodologies included optional open-response boxes for providing supporting details or narratives. Reliable and validated thematic coding served as the primary qualitative analysis. Students in both STEM and non-STEM disciplines faced misinformation, bias, challenges, intimidation, risk of measurable consequences, pressure to conform, and skepticism from postsecondary instructors.
Graduate students reported consistent instances of misinformation and bias about science and agriculture topics in both science and non-science-focused classrooms.
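The primary quantitative analysis named above, a chi-square test of independence, checks whether response frequencies (e.g., trust-in-claims ratings) are distributed independently of student type. As a minimal sketch of that procedure, the snippet below computes the statistic by hand for a hypothetical contingency table; the counts and category labels are invented for illustration and are not the study's data.

```python
# Sketch of a chi-square test of independence (not the authors' analysis code).
# Rows: student type (STEM, non-STEM); columns: hypothetical trust-level counts.

def chi_square_independence(table):
    """Return (chi-square statistic, degrees of freedom) for a 2-D
    contingency table given as a list of lists of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

# Hypothetical counts: columns = low / medium / high trust in claims.
counts = [[12, 30, 48],   # STEM respondents
          [25, 28, 22]]   # non-STEM respondents
stat, dof = chi_square_independence(counts)
print(f"chi2 = {stat:.2f}, dof = {dof}")
```

The computed statistic would then be compared against a chi-square critical value at the chosen alpha (here with 2 degrees of freedom) to decide significance, matching the p < 0.05 threshold reported in the abstract.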