{"title":"PIAAC 扫盲中的性别差异:评估特征是否重要?","authors":"Ai Miyamoto, Britta Gauly, Anouk Zabal","doi":"10.1186/s40536-024-00208-9","DOIUrl":null,"url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Background</h3><p>Previous research based on large-scale studies consistently suggests that on average, male students tend to have lower literacy compared to their female students during secondary schooling. However, this gender gap in literacy seems to “disappear” during adulthood. Up until today, only a few studies investigated the role of assessment features in gender differences in literacy performance in adulthood. This study aims to understand the relationship between assessment features and gender differences in literacy skills.</p><h3 data-test=\"abstract-sub-heading\">Methods</h3><p>Using the German 2012 PIAAC data (N = 4,512), we applied item-level analyses using linear probability models to examine gender differences in the probability of solving a literacy item correctly with six assessment features including (1) text format, (2) text topics, (3) text length, (4) cognitive strategies, (5) numerical content of the text/questions, and (6) gender typicality of content.</p><h3 data-test=\"abstract-sub-heading\">Results</h3><p>We found that men had a 13.4% higher probability of solving items with a noncontinuous text format correctly than women. Men also had a 9.4% higher probability of solving short text items correctly and a 4.6% higher probability of solving items with a medium/high numerical content in the question correctly than women. There were small to negligible gender differences in literacy performance in terms of text topics, cognitive strategies, and gender typicality of content.</p><h3 data-test=\"abstract-sub-heading\">Conclusions</h3><p>Our findings highlight the role of text format, text length, and numerical content in gender differences in literacy skills, suggesting that further refining these practices can enhance the fairness and accuracy of literacy assessments. Specifically, we advocate for ongoing research aimed at understanding and minimizing the potential bias introduced by these assessment features. Such efforts are not only crucial for developing instruments that accurately measure literacy skills, but they also yield insights that hold significant implications for educational researchers and practitioners dedicated to creating more equitable assessment environments.</p>","PeriodicalId":37009,"journal":{"name":"Large-Scale Assessments in Education","volume":"1 1","pages":""},"PeriodicalIF":2.6000,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Gender differences in literacy in PIAAC: do assessment features matter?\",\"authors\":\"Ai Miyamoto, Britta Gauly, Anouk Zabal\",\"doi\":\"10.1186/s40536-024-00208-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3 data-test=\\\"abstract-sub-heading\\\">Background</h3><p>Previous research based on large-scale studies consistently suggests that on average, male students tend to have lower literacy compared to their female students during secondary schooling. However, this gender gap in literacy seems to “disappear” during adulthood. Up until today, only a few studies investigated the role of assessment features in gender differences in literacy performance in adulthood. 
This study aims to understand the relationship between assessment features and gender differences in literacy skills.</p><h3 data-test=\\\"abstract-sub-heading\\\">Methods</h3><p>Using the German 2012 PIAAC data (N = 4,512), we applied item-level analyses using linear probability models to examine gender differences in the probability of solving a literacy item correctly with six assessment features including (1) text format, (2) text topics, (3) text length, (4) cognitive strategies, (5) numerical content of the text/questions, and (6) gender typicality of content.</p><h3 data-test=\\\"abstract-sub-heading\\\">Results</h3><p>We found that men had a 13.4% higher probability of solving items with a noncontinuous text format correctly than women. Men also had a 9.4% higher probability of solving short text items correctly and a 4.6% higher probability of solving items with a medium/high numerical content in the question correctly than women. There were small to negligible gender differences in literacy performance in terms of text topics, cognitive strategies, and gender typicality of content.</p><h3 data-test=\\\"abstract-sub-heading\\\">Conclusions</h3><p>Our findings highlight the role of text format, text length, and numerical content in gender differences in literacy skills, suggesting that further refining these practices can enhance the fairness and accuracy of literacy assessments. Specifically, we advocate for ongoing research aimed at understanding and minimizing the potential bias introduced by these assessment features. Such efforts are not only crucial for developing instruments that accurately measure literacy skills, but they also yield insights that hold significant implications for educational researchers and practitioners dedicated to creating more equitable assessment environments.</p>\",\"PeriodicalId\":37009,\"journal\":{\"name\":\"Large-Scale Assessments in Education\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2024-07-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Large-Scale Assessments in Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s40536-024-00208-9\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Large-Scale Assessments in Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s40536-024-00208-9","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Gender differences in literacy in PIAAC: do assessment features matter?
Background
Previous research based on large-scale studies consistently suggests that, on average, male students tend to have lower literacy than female students during secondary schooling. However, this gender gap in literacy appears to “disappear” in adulthood. To date, only a few studies have investigated the role of assessment features in gender differences in adult literacy performance. This study aims to clarify the relationship between assessment features and gender differences in literacy skills.
Methods
Using the German 2012 PIAAC data (N = 4,512), we conducted item-level analyses with linear probability models to examine gender differences in the probability of correctly solving a literacy item across six assessment features: (1) text format, (2) text topics, (3) text length, (4) cognitive strategies, (5) numerical content of the text/questions, and (6) gender typicality of content.
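For readers unfamiliar with the approach, a linear probability model regresses the binary correctness indicator directly on the predictors, so coefficients can be read as differences in solution probability. A minimal sketch is given below; the interaction specification is an assumption for illustration, as the abstract does not state the exact model or controls:

\begin{equation}
y_{ij} = \beta_0 + \beta_1\,\mathrm{male}_i + \beta_2\,\mathrm{feature}_j + \beta_3\,(\mathrm{male}_i \times \mathrm{feature}_j) + \varepsilon_{ij}
\end{equation}

where $y_{ij} \in \{0,1\}$ indicates whether respondent $i$ solved item $j$ correctly, $\mathrm{feature}_j$ codes the assessment feature of item $j$ (e.g., noncontinuous vs. continuous text format), and $\beta_3$ captures the gender-by-feature difference in the probability of a correct response.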
Results
We found that men had a 13.4% higher probability than women of correctly solving items with a noncontinuous text format. Men also had a 9.4% higher probability of correctly solving short text items and a 4.6% higher probability of correctly solving items whose question had medium/high numerical content. Gender differences in literacy performance with respect to text topics, cognitive strategies, and gender typicality of content were small to negligible.
Conclusions
Our findings highlight the role of text format, text length, and numerical content in gender differences in literacy skills, suggesting that further refining how these features are used in item design can enhance the fairness and accuracy of literacy assessments. Specifically, we advocate for ongoing research aimed at understanding and minimizing the potential bias introduced by these assessment features. Such efforts are not only crucial for developing instruments that accurately measure literacy skills; they also yield insights with significant implications for educational researchers and practitioners dedicated to creating more equitable assessment environments.