Examining the Behavior of LLM Architectures Within the Framework of Standardized National Exams in Brazil

Marcelo Sartori Locatelli, Matheus Prado Miranda, Igor Joaquim da Silva Costa, Matheus Torres Prates, Victor Thomé, Mateus Zaparoli Monteiro, Tomas Lacerda, Adriana Pagano, Eduardo Rios Neto, Wagner Meira Jr., Virgilio Almeida

arXiv - CS - Computers and Society, 2024-08-09. https://doi.org/arxiv-2408.05035
Abstract
The Exame Nacional do Ensino Médio (ENEM) is a pivotal test for Brazilian students, required for admission to a significant number of universities in Brazil. The test consists of four objective high-school-level tests, on Mathematics, Humanities, Natural Sciences, and Languages, plus an essay. Students' answers to the test and to the accompanying socioeconomic status questionnaire are made public every year (albeit anonymized) under the Brazilian Government's transparency policies. In the context of large language models (LLMs), these data lend themselves nicely to comparing different groups of humans with AI, since both the human and the machine answer distributions are available. We leverage these characteristics of the ENEM dataset to compare GPT-3.5, GPT-4, and MariTalk, a model trained on Portuguese data, with humans, aiming to ascertain how their answers relate to real societal groups and what that may reveal about model biases. We divide the human respondents into groups by socioeconomic status (SES) and compare their answer distributions with those of the LLMs for each question and for the essay. We find no significant biases when comparing LLM performance to humans on the multiple-choice Brazilian Portuguese tests, as the distance between model and human answers is mostly determined by human accuracy. A similar conclusion holds for the generated text: when analyzing the essays, we observe that human and LLM essays differ in a few key factors, one being word choice, on which model essays were easily separable from human ones. The texts also differ syntactically, with LLM-generated essays exhibiting, on average, shorter sentences and fewer thought units, among other differences. These results suggest that, for Brazilian Portuguese in the ENEM context, LLM outputs represent no group of humans, being significantly different from the answers of Brazilian students across all tests.
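To make the distributional comparison in the abstract concrete, the sketch below computes a distance between one human SES group's answer distribution and an LLM's answer distribution on a single multiple-choice question. The metric (Jensen-Shannon distance) and all data are illustrative assumptions; the abstract does not name the measure the authors used.

```python
# Minimal sketch of comparing human vs. LLM answer distributions on one
# multiple-choice question. The Jensen-Shannon distance and the sample
# answers are assumptions for illustration, not the paper's exact method.
import numpy as np
from scipy.spatial.distance import jensenshannon

def answer_distribution(answers, options=("A", "B", "C", "D", "E")):
    """Turn a list of multiple-choice answers into a probability vector."""
    counts = np.array([answers.count(o) for o in options], dtype=float)
    return counts / counts.sum()

# Hypothetical data: one SES group's answers and repeated LLM samples.
human_answers = ["A", "B", "A", "C", "A", "D", "A", "B"]
llm_answers = ["A", "A", "A", "B", "A", "A", "C", "A"]

p = answer_distribution(human_answers)
q = answer_distribution(llm_answers)
# base=2 bounds the distance in [0, 1]: 0 = identical, 1 = disjoint.
print(f"JS distance: {jensenshannon(p, q, base=2):.3f}")
```

Averaging such per-question distances over a whole test would yield one human-vs-model score per SES group, which is the kind of quantity the abstract's "distance between model and human answers" refers to.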
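The syntactic finding (shorter sentences, fewer thought units in LLM essays) can likewise be illustrated with simple surface statistics. The segmentation below is a crude proxy: the abstract does not define how thought units were measured, so splitting on clause-boundary punctuation is purely an assumption, as are the essay snippets.

```python
# Rough surface-syntax comparison: average sentence length and a crude
# proxy for "thought units" (clause-boundary punctuation), an assumption
# since the abstract does not specify the segmentation procedure.
import re

def sentence_stats(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    # Proxy: clauses delimited by commas/semicolons/colons inside sentences.
    units = [u for s in sentences for u in re.split(r"[,;:]", s) if u.strip()]
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "units_per_sentence": len(units) / max(len(sentences), 1),
    }

# Hypothetical essay snippets, purely for illustration.
human_essay = ("A educação transforma vidas, abre portas e constrói cidadania; "
               "sem ela, nenhuma sociedade prospera de forma justa.")
llm_essay = "A educação é importante. Ela melhora a sociedade. Todos devem estudar."

print("human:", sentence_stats(human_essay))
print("LLM:  ", sentence_stats(llm_essay))
```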