{"title":"What I learned interviewing a robot. Notes on the experimental application of the EIAF methodology using the ChatGPT artificial intelligence tool","authors":"Márcio Carneiro dos Santos","doi":"10.31009/hipertext.net.2023.i26.04","DOIUrl":null,"url":null,"abstract":"This paper describes the EIAF (Exploratory Interview for Assessing Functionalities) methodology and proposes how communication researchers could use it to study characteristics or analyze texts created by generative artificial intelligence algorithms. To do this, an in-depth interview in portuguese, was conducted with the ChatGPT conversation tool, developed by OPENAI. A set of specific categories were developed: self-identification, uses and limitations, validation tests, additional experimentation and metrics, transparency, structuring, precision, variety and updating, all of which were used in the experiment to test the proposed methodology. The experiment showed that despite being correct and well-structured, the code-produced texts needed human revision, essential especially for material that deals with facts and objective data.","PeriodicalId":53242,"journal":{"name":"Hipertextnet","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Hipertextnet","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.31009/hipertext.net.2023.i26.04","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper describes the EIAF (Exploratory Interview for Assessing Functionalities) methodology and proposes how communication researchers could use it to study the characteristics of, or analyze, texts created by generative artificial intelligence algorithms. To do this, an in-depth interview in Portuguese was conducted with the ChatGPT conversation tool, developed by OpenAI. A set of specific categories was developed: self-identification, uses and limitations, validation tests, additional experimentation and metrics, transparency, structuring, precision, variety and updating. All of these were used in the experiment to test the proposed methodology. The experiment showed that, although the algorithm-produced texts were correct and well structured, they still required human revision, which is especially essential for material dealing with facts and objective data.
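The abstract lists the EIAF categories but does not detail how the interview was operationalized; the study worked directly with the ChatGPT conversation tool in Portuguese. Purely as an illustrative sketch, a researcher wanting to script a comparable category-driven interview could pair each category with a probe question and query a chat model through the OpenAI API. The probe questions, model name, and helper function below are hypothetical assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch only: the study conducted the interview manually through the
# ChatGPT interface. The probe questions below are hypothetical placeholders, not
# the ones used in the paper. Requires the `openai` package and an OPENAI_API_KEY.
from openai import OpenAI

# EIAF categories as listed in the abstract, each paired with a hypothetical probe question.
EIAF_CATEGORIES = {
    "self-identification": "How do you describe yourself and how were you built?",
    "uses and limitations": "What tasks are you suited for, and where do you tend to fail?",
    "validation tests": "How can a user verify that your answers are correct?",
    "additional experimentation and metrics": "What measurable indicators describe your performance?",
    "transparency": "What can you disclose about your training data and operating rules?",
    "structuring": "How do you organize a long, multi-part answer?",
    "precision": "How exact are the facts and figures you provide?",
    "variety and updating": "How varied and how up to date is your knowledge?",
}

client = OpenAI()  # reads the API key from the environment


def run_eiaf_interview(model: str = "gpt-3.5-turbo") -> dict[str, str]:
    """Send one probe question per EIAF category and collect the model's answers."""
    answers = {}
    for category, question in EIAF_CATEGORIES.items():
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        answers[category] = response.choices[0].message.content
    return answers


if __name__ == "__main__":
    for category, answer in run_eiaf_interview().items():
        print(f"--- {category} ---\n{answer}\n")
```

In such a setup, the transcript produced per category would still require the human revision the paper emphasizes, particularly for answers involving facts and objective data.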