Using a self-attention architecture to automate valence categorization of French teenagers' free descriptions of their family relationships. A proof of concept.
M. Sedki, N. Vidal, P. Roux, C. Barry, M. Speranza, B. Falissard, E. Brunet-Gouet
{"title":"Using a self-attention architecture to automate valence categorization of French teenagers' free descriptions of their family relationships. A proof of concept.","authors":"M. Sedki, N. Vidal, P. Roux, C. Barry, M. Speranza, B. Falissard, E. Brunet-Gouet","doi":"10.1101/2023.01.16.23284557","DOIUrl":null,"url":null,"abstract":"This paper proposes a proof of concept of using natural language processing techniques to categorize valence of family relationships described in free texts written by french teenagers. The proposed study traces the evolution of techniques for word embedding. After decomposing the different texts in our possession into short texts composed of sentences and manual labeling, we tested different word embedding scenarios to train a multi-label classification model where a text can take several labels: labels describing the family link between the teenager and the person mentioned in the text and labels describing the teenager's relationship with them positive/negative/neutral valence). The natural baseline for word vector representation of our texts is to build a TF-IDF and train classical classifiers (Elasticnet logistic regression, gradient boosting, random forest, support vector classifier) after selecting a model by cross validation in each class of machine learning models. We then studied the strengths of word-vectors embeddings by an advanced language representation technique via the CamemBERT transformer model, and, again, used them with classical classifiers to compare their respective performances. The last scenario consisted in augmenting the CamemBERT with output dense layers (perceptron) representing a classifier adapted to the multi-label classification and fine-tuning the CamemBERT original layers. The optimal fine-tuning depth that achieves a bias-variance trade-off was obtained by a cross-validation procedure. The results of the comparison of the three scenarios on a test dataset show a clear improvement of the classification performances of the scenario with fine-tuning beyond the baseline and of a simple vectorization using CamemBERT without fine-tuning. Despite the moderate size of the dataset and the input texts, fine-tuning to an optimal depth remains the best solution to build a classifier.","PeriodicalId":73815,"journal":{"name":"Journal of medical artificial intelligence","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of medical artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2023.01.16.23284557","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper proposes a proof of concept of using natural language processing techniques to categorize the valence of family relationships described in free texts written by French teenagers. The proposed study traces the evolution of word embedding techniques. After decomposing the texts at our disposal into short, sentence-level texts and labeling them manually, we tested different word embedding scenarios to train a multi-label classification model, where a text can carry several labels: labels describing the family link between the teenager and the person mentioned in the text, and labels describing the teenager's relationship with that person (positive/negative/neutral valence). The natural baseline for a vector representation of our texts is to build a TF-IDF matrix and train classical classifiers (elastic-net logistic regression, gradient boosting, random forest, support vector classifier), selecting a model by cross-validation within each class of machine learning models. We then studied the strengths of contextual word embeddings produced by an advanced language representation technique, the CamemBERT transformer model, again pairing them with classical classifiers to compare their respective performances. The last scenario consisted of augmenting CamemBERT with dense output layers (a perceptron) acting as a classifier suited to multi-label classification, and fine-tuning CamemBERT's original layers. The optimal fine-tuning depth, which achieves a bias-variance trade-off, was selected by a cross-validation procedure. Comparing the three scenarios on a test dataset shows that the fine-tuned model clearly outperforms both the baseline and simple vectorization with CamemBERT without fine-tuning. Despite the moderate size of the dataset and of the input texts, fine-tuning to an optimal depth remains the best solution for building a classifier.
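To make the baseline scenario concrete, here is a minimal sketch of a TF-IDF pipeline with one of the classical classifiers named in the abstract (elastic-net logistic regression), wrapped for multi-label classification and tuned by cross-validation. The variables `texts` and `labels`, the hyperparameter grid, and the scoring metric are illustrative assumptions, not the authors' exact setup.

```python
# Assumed inputs: `texts` is a list of short French texts, and `labels`
# is a binary indicator matrix of shape (n_samples, n_labels), one
# column per family-link or valence label.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import GridSearchCV

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    # One binary elastic-net logistic regression per label
    ("clf", OneVsRestClassifier(
        LogisticRegression(penalty="elasticnet", solver="saga", max_iter=5000)
    )),
])

# Model selection by cross-validation within this class of classifiers
param_grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],
    "clf__estimator__C": [0.1, 1.0, 10.0],
    "clf__estimator__l1_ratio": [0.0, 0.5, 1.0],
}
search = GridSearchCV(pipeline, param_grid, cv=5, scoring="f1_macro")
search.fit(texts, labels)
```

The same pipeline skeleton would be reused with gradient boosting, random forest, or a support vector classifier in place of the logistic regression, each with its own grid.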
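The intermediate scenario replaces the TF-IDF matrix with fixed CamemBERT representations. The sketch below uses a frozen pretrained model as a sentence vectorizer; mean pooling over non-padding tokens is an assumption on our part, as the abstract does not specify the pooling strategy.

```python
# Frozen CamemBERT as a feature extractor (no fine-tuning).
import torch
from transformers import CamembertModel, CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertModel.from_pretrained("camembert-base").eval()

@torch.no_grad()
def embed(texts):
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state        # (batch, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1)     # zero out padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)    # mean-pooled sentence vectors

# X = embed(texts).numpy() can then stand in for the TF-IDF matrix in
# the classical-classifier pipeline sketched above.
```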
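Finally, a hedged sketch of the fine-tuning scenario: CamemBERT topped with a dense output layer producing one logit per label, with only the top `depth` encoder layers left trainable. `depth` is the hyperparameter the paper selects by cross-validation; the single-linear-layer head, the use of the `<s>` token representation, and the training details are assumptions for illustration.

```python
import torch
from transformers import CamembertModel

class MultiLabelCamembert(torch.nn.Module):
    def __init__(self, n_labels, depth):
        """depth >= 1: number of top encoder layers to fine-tune."""
        super().__init__()
        self.encoder = CamembertModel.from_pretrained("camembert-base")
        # Freeze everything, then unfreeze the top `depth` encoder layers
        for p in self.encoder.parameters():
            p.requires_grad = False
        for layer in self.encoder.encoder.layer[-depth:]:
            for p in layer.parameters():
                p.requires_grad = True
        self.head = torch.nn.Linear(self.encoder.config.hidden_size, n_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # <s> token representation
        return self.head(cls)               # one logit per label

# Multi-label training treats each label as an independent sigmoid:
# loss = torch.nn.BCEWithLogitsLoss()(model(ids, mask), targets.float())
```

Sweeping `depth` in an outer cross-validation loop is one way to realize the bias-variance trade-off the abstract describes: shallow fine-tuning underfits the task, while fine-tuning every layer risks overfitting a moderately sized dataset.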