Samuel Ackerman, Ella Rabinovich, Eitan Farchi, Ateret Anaby-Tavor
{"title":"衡量非对抗场景下大型语言模型鲁棒性的新标准","authors":"Samuel Ackerman, Ella Rabinovich, Eitan Farchi, Ateret Anaby-Tavor","doi":"arxiv-2408.01963","DOIUrl":null,"url":null,"abstract":"We evaluate the robustness of several large language models on multiple\ndatasets. Robustness here refers to the relative insensitivity of the model's\nanswers to meaning-preserving variants of their input. Benchmark datasets are\nconstructed by introducing naturally-occurring, non-malicious perturbations, or\nby generating semantically equivalent paraphrases of input questions or\nstatements. We further propose a novel metric for assessing a model robustness,\nand demonstrate its benefits in the non-adversarial scenario by empirical\nevaluation of several models on the created datasets.","PeriodicalId":501172,"journal":{"name":"arXiv - STAT - Applications","volume":"62 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Novel Metric for Measuring the Robustness of Large Language Models in Non-adversarial Scenarios\",\"authors\":\"Samuel Ackerman, Ella Rabinovich, Eitan Farchi, Ateret Anaby-Tavor\",\"doi\":\"arxiv-2408.01963\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We evaluate the robustness of several large language models on multiple\\ndatasets. Robustness here refers to the relative insensitivity of the model's\\nanswers to meaning-preserving variants of their input. Benchmark datasets are\\nconstructed by introducing naturally-occurring, non-malicious perturbations, or\\nby generating semantically equivalent paraphrases of input questions or\\nstatements. We further propose a novel metric for assessing a model robustness,\\nand demonstrate its benefits in the non-adversarial scenario by empirical\\nevaluation of several models on the created datasets.\",\"PeriodicalId\":501172,\"journal\":{\"name\":\"arXiv - STAT - Applications\",\"volume\":\"62 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - STAT - Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.01963\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.01963","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Novel Metric for Measuring the Robustness of Large Language Models in Non-adversarial Scenarios
We evaluate the robustness of several large language models on multiple datasets. Robustness here refers to the relative insensitivity of a model's answers to meaning-preserving variants of its input. Benchmark datasets are constructed by introducing naturally occurring, non-malicious perturbations, or by generating semantically equivalent paraphrases of input questions or statements. We further propose a novel metric for assessing a model's robustness, and demonstrate its benefits in the non-adversarial scenario through empirical evaluation of several models on the created datasets.
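The abstract does not define the proposed metric, but a common baseline for this kind of non-adversarial robustness is answer agreement across meaning-preserving variants of the same input. The sketch below is a hypothetical illustration under that assumption; the pairwise-agreement formulation and the names query_model and answers_match are illustrative stand-ins, not the authors' method.

# A minimal sketch of one plausible paraphrase-robustness score.
# The paper's actual metric is not given in the abstract, so this
# agreement-based formulation is an assumption for illustration only.
from itertools import combinations
from typing import Callable, List


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the LLM under evaluation."""
    raise NotImplementedError("replace with a real model call")


def answers_match(a: str, b: str) -> bool:
    """Crude equivalence check; a semantic comparison would be used in practice."""
    return a.strip().lower() == b.strip().lower()


def paraphrase_robustness(variants: List[str],
                          model: Callable[[str], str] = query_model) -> float:
    """Fraction of variant pairs whose answers agree (1.0 = fully robust).

    `variants` holds meaning-preserving rewrites of one question or statement,
    mirroring the perturbation/paraphrase benchmarks described above.
    """
    answers = [model(v) for v in variants]
    pairs = list(combinations(answers, 2))
    if not pairs:
        return 1.0
    return sum(answers_match(a, b) for a, b in pairs) / len(pairs)

A score of 1.0 would mean the model gives equivalent answers to every variant of the input; lower values indicate sensitivity to wording that, by the paper's definition, a robust model should ignore.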