Andrea Schimmenti, Valentina Pasqual, Francesca Tomasi, Fabio Vitali, Marieke van Erp
{"title":"利用 LLM 构建历史文献真实性评估结构","authors":"Andrea Schimmenti, Valentina Pasqual, Francesca Tomasi, Fabio Vitali, Marieke van Erp","doi":"arxiv-2407.09290","DOIUrl":null,"url":null,"abstract":"Given the wide use of forgery throughout history, scholars have and are\ncontinuously engaged in assessing the authenticity of historical documents.\nHowever, online catalogues merely offer descriptive metadata for these\ndocuments, relegating discussions about their authenticity to free-text\nformats, making it difficult to study these assessments at scale. This study\nexplores the generation of structured data about documents' authenticity\nassessment from natural language texts. Our pipeline exploits Large Language\nModels (LLMs) to select, extract and classify relevant claims about the topic\nwithout the need for training, and Semantic Web technologies to structure and\ntype-validate the LLM's results. The final output is a catalogue of documents\nwhose authenticity has been debated, along with scholars' opinions on their\nauthenticity. 
This process can serve as a valuable resource for integration\ninto catalogues, allowing room for more intricate queries and analyses on the\nevolution of these debates over centuries.","PeriodicalId":501285,"journal":{"name":"arXiv - CS - Digital Libraries","volume":"13 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Structuring Authenticity Assessments on Historical Documents using LLMs\",\"authors\":\"Andrea Schimmenti, Valentina Pasqual, Francesca Tomasi, Fabio Vitali, Marieke van Erp\",\"doi\":\"arxiv-2407.09290\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Given the wide use of forgery throughout history, scholars have and are\\ncontinuously engaged in assessing the authenticity of historical documents.\\nHowever, online catalogues merely offer descriptive metadata for these\\ndocuments, relegating discussions about their authenticity to free-text\\nformats, making it difficult to study these assessments at scale. This study\\nexplores the generation of structured data about documents' authenticity\\nassessment from natural language texts. Our pipeline exploits Large Language\\nModels (LLMs) to select, extract and classify relevant claims about the topic\\nwithout the need for training, and Semantic Web technologies to structure and\\ntype-validate the LLM's results. The final output is a catalogue of documents\\nwhose authenticity has been debated, along with scholars' opinions on their\\nauthenticity. 
This process can serve as a valuable resource for integration\\ninto catalogues, allowing room for more intricate queries and analyses on the\\nevolution of these debates over centuries.\",\"PeriodicalId\":501285,\"journal\":{\"name\":\"arXiv - CS - Digital Libraries\",\"volume\":\"13 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Digital Libraries\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2407.09290\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Digital Libraries","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.09290","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Structuring Authenticity Assessments on Historical Documents using LLMs
Given the wide use of forgery throughout history, scholars have been, and continue
to be, engaged in assessing the authenticity of historical documents.
However, online catalogues merely offer descriptive metadata for these
documents, relegating discussions about their authenticity to free-text
formats, making it difficult to study these assessments at scale. This study
explores the generation of structured data about documents' authenticity
assessment from natural language texts. Our pipeline exploits Large Language
Models (LLMs) to select, extract and classify relevant claims about the topic
without the need for training, and Semantic Web technologies to structure and
type-validate the LLM's results. The final output is a catalogue of documents
whose authenticity has been debated, along with scholars' opinions on their
authenticity. This process can serve as a valuable resource for integration
into catalogues, allowing room for more intricate queries and analyses of the
evolution of these debates over centuries.
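A minimal sketch of the kind of pipeline the abstract describes: an LLM step extracts a structured claim about a document's authenticity from free text, and a validation step type-checks the result against a small controlled vocabulary. This is not the authors' code; the function names, schema, and stance vocabulary are illustrative assumptions, and the LLM call is replaced by a stub returning a well-known historical example (Valla's critique of the Donation of Constantine).

```python
# Hypothetical sketch, not the paper's implementation.
# Step 1: an LLM extracts a structured authenticity claim from free text
#         (stubbed here; a real pipeline would prompt a model).
# Step 2: the structured output is type-validated before entering a catalogue.

ALLOWED_STANCES = {"authentic", "forgery", "disputed"}  # assumed vocabulary

def mock_llm_extract(passage: str) -> dict:
    """Stand-in for the LLM extraction step: returns a structured claim.

    A real pipeline would send `passage` to a model with an extraction
    prompt; here we return a fixed, historically grounded example.
    """
    return {
        "document": "Donation of Constantine",
        "scholar": "Lorenzo Valla",
        "stance": "forgery",
    }

def validate_claim(claim: dict) -> bool:
    """Type-validate an extracted claim: required string fields present,
    and the stance drawn from the controlled vocabulary."""
    required = {"document": str, "scholar": str, "stance": str}
    if not all(isinstance(claim.get(k), t) for k, t in required.items()):
        return False
    return claim["stance"] in ALLOWED_STANCES

claim = mock_llm_extract(
    "Valla argued that the Donation of Constantine is a forgery."
)
assert validate_claim(claim)
```

In the paper's actual pipeline the validation layer is built on Semantic Web technologies (e.g. typed RDF data rather than Python dicts); the dictionary check above only illustrates the general idea of rejecting LLM output that does not conform to a schema.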