A Platform for Argumentative Zoning Annotation and Scientific Summarization
Alaa El-Ebshihy, Annisa Maulida Ningtyas, Linda Andersson, Florina Piroi, A. Rauber
Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022-10-17
DOI: 10.1145/3511808.3557193
Citations: 1
Abstract
Argumentative Zoning (AZ) is a technique for obtaining informative summaries of scientific articles. Applying AZ presupposes a definition of the main rhetorical structures in scientific articles, which are then used to create the summary. The unavailability of large AZ-annotated benchmark datasets is a bottleneck for training AZ-based summarization algorithms. In this work, we present an annotation platform for an AZ scheme that defines four categories (zones): Claim, Method, Result and Conclusion, which are used to label sentences selected from scientific articles. The proposed tool can be used both to collect benchmark datasets and to help researchers create their own sub-corpora.
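The four-zone annotation scheme described above could be represented minimally as follows. This is a hypothetical sketch of the data model only, not the platform's actual schema; the names `Zone` and `AnnotatedSentence` are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Zone(Enum):
    """The four AZ categories named in the abstract (illustrative names)."""
    CLAIM = "Claim"
    METHOD = "Method"
    RESULT = "Result"
    CONCLUSION = "Conclusion"

@dataclass
class AnnotatedSentence:
    """One sentence selected from an article, labeled with a zone."""
    sentence: str
    zone: Zone

# A toy sub-corpus: sentences paired with their rhetorical zone labels.
corpus = [
    AnnotatedSentence("We propose a new annotation platform.", Zone.CLAIM),
    AnnotatedSentence("Sentences are selected from scientific articles.", Zone.METHOD),
]

# Zone-labeled sentences like these can then feed a summarization algorithm.
labels = [s.zone.value for s in corpus]
```

An extractive summary could then, for example, keep one sentence per zone.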