{"title":"探索异构特征表示在文档布局理解中的应用","authors":"Guosheng Feng, Danqing Huang, Chin-Yew Lin, Damjan Dakic, Milos Milunovic, Tamara Stankovic, Igor Ilic","doi":"10.1109/ICTAI56018.2022.00046","DOIUrl":null,"url":null,"abstract":"There are increasing interests in document layout representation learning and understanding. Transformer, with its great power, has become the mainstream model architecture and achieved promising results in this area. As elements in a document layout consist of multi-modal and multi-dimensional features such as position, size, and its text content, prior works represent each element by summing all feature embeddings into one unified vector in the input layer, which is then fed into the self-attention for element-wise interaction. However, this simple summation would potentially raise mixed correlations among heterogeneous features and bring noise to the representation learning. In this paper, we propose a novel two-step disentangled attention mechanism to allow more flexible feature interactions in the self-attention. Furthermore, inspired by the principles of document design (e.g., contrast, proximity), we propose an unsupervised learning objective to constrain the layout representations. We verify our approach on two layout understanding tasks, namely element role labeling and image captioning. Experiment results show that our approach achieves state-of-the-art performances. Moreover, we conduct extensive studies and observe better interpretability using our approach.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Exploring Heterogeneous Feature Representation for Document Layout Understanding\",\"authors\":\"Guosheng Feng, Danqing Huang, Chin-Yew Lin, Damjan Dakic, Milos Milunovic, Tamara Stankovic, Igor Ilic\",\"doi\":\"10.1109/ICTAI56018.2022.00046\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There are increasing interests in document layout representation learning and understanding. Transformer, with its great power, has become the mainstream model architecture and achieved promising results in this area. As elements in a document layout consist of multi-modal and multi-dimensional features such as position, size, and its text content, prior works represent each element by summing all feature embeddings into one unified vector in the input layer, which is then fed into the self-attention for element-wise interaction. However, this simple summation would potentially raise mixed correlations among heterogeneous features and bring noise to the representation learning. In this paper, we propose a novel two-step disentangled attention mechanism to allow more flexible feature interactions in the self-attention. Furthermore, inspired by the principles of document design (e.g., contrast, proximity), we propose an unsupervised learning objective to constrain the layout representations. We verify our approach on two layout understanding tasks, namely element role labeling and image captioning. Experiment results show that our approach achieves state-of-the-art performances. 
Moreover, we conduct extensive studies and observe better interpretability using our approach.\",\"PeriodicalId\":354314,\"journal\":{\"name\":\"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)\",\"volume\":\"84 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICTAI56018.2022.00046\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI56018.2022.00046","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Exploring Heterogeneous Feature Representation for Document Layout Understanding
There is increasing interest in document layout representation learning and understanding. The Transformer has become the mainstream model architecture in this area and has achieved promising results. Because elements in a document layout carry multi-modal, multi-dimensional features such as position, size, and text content, prior works represent each element by summing all feature embeddings into a single unified vector in the input layer, which is then fed into self-attention for element-wise interaction. However, this simple summation can mix correlations among heterogeneous features and introduce noise into representation learning. In this paper, we propose a novel two-step disentangled attention mechanism that allows more flexible feature interactions within self-attention. Furthermore, inspired by principles of document design (e.g., contrast, proximity), we propose an unsupervised learning objective that constrains the layout representations. We verify our approach on two layout understanding tasks, namely element role labeling and image captioning. Experimental results show that our approach achieves state-of-the-art performance. Moreover, we conduct extensive studies and observe better interpretability with our approach.
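The abstract does not spell out the mechanism, so the sketch below is only illustrative: it contrasts the baseline input layer it criticizes (summing all feature embeddings into one vector) with one plausible reading of a two-step disentangled attention, where attention scores are computed per feature group and then combined before element-wise interaction. All class names, dimensions, feature groupings, and the scoring scheme are assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SummedFeatureInput(nn.Module):
    """Baseline input layer: sum heterogeneous feature embeddings into
    one unified vector per element (the practice the paper questions)."""
    def __init__(self, d_model: int, grid: int = 64, text_dim: int = 768):
        super().__init__()
        self.pos_x = nn.Embedding(grid, d_model)   # quantized x-position
        self.pos_y = nn.Embedding(grid, d_model)   # quantized y-position
        self.width = nn.Embedding(grid, d_model)   # quantized width
        self.height = nn.Embedding(grid, d_model)  # quantized height
        self.text = nn.Linear(text_dim, d_model)   # pooled text encoding

    def forward(self, x, y, w, h, text_feat):
        # All modalities collapse into a single vector before attention.
        return (self.pos_x(x) + self.pos_y(y)
                + self.width(w) + self.height(h)
                + self.text(text_feat))

class DisentangledAttention(nn.Module):
    """Hypothetical two-step attention: (1) score each feature group with
    its own query/key projections so heterogeneous features do not mix in
    the scores; (2) apply the combined attention map to a fused value
    stream for element-wise interaction."""
    def __init__(self, d_model: int, num_groups: int = 3):
        super().__init__()
        self.num_groups = num_groups
        self.q = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(num_groups))
        self.k = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(num_groups))
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, feats):
        # feats: list of num_groups tensors, each (batch, seq, d_model),
        # e.g. [position_emb, size_emb, text_emb] kept separate.
        scores = sum(
            torch.matmul(self.q[g](feats[g]),
                         self.k[g](feats[g]).transpose(-2, -1))
            for g in range(self.num_groups)
        ) * self.scale
        attn = scores.softmax(dim=-1)
        # Step 2: element-wise interaction on the fused value stream.
        fused = sum(feats) / self.num_groups
        return torch.matmul(attn, self.v(fused))
```

Under this reading, the key difference from the baseline is that cross-modal terms such as (position-query x text-key) never appear in the attention scores, which is one way a "disentangled" design could avoid the mixed correlations the abstract describes; the actual mechanism in the paper may differ.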