{"title":"DH-CASE II: collaborative annotations in shared environments: metadata, tools and techniques in the digital humanities","authors":"P. Schmitz, L. Pearce, Quinn Dombrowski","doi":"10.1145/2644866.2644898","DOIUrl":"https://doi.org/10.1145/2644866.2644898","url":null,"abstract":"The DH-CASE II Workshop, held in conjunction with ACM Document Engineering 2014, focused on the tools and environments that support annotation, broadly defined, including modeling, authoring, analysis, publication and sharing. Participants explored shared challenges and differing approaches, seeking to identify emerging best practices, as well as those approaches that may have potential for wider application or influence.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"7 1","pages":"211-212"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85405133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new sentence similarity assessment measure based on a three-layer sentence representation","authors":"Rafael Ferreira, R. Lins, F. Freitas, S. Simske, M. Riss","doi":"10.1145/2644866.2644881","DOIUrl":"https://doi.org/10.1145/2644866.2644881","url":null,"abstract":"Sentence similarity is used to measure the degree of likelihood between sentences. It is used in many natural language applications, such as text summarization, information retrieval, text categorization, and machine translation. The current methods for assessing sentence similarity represent sentences as vectors of bag of words or the syntactic information of the words in the sentence. The degree of likelihood between phrases is calculated by composing the similarity between the words in the sentences. Two important concerns in the area, the meaning problem and the word order, are not handled, however. This paper proposes a new sentence similarity assessment measure that largely improves and refines a recently published method that takes into account the lexical, syntactic and semantic components of sentences. The new method proposed here was benchmarked using a publically available standard dataset. The results obtained show that the new similarity assessment measure proposed outperforms the state of the art systems and achieve results comparable to the evaluation made by humans.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"13 1","pages":"25-34"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77535971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Building digital project rooms for web meetings","authors":"Laurent Denoue, S. Carter, Andreas Girgensohn, Matthew L. Cooper","doi":"10.1145/2644866.2644889","DOIUrl":"https://doi.org/10.1145/2644866.2644889","url":null,"abstract":"Distributed teams must co-ordinate a variety of tasks. To do so they need to be able to create, share, and annotate documents as well as discuss plans and goals. Many workflow tools support document sharing, while other tools support videoconferencing. However, there exists little support for connecting the two. In this work, we describe a system that allows users to share and markup content during web meetings. This shared content can provide important conversational props within the context of a meeting; it can also help users review archived meetings. Users can also extract content from meetings directly into their personal notes or other workflow tools.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"54 1","pages":"135-138"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84978811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The evolving scholarly record: new uses and new forms","authors":"C. Lynch","doi":"10.1145/2644866.2644900","DOIUrl":"https://doi.org/10.1145/2644866.2644900","url":null,"abstract":"This presentation will take a very broad view of the emergence of literary corpora as objects of computation, with a particular focus on the various literatures and genres that form the scholarly record. The developments and implications here that I will explore include: the evolution of the scholarly literature into a semi-structured network of information used by both human readers and computational agents through the introduction of markup technologies; the interpenetration and interweaving of data and evidence with the literature; and the creation of an invisible infrastructure of names, taxonomies and ontologies, and the challenges this presents.\u0000 Primary forms of computation on this corpus include both comprehensive text mining and stream analysis (focused on what's new and what's changing as the base of literature and related factual databases expand with reports of new discoveries). I'll explore some of the developments in this area, including some practical considerations about platforms, licensing, and access.\u0000 As the use of the literature evolves, so do the individual genres that comprise it. Today's typical digital journal article looks almost identical to one half a century old, except that it is viewed on screen and printed on demand. Yet there is a great deal of activity driven by the move to data and computationally intensive scholarship, demands for greater precision and replicability in scientific communication, and related sources to move journal articles \"beyond the PDF,\" reconsidering relationships among traditional texts, software, workflows, data and the broad cultural record in its role as evidence. I'll look briefly at some of these developments, with particular focus on what this may mean for the management of the scholarly record as a whole, and also briefly discuss some parallel challenges emerging in scholarly monographs.\u0000 Finally, I will close with a very brief discussion of what might be called corpus-scale thinking with regard to the scholarly record at the disciplinary level. I'll briefly discuss the findings of a 2014 National Research Council study that I co-chaired dealing with the future of the mathematics literature and the possibility of creating a global digital mathematics library, as well as offering some comments on developments in the life sciences. I will also consider the emergence of new corpus-wide tools and standards, such as Web-scale annotation, and some of their implications.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"52 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84914236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An ensemble approach for text document clustering using Wikipedia concepts","authors":"Seyednaser Nourashrafeddin, E. Milios, D. Arnold","doi":"10.1145/2644866.2644868","DOIUrl":"https://doi.org/10.1145/2644866.2644868","url":null,"abstract":"Most text clustering algorithms represent a corpus as a document-term matrix in the bag of words model. The feature values are computed based on term frequencies in documents and no semantic relatedness between terms is considered. Therefore, two semantically similar documents may sit in different clusters if they do not share any terms. One solution to this problem is to enrich the document representation using an external resource like Wikipedia. We propose a new way to integrate Wikipedia concepts in partitional text document clustering in this work. A text corpus is first represented as a document-term matrix and a document-concept matrix. Terms that exist in the corpus are then clustered based on the document-term representation. Given the term clusters, we propose two methods, one based on the document-term representation and the other one based on the document-concept representation, to find two sets of seed documents. The two sets are then used in our text clustering algorithm in an ensemble approach to cluster documents. The experimental results show that even though the document-concept representations do not result in good document clusters per se, integrating them in our ensemble approach improves the quality of document clusters significantly.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"32 1","pages":"107-116"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73149588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FlexiFont: a flexible system to generate personal font libraries","authors":"W. Pan, Z. Lian, Rongju Sun, Yingmin Tang, Jianguo Xiao","doi":"10.1145/2644866.2644886","DOIUrl":"https://doi.org/10.1145/2644866.2644886","url":null,"abstract":"This paper proposes FlexiFont, a system designed to generate personal font libraries from the camera-captured character images. Compared with existing methods, our system is able to process most kinds of languages and the generated font libraries can be extended by adding new characters based on the user's requirement. Moreover, digital cameras instead of scanners are chosen as the input devices, so that it is more convenient for common people to use the system. First of all, the users should choose a default template or define their own templates, then write the characters on the printed templates according to the certain instructions. After the users upload the photos of the templates with written characters, the system will automatically correct the perspective and split the whole photo into a set of individual character images. As the final step, FlexiFont will denoise, vectorize, and normalize each character image before storing it into a TrueType file. Experimental results demonstrate the robustness and efficiency of our system.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"1 1","pages":"17-20"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74473544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"P-GTM: privacy-preserving google tri-gram method for semantic text similarity","authors":"O. Davison, A. Mohammad, E. Milios","doi":"10.1145/2644866.2644882","DOIUrl":"https://doi.org/10.1145/2644866.2644882","url":null,"abstract":"This paper presents P-GTM, a privacy-preserving text similarity algorithm that extends the Google Tri-gram Method (GTM). The Google Tri-gram Method is a high-performance unsupervised semantic text similarity method based on the use of context from the Google Web 1T n-gram dataset. P-GTM computes the semantic similarity between two input bag-of-words documents on public cloud hardware, without disclosing the documents' contents. Like the GTM, P-GTM requires the uni-gram and tri-gram lists from the Google Web 1T n-gram dataset as additional inputs. The need for these additional lists makes private computation of GTM text similarities a challenging problem. P-GTM uses a combination of pre-computation, encryption, and randomized preprocessing to enable private computation of text similarities using the GTM. We discuss the security of the algorithm and quantify its privacy using standard and real life corpora.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"33 1","pages":"81-84"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76241857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DOCENG 2014: PDF tutorial","authors":"S. Bagley, Matthew R. B. Hardy","doi":"10.1145/2644866.2644899","DOIUrl":"https://doi.org/10.1145/2644866.2644899","url":null,"abstract":"Many billions of documents are stored in the Portable Document Format (PDF). These documents contain a wealth of information and yet PDF is often seen as an inaccessible format and, for that reason, often gets a very bad press. In this tutorial, we get under the hood of PDF and analyze the poor practices that cause PDF files to be inaccessible. We discuss how to access the text and graphics within a PDF and we identify those features of PDF that can be used to make the information much more accessible. We also discuss some of the new ISO standards that provide profiles for producing Accessible PDF files.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"1 1","pages":"213-214"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78454388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ARCTIC: metadata extraction from scientific papers in pdf using two-layer CRF","authors":"Alan Souza, V. Moreira, C. Heuser","doi":"10.1145/2644866.2644872","DOIUrl":"https://doi.org/10.1145/2644866.2644872","url":null,"abstract":"Most scientific articles are available in PDF format. The PDF standard allows the generation of metadata that is included within the document. However, many authors do not define this information, making this feature unreliable or incomplete. This fact has been motivating research which aims to extract metadata automatically. Automatic metadata extraction has been identified as one of the most challenging tasks in document engineering. This work proposes Artic, a method for metadata extraction from scientific papers which employs a two-layer probabilistic framework based on Conditional Random Fields. The first layer aims at identifying the main sections with metadata information, and the second layer finds, for each section, the corresponding metadata. Given a PDF file containing a scientific paper, Artic extracts the title, author names, emails, affiliations, and venue information. We report on experiments using 100 real papers from a variety of publishers. Our results outperformed the state-of-the-art system used as the baseline, achieving a precision of over 99%.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"90 1","pages":"121-130"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83278093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transforming graph-based sentence representations to alleviate overfitting in relation extraction","authors":"Rinaldo Lima, Jamilson Batista, Rafael Ferreira, F. Freitas, R. Lins, S. Simske, M. Riss","doi":"10.1145/2644866.2644875","DOIUrl":"https://doi.org/10.1145/2644866.2644875","url":null,"abstract":"Relation extraction (RE) aims at finding the way entities, such as person, location, organization, date, etc., depend upon each other in a text document. Ontology Population, Automatic Summarization, and Question Answering are fields in which relation extraction offers valuable solutions. A relation extraction method based on inductive logic programming that induces extraction rules suitable to identify semantic relations between entities was proposed by the authors in a previous work. This paper proposes a method to simplify graph-based representations of sentences that replaces dependency graphs of sentences by simpler ones, keeping the target entities in it. The goal is to speed up the learning phase in a RE framework, by applying several rules for graph simplification that constrain the hypothesis space for generating extraction rules. Moreover, the direct impact on the extraction performance results is also investigated. The proposed techniques outperformed some other state-of-the-art systems when assessed on two standard datasets for relation extraction in the biomedical domain.","PeriodicalId":91385,"journal":{"name":"Proceedings of the ACM Symposium on Document Engineering. ACM Symposium on Document Engineering","volume":"46 1","pages":"53-62"},"PeriodicalIF":0.0,"publicationDate":"2014-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79106941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}