{"title":"The present conditions, problems and future direction of the server-controlled clinical pathway system development in psychiatric hospitals","authors":"Mai Date, T. Tanioka, Yuko Yasuhara, Kazuyuki Matsumoto, Yukie Iwasa, Chiemi Kawanishi, Eri Hirai, Fuji Ren","doi":"10.1109/NLPKE.2010.5587812","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587812","url":null,"abstract":"Clinical pathways used today are paper-based, although many different kinds of clinical pathway are used in clinical practice. However, in the development of software for clinical pathway, it is difficult to achieve cooperation between medical experts, who are not used to expressing their ideas and work in words, and system-developers whose medical knowledge is limited. As a consequence, the current situation is that medical practitioners make their own software and use it in their practice. While this may work, it is less than ideal because refinements that may make the software most effective through engineering expertise is not used. Thus, in our research team, the nurse researchers and the engineering researchers cooperated and developed a clinical pathway system. In this paper, the present conditions, problems, and future direction of the server-controlled CP system development in the psychiatric hospitals, is discussed from viewpoint of nursing as a user.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123925355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Obtaining chinese semantic knowledge from online encyclopedia","authors":"Liu Yang, Tingting He, Xinhui Tu, Jinguang Chen","doi":"10.1109/NLPKE.2010.5587787","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587787","url":null,"abstract":"This paper proposes a method to obtain the semantic knowledge from an online encyclopedia called Hudong encyclopedia 2(hudong baike). We obtain concepts and then their semantic related concepts and compute the semantic relatedness by utilizing inner hyperlinks and the open category information in Hudong encyclopedia. By comparing our results with human judgments, we show that our relatedness computing method is quite effective.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115208504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wisdom media “CAIWA Channel” based on natural language interface agent","authors":"Takuo Henmi, Shengyang Huang, F. Ren","doi":"10.1109/NLPKE.2010.5587862","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587862","url":null,"abstract":"Based on the breakthroughs in natural language interface agent “CAIWA,” web content distribution platform “CAIWA Channel” has been built for both PC and smatphone clients. The platform is regarded as wisdom media, since it seeks for providing information precisely what users demand, in addition to removing barriers of user interface bottlenecks. Not only recommendations of information to the user based on the past behavior, interest and profile, but also the CAIWA Channel has built-in evolution mechanism by proactively collecting information about the user in a natural manner via conversation with the user. It can accumulate knowledge for better meeting the user's personal requirements, while web contents are organized according to the user's interest. CAIWA Channel is not a mere information search system but a knowledge query system and a human-touch system incorporating emotional reactions.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114527305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic role labeling for Bengali using 5Ws","authors":"Amitava Das, Aniruddha Ghosh, Sivaji Bandyopadhyay","doi":"10.1109/NLPKE.2010.5587772","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587772","url":null,"abstract":"In this paper we present different methodologies to extract semantic role labels of Bengali nouns using 5W distilling. The 5W task seeks to extract the semantic information of nouns in a natural language sentence by distilling it into the answers to the 5W questions: Who, What, When, Where and Why. As Bengali is a resource constraint language, the building of annotated gold standard corpus and acquisition of linguistics tools for features extraction are described in this paper. The tag label wise reported precision values of the present system are: 79.56% (Who), 65.45% (What), 73.35% (When), 77.66% (Where) and 63.50% (Why).","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114588655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Use relative weight to improve the kNN for unbalanced text category","authors":"Xiaodong Liu, F. Ren, Caixia Yuan","doi":"10.1109/NLPKE.2010.5587799","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587799","url":null,"abstract":"The technology of text category is widely used in natural language processing. As one of best text category algorithms, kNN is very popular used in many applications. Traditional kNN assumes that the distribution of training data is even, however, it is not the case for many situations. When we used kNN in our Topic Detection and Tracking (TDT) system, it did not perform well due to the bias of training data set. To overcome the obstacle caused by data bias, this paper proposes an approach which uses relative weight to adjust the weight of kNN (RWKNN). When evaluated on the data of TDT2 and TDT3 Chinese corpus, RWKNN proves to be robust on unbalanced data and yields better performance than the traditional kNN.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128983817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A hybrid-strategy method combining semantic analysis with rule-based MT for patent machine translation","authors":"Yaohong Jin","doi":"10.1109/NLPKE.2010.5587763","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587763","url":null,"abstract":"This paper presents a hybrid method combining semantic analysis with rule-based MT for patent machine translation. Based on the theory of Hierarchical Network of Concepts, the semantic analysis used the lv principle to deal with the ambiguity of multiple verbs and the boundary of long NP. The determination of main verb can help to select the right syntax tree, and the boundary detection of long NP can help to schedule the process of syntax. From the result of the experiments, we can see that this hybrid-strategy method can effectively improve the performance of Chinese-English patent machine translation.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116768375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extraction of purpose data using surface text patterns","authors":"P. K. Mayee, R. Sangal, Soma Paul","doi":"10.1109/NLPKE.2010.5587860","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587860","url":null,"abstract":"This paper presents the concept of surface text patterns for extracting purpose data from the web. In order to obtain an optimal set of patterns, we have developed a method for learning purpose patterns automatically. A corpus was downloaded from the Internet using bootstrapping by providing a few hand-crafted examples of each purpose pattern to a generic search engine. This corpus was then tagged and patterns were extracted from the returned documents by automated means and standardized. The precision of each pattern and the average precision for each group were computed. The extracted patterns were then used to extract purpose data. The results for extraction from the web have been reported.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123754103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection and correction of real-word spelling errors in Persian language","authors":"Heshaam Faili","doi":"10.1109/NLPKE.2010.5587806","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587806","url":null,"abstract":"Several statistical methods have already been proposed to detect and correct the real-word errors of a context. However, to the best of our knowledge, none of them has been applied on Persian language yet. In this paper, a statistical method based on mutual information of Persian words to deal with context sensitive spelling errors is presented. Different experiments show the accuracy of correction method on a test data which only contains one real-word error in each sentence to be about 80.5% and 87% with respect to precision and recall metrics.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127274967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An error driven method to improve rules for the recognition of Chinese modality “LE”","authors":"Yihui Zhou, Hongying Zan, Lingling Mu, Yingcheng Yuan","doi":"10.1109/NLPKE.2010.5587825","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587825","url":null,"abstract":"We have a “Trinity” way for the recognition of Chinese modality “LE”, in which dictionary, usage rule base and usage corpora combine as the knowledge base. Handcrafted rules can hardly cover all usages in the real texts. So this paper proposes an error driven method for the automatic rules improvement. Experimental results show that, after the automatic rules improvement, the recognition precision of the modality “LE” improves by over 1.85%.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"269 10-13","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132879809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An unsupervised approach to preposition error correction","authors":"Aminul Islam, D. Inkpen","doi":"10.1109/NLPKE.2010.5587782","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587782","url":null,"abstract":"In this work, an unsupervised statistical method for automatic correction of preposition errors using the Google n-gram data set is presented and compared to the state-of-the-art. We use the Google n-gram data set in a back-off fashion that increases the performance of the method. The method works automatically, does not require any human-annotated knowledge resources (e.g., ontologies) and can be applied to English language texts, including non-native (L2) ones in which preposition errors are known to be numerous. The method can be applied to other languages for which Google n-grams are available.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129374271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}