Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering (NLPKE-2010): Latest Publications

The present conditions, problems and future direction of the server-controlled clinical pathway system development in psychiatric hospitals
Mai Date, T. Tanioka, Yuko Yasuhara, Kazuyuki Matsumoto, Yukie Iwasa, Chiemi Kawanishi, Eri Hirai, Fuji Ren
DOI: 10.1109/NLPKE.2010.5587812 (2010-09-30)
Abstract: Although many different kinds of clinical pathway are used in clinical practice, the clinical pathways used today are paper-based. In developing software for clinical pathways, it is difficult to achieve cooperation between medical experts, who are not used to expressing their ideas and work in words, and system developers, whose medical knowledge is limited. As a consequence, medical practitioners currently write their own software and use it in their practice. While this may work, it is less than ideal, because the refinements that engineering expertise could bring to make the software most effective are not applied. In our research team, nurse researchers and engineering researchers therefore cooperated to develop a clinical pathway system. In this paper, the present conditions, problems, and future direction of server-controlled CP system development in psychiatric hospitals are discussed from the viewpoint of nursing as a user.
Citations: 0

Obtaining Chinese semantic knowledge from online encyclopedia
Liu Yang, Tingting He, Xinhui Tu, Jinguang Chen
DOI: 10.1109/NLPKE.2010.5587787 (2010-09-30)
Abstract: This paper proposes a method to obtain semantic knowledge from the online encyclopedia Hudong encyclopedia (hudong baike). We obtain concepts and their semantically related concepts, and compute semantic relatedness by exploiting inner hyperlinks and the open category information in Hudong encyclopedia. By comparing our results with human judgments, we show that our relatedness computation method is quite effective.
Citations: 1

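The abstract does not give the exact relatedness formula, so the following is only a minimal sketch of the general idea: scoring two encyclopedia concepts by the overlap of their hyperlink sets and their open categories. The concepts, link sets, category sets, and the combination weight alpha are all invented for the example.

```python
# Sketch: hyperlink/category-based relatedness between two encyclopedia
# concepts. All data below is invented; the paper's actual formula and
# weighting are not specified in the abstract.

def jaccard(a: set, b: set) -> float:
    """Set overlap in [0, 1]; 0.0 when both sets are empty."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def relatedness(links_a, links_b, cats_a, cats_b, alpha=0.7):
    """Mix hyperlink overlap and open-category overlap (alpha is assumed)."""
    return alpha * jaccard(links_a, links_b) + (1 - alpha) * jaccard(cats_a, cats_b)

# Hypothetical concepts with their hyperlink and open-category sets.
links = {
    "银行": {"金融", "货币", "存款", "贷款"},
    "证券": {"金融", "货币", "股票", "交易所"},
}
cats = {
    "银行": {"经济", "金融机构"},
    "证券": {"经济", "金融市场"},
}

print(relatedness(links["银行"], links["证券"], cats["银行"], cats["证券"]))
```
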
Wisdom media “CAIWA Channel” based on natural language interface agent
Takuo Henmi, Shengyang Huang, F. Ren
DOI: 10.1109/NLPKE.2010.5587862 (2010-09-30)
Abstract: Building on breakthroughs in the natural language interface agent “CAIWA,” the web content distribution platform “CAIWA Channel” has been built for both PC and smartphone clients. The platform is regarded as wisdom media, since it seeks to provide precisely the information users demand while removing user-interface bottlenecks. Beyond recommending information based on the user's past behavior, interests and profile, CAIWA Channel has a built-in evolution mechanism that proactively collects information about the user in a natural manner through conversation. It can thus accumulate knowledge to better meet the user's personal requirements, while web content is organized according to the user's interests. CAIWA Channel is not a mere information search system but a knowledge query system and a human-touch system incorporating emotional reactions.
Citations: 1

Semantic role labeling for Bengali using 5Ws
Amitava Das, Aniruddha Ghosh, Sivaji Bandyopadhyay
DOI: 10.1109/NLPKE.2010.5587772 (2010-09-30)
Abstract: In this paper we present different methodologies to extract semantic role labels of Bengali nouns using 5W distilling. The 5W task seeks to extract the semantic information of nouns in a natural language sentence by distilling it into the answers to the 5W questions: Who, What, When, Where and Why. As Bengali is a resource-constrained language, the building of an annotated gold-standard corpus and the acquisition of linguistic tools for feature extraction are also described. The label-wise precision values reported for the present system are 79.56% (Who), 65.45% (What), 73.35% (When), 77.66% (Where) and 63.50% (Why).
Citations: 7

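The abstract reports precision separately for each of the five labels; the sketch below shows the usual way such label-wise precision is computed from gold and predicted annotations. The toy gold/predicted labels are invented and are not from the paper's corpus.

```python
from collections import Counter

# Toy gold and predicted 5W labels for a handful of nouns (invented data).
gold      = ["Who", "What", "When", "Who", "Where", "Why", "What"]
predicted = ["Who", "What", "When", "What", "Where", "Where", "What"]

correct = Counter()   # correct predictions per label
emitted = Counter()   # total predictions emitted per label

for g, p in zip(gold, predicted):
    emitted[p] += 1
    if g == p:
        correct[p] += 1

for label in ["Who", "What", "When", "Where", "Why"]:
    if emitted[label]:
        print(f"{label}: precision = {correct[label] / emitted[label]:.2%}")
    else:
        print(f"{label}: no predictions emitted")
```
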
Use relative weight to improve the kNN for unbalanced text category
Xiaodong Liu, F. Ren, Caixia Yuan
DOI: 10.1109/NLPKE.2010.5587799 (2010-09-30)
Abstract: Text categorization is widely used in natural language processing, and kNN, as one of the best text categorization algorithms, is popular in many applications. Traditional kNN assumes that the distribution of the training data is even; however, this is not the case in many situations. When we used kNN in our Topic Detection and Tracking (TDT) system, it did not perform well because of bias in the training data set. To overcome the obstacle caused by data bias, this paper proposes an approach that uses a relative weight to adjust the weighting of kNN (RWKNN). When evaluated on the TDT2 and TDT3 Chinese corpora, RWKNN proves to be robust on unbalanced data and yields better performance than traditional kNN.
Citations: 4

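The abstract does not spell out the relative-weight formula, so the sketch below is only one plausible reading of the idea: each class's kNN vote is discounted by that class's share of the training set, so that over-represented classes do not dominate on unbalanced data. The toy vectors, labels and the discounting scheme are assumptions for the example.

```python
import numpy as np
from collections import Counter

def rw_knn_predict(X_train, y_train, x, k=3):
    """kNN vote where each class's score is discounted by its relative
    frequency in the training set (one plausible 'relative weight')."""
    # Cosine similarity between the query and every training vector.
    sims = X_train @ x / (np.linalg.norm(X_train, axis=1) * np.linalg.norm(x) + 1e-12)
    top = np.argsort(-sims)[:k]

    class_freq = Counter(y_train)          # class sizes: the source of imbalance
    scores = Counter()
    for i in top:
        label = y_train[i]
        # Similarity vote, divided by the class's share of the training data.
        scores[label] += sims[i] / (class_freq[label] / len(y_train))
    return scores.most_common(1)[0][0]

# Toy, deliberately unbalanced data: class "A" has many more examples than "B".
X_train = np.array([[1.0, 0.1], [0.9, 0.2], [0.8, 0.1], [0.95, 0.15],
                    [0.1, 1.0], [0.2, 0.9]])
y_train = ["A", "A", "A", "A", "B", "B"]
print(rw_knn_predict(X_train, y_train, np.array([0.3, 0.8]), k=3))
```
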
A hybrid-strategy method combining semantic analysis with rule-based MT for patent machine translation
Yaohong Jin
DOI: 10.1109/NLPKE.2010.5587763 (2010-09-30)
Abstract: This paper presents a hybrid method combining semantic analysis with rule-based MT for patent machine translation. Based on the theory of Hierarchical Network of Concepts, the semantic analysis uses the lv principle to resolve the ambiguity of multiple verbs and the boundaries of long NPs. Determining the main verb helps to select the right syntax tree, and detecting the boundaries of long NPs helps to schedule the syntactic processing. The experimental results show that this hybrid-strategy method can effectively improve the performance of Chinese-English patent machine translation.
Citations: 22

Extraction of purpose data using surface text patterns
P. K. Mayee, R. Sangal, Soma Paul
DOI: 10.1109/NLPKE.2010.5587860 (2010-09-30)
Abstract: This paper presents the concept of surface text patterns for extracting purpose data from the web. In order to obtain an optimal set of patterns, we have developed a method for learning purpose patterns automatically. A corpus was downloaded from the Internet using bootstrapping, by providing a few hand-crafted examples of each purpose pattern to a generic search engine. This corpus was then tagged, and patterns were extracted from the returned documents by automated means and standardized. The precision of each pattern and the average precision for each group were computed. The extracted patterns were then used to extract purpose data, and the results of extraction from the web are reported.
Citations: 0

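The learned patterns themselves are not listed in the abstract, so the sketch below only illustrates the mechanism with a few hand-crafted surface patterns ("in order to ...", "for the purpose of ...", "so as to ...") applied to raw sentences, plus the per-pattern precision bookkeeping the abstract mentions. The patterns, sentences and counts are invented for the example.

```python
import re

# A few hand-crafted surface patterns for purpose phrases (illustrative only;
# the paper learns and standardizes its patterns automatically).
PURPOSE_PATTERNS = [
    re.compile(r"\bin order to ([^,.;]+)", re.IGNORECASE),
    re.compile(r"\bfor the purpose of ([^,.;]+)", re.IGNORECASE),
    re.compile(r"\bso as to ([^,.;]+)", re.IGNORECASE),
]

def extract_purposes(sentence: str):
    """Return every purpose phrase matched by any surface pattern."""
    hits = []
    for pattern in PURPOSE_PATTERNS:
        hits.extend(m.group(1).strip() for m in pattern.finditer(sentence))
    return hits

sentences = [
    "The valve is closed in order to prevent leakage.",
    "A filter is added for the purpose of removing noise from the signal.",
    "The cache was flushed so as to free memory.",
]
for s in sentences:
    print(s, "->", extract_purposes(s))

# Pattern precision = judged-correct extractions / total extractions (toy numbers).
correct, total = 18, 20
print(f"pattern precision: {correct / total:.0%}")
```
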
Detection and correction of real-word spelling errors in Persian language
Heshaam Faili
DOI: 10.1109/NLPKE.2010.5587806 (2010-09-30)
Abstract: Several statistical methods have already been proposed to detect and correct real-word errors in context; however, to the best of our knowledge, none of them has been applied to the Persian language yet. In this paper, a statistical method based on the mutual information of Persian words is presented to deal with context-sensitive spelling errors. Experiments show that, on test data containing exactly one real-word error per sentence, the correction method achieves about 80.5% precision and 87% recall.
Citations: 14

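The abstract only states that the method relies on the mutual information of words; as a rough illustration of that idea (the paper's exact scoring and its Persian resources are not given), the sketch below computes pointwise mutual information from toy co-occurrence counts and replaces a word when some candidate from a confusion set fits the context better. The counts, the confusion set, and the use of English stand-in words are all invented for the example.

```python
import math

# Toy unigram and pair co-occurrence counts (invented; a real system would
# estimate these from a large Persian corpus).
N = 1_000_000                      # total number of word tokens observed
unigram = {"bank": 500, "river": 800, "money": 1200, "bang": 50}
pair = {("river", "bank"): 120, ("river", "bang"): 1,
        ("money", "bank"): 300, ("money", "bang"): 1}

def pmi(w1, w2):
    """Pointwise mutual information of a word pair under the toy counts."""
    p_w1 = unigram[w1] / N
    p_w2 = unigram[w2] / N
    p_pair = pair.get((w1, w2), 0.5) / N     # smoothed count for unseen pairs
    return math.log2(p_pair / (p_w1 * p_w2))

def correct_word(context_word, target, confusion_set):
    """Replace `target` if some confusable word fits the context better."""
    return max(confusion_set | {target}, key=lambda w: pmi(context_word, w))

# "bang" is a plausible real-word typo for "bank" next to "river".
print(correct_word("river", "bang", {"bank"}))   # -> "bank"
```
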
An error driven method to improve rules for the recognition of Chinese modality “LE”
Yihui Zhou, Hongying Zan, Lingling Mu, Yingcheng Yuan
DOI: 10.1109/NLPKE.2010.5587825 (2010-09-30)
Abstract: We take a “Trinity” approach to recognizing the Chinese modality “LE”, in which a dictionary, a usage rule base and usage corpora are combined as the knowledge base. Since handcrafted rules can hardly cover all usages found in real texts, this paper proposes an error-driven method for automatic rule improvement. Experimental results show that, after the automatic rule improvement, the recognition precision for the modality “LE” improves by over 1.85%.
Citations: 1

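The paper's rule format and refinement procedure are not described in the abstract, so the following is only a generic, transformation-based-learning-style sketch of error-driven rule improvement: apply the current rules, look at the instances they get wrong, and add a candidate rule only if it reduces the total error count. The context features, usage labels and corpus items are invented for the example.

```python
# Minimal sketch of error-driven rule refinement (not the paper's exact
# procedure). Each corpus item pairs a context feature of a "LE" occurrence
# with its gold usage label; rules map a feature to a label. Data is invented.

corpus = [  # (context_feature, gold_label)
    ("verb+LE+object", "aspect"),
    ("verb+LE+object", "aspect"),
    ("sentence_final_LE", "modal"),
    ("sentence_final_LE", "modal"),
    ("adjective+LE", "modal"),
]

rules = {"verb+LE+object": "aspect"}   # initial handcrafted rule base
DEFAULT = "aspect"                     # fallback label when no rule fires

def predict(feature):
    return rules.get(feature, DEFAULT)

def errors():
    """Instances the current rule base gets wrong."""
    return [(f, g) for f, g in corpus if predict(f) != g]

# Error-driven improvement: for each feature appearing in an error, try adding
# a rule that assigns the gold label seen there; keep it only if the total
# number of errors goes down.
for feature, gold in list(errors()):
    before = len(errors())
    candidate = dict(rules)
    candidate[feature] = gold
    backup, rules = rules, candidate
    if len(errors()) >= before:        # the candidate rule did not help: revert
        rules = backup

print(rules)          # refined rule set
print(len(errors()))  # remaining errors after refinement
```
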
An unsupervised approach to preposition error correction
Aminul Islam, D. Inkpen
DOI: 10.1109/NLPKE.2010.5587782 (2010-09-30)
Abstract: In this work, an unsupervised statistical method for the automatic correction of preposition errors using the Google n-gram data set is presented and compared to the state of the art. We use the Google n-gram data set in a back-off fashion, which increases the performance of the method. The method works automatically, does not require any human-annotated knowledge resources (e.g., ontologies), and can be applied to English-language texts, including non-native (L2) ones in which preposition errors are known to be numerous. The method can also be applied to other languages for which Google n-grams are available.
Citations: 8

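The abstract describes choosing prepositions with Google n-gram counts used in a back-off fashion; the sketch below illustrates that mechanism by scoring each candidate preposition with the count of the longest n-gram around it and backing off to shorter n-grams when the longer one is unseen. The tiny counts table, the candidate list and the back-off discount are invented (a real system would query the actual Google n-gram data set).

```python
# Minimal sketch of n-gram back-off scoring for preposition choice.
# Counts below are invented for illustration, as is the discount of 0.1.

counts = {
    ("depends", "on", "the", "weather"): 900,
    ("depends", "in", "the", "weather"): 2,
    ("depends", "on", "the"): 5000,
    ("depends", "in", "the"): 40,
    ("depends", "on"): 20000,
    ("depends", "in"): 300,
}

def backoff_score(tokens, discount=0.1):
    """Count of the longest seen n-gram, discounted once per back-off step."""
    weight = 1.0
    while tokens:
        if tuple(tokens) in counts:
            return weight * counts[tuple(tokens)]
        tokens = tokens[:-1]          # drop the rightmost word and back off
        weight *= discount
    return 0.0

def best_preposition(left, right, candidates=("on", "in", "at", "for")):
    """Pick the candidate whose surrounding n-gram scores highest."""
    return max(candidates, key=lambda p: backoff_score([left, p, *right]))

# "It depends in the weather." -> suggest "on"
print(best_preposition("depends", ["the", "weather"]))
```
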