{"title":"DNS meets DHT: treating massive ID resolution using DNS over DHT","authors":"Yusuke Doi","doi":"10.1109/SAINT.2005.22","DOIUrl":"https://doi.org/10.1109/SAINT.2005.22","url":null,"abstract":"Object identifiers such as electronic product codes are likely to have static parts with few layers and a serial number layer. The structure does not match well-known name systems such as DNS (domain name system) and DHT (distributed hash table). To utilize object identifiers in applications such as product traceability systems, a name system to support the structure is required. The author proposes a name system that combines DHT and DNS, and describes how to eliminate bottlenecks between the two name systems. Using the distributed nature of DHT, cost-consuming processes such as protocol translation are distributed. A set of gateways that executes DNS name delegation dynamically is used to bind between a client side DNS resolver and translators running on DHT nodes. The author also estimates required traffic bandwidth on a gateway server. Only 3.5 Mbps on a gateway sender is required to support loads as heavy as of root name server.","PeriodicalId":169669,"journal":{"name":"The 2005 Symposium on Applications and the Internet","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121829395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fault-tolerant routing for P2P systems with unstructured topology","authors":"L. Mariani","doi":"10.1109/SAINT.2005.30","DOIUrl":"https://doi.org/10.1109/SAINT.2005.30","url":null,"abstract":"New application scenarios, such as Internet-scale computations, nomadic networks and mobile systems, require decentralized, scalable and open infrastructures. The peer-to-peer (P2P) paradigm has been recently proposed to address the construction of completely decentralized systems for the above mentioned environments, but P2P systems frequently lack of dependability. In this paper, we propose an algorithm for increasing fault-tolerance by dynamically adding redundant links to P2P systems with unstructured topology. The algorithm requires only local interactions, is executed asynchronously by each peer and guarantees that the disappearance of any single peer does not affect the overall performance and routing capabilities of the system.","PeriodicalId":169669,"journal":{"name":"The 2005 Symposium on Applications and the Internet","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121837175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Memory management of density-based spam detector","authors":"Kenichi Yoshida, Fuminori Adachi, Takashi Washio, H. Motoda, Teruaki Homma, Akihiro Nakashima, Hiromitsu Fujikawa, Katsuyuki Yamazaki","doi":"10.1109/SAINT.2005.38","DOIUrl":"https://doi.org/10.1109/SAINT.2005.38","url":null,"abstract":"The volume of mass unsolicited electronic mail, often known as spam, has recently increased enormously and has become a serious threat to not only the Internet but also to society. A new spam detection method which uses document space density information has been proposed. Although the proposed method requires extensive e-mail traffic to acquire the necessary information, it can achieve perfect detection (i.e., both recall and precision is 100%) under practical conditions. This paper describes the memory management mechanism of this new spam detection method. Although the \"least recently used\" strategy is the standard memory management strategy, we show that 1) the use of the direct-mapped cache can be used as a substitute for the LRU cache, and 2) \"retaining multiply accessed entries\" strategy can further improve the memory management performance and improve the theoretical recall rate for spam detection.","PeriodicalId":169669,"journal":{"name":"The 2005 Symposium on Applications and the Internet","volume":"22 6S 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115944309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ELA: a fully distributed VPN system over peer-to-peer network","authors":"Sadanori Aoyagi, M. Takizawa, M. Saito, H. Aida, H. Tokuda","doi":"10.1109/SAINT.2005.25","DOIUrl":"https://doi.org/10.1109/SAINT.2005.25","url":null,"abstract":"In this paper, we propose a fully distributed VPN system over peer-to-peer(P2P) network called Everywhere Local Area network (ELA). ELA enables to establish private overlay network for VPN among nodes of a group without any servers. As opposed to the existing VPN systems, nodes of a group can build VPN without setting up a VPN server, and there is no problem of a single-source bottleneck and a single point of failure. Though it is known that VPN system using TCP as tunneling protocol does not work well, there are some nodes which can use only TCP because of NAT or Firewall. Therefore each node uses both UDP and TCP appropriately depending on the situation in ELA. The topology of ELA-VPN mitigates performance deterioration. We implemented a prototype of ELA on Linux, and show result of experimental latency between two nodes.","PeriodicalId":169669,"journal":{"name":"The 2005 Symposium on Applications and the Internet","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128107829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A protection method against massive error mails caused by sender spoofed spam mails","authors":"N. Yamai, K. Okayama, T. Miyashita, Shin Maruyama, Motonori Nakamura","doi":"10.1109/SAINT.2005.7","DOIUrl":"https://doi.org/10.1109/SAINT.2005.7","url":null,"abstract":"Wide spread of spam mails is one of the most serious problems on e-mail environment. Particularly, spam mails with a spoofed sender address should not be left alone, since they make the mail server corresponding to the spoofed address be overloaded with massive error mails generated by the spam mails, and since they waste a lot of network and computer resources. In this paper, we propose a protection method of the mail server against such massive error mails. This method introduces an additional mail server that mainly deals with the error mails in order to reduce the load of the original mail server. This method also provide a function that refuses error mails to these two mail servers to save the network and computer resources.","PeriodicalId":169669,"journal":{"name":"The 2005 Symposium on Applications and the Internet","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134431661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient agent control method for time constraint applications","authors":"Hideo Kamada, K. Kinoshita, N. Yamai, H. Tode, K. Murakami","doi":"10.1109/SAINT.2005.13","DOIUrl":"https://doi.org/10.1109/SAINT.2005.13","url":null,"abstract":"As one of the technologies for retrieval of desired contents over large scale networks, multiagent systems receive much attention. Since there are too many contents on the network to search all the contents exhaustively, some applications on multiagent systems have time constraint, that is, they have to obtain a result by the given deadline. In order to find a better result for such applications, it is important for the agents to complete their tasks on as many nodes as possible by the deadline. However, most existing agent systems using processor sharing as scheduling discipline do not take time constraint into account Therefore, agents are likely to miss their deadlines on many nodes. We propose an efficient agent dispatching method of time constraint applications. This method decides creation and migration of a clone agent according to the estimated value of the number of the agents that would have completed their tasks by the deadline. The results of the performance evaluation show the proposed method improves the number of agents that have completed their task.","PeriodicalId":169669,"journal":{"name":"The 2005 Symposium on Applications and the Internet","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116146225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spam filtering using spam mail communities","authors":"Deepak P, S. Parameswaran","doi":"10.1109/SAINT.2005.61","DOIUrl":"https://doi.org/10.1109/SAINT.2005.61","url":null,"abstract":"We might have heard quite a few people say on seeing some new mails in their inboxes, \"Oh! that spam again\". People who observe the kind of spam messages that they receive would perhaps be able to classify similar spam mails into communities. Such properties of spam messages can be used to filter spam. This paper describes an approach towards spam filtering that seeks to exploit the nature of spam messages that allow them to be classified into different communities. The working of a possible implementation of the approach is described in detail. The new approach does not base itself on any prejudices about spam and can be used to block nonspam nuisance mails also. It can also support users who would want selective blocking of spam mails based on their interests. The approach inherently is user-centric, flexible and user-friendly. The results of some tests done to check for the feasibility of such an approach have been evaluated as well.","PeriodicalId":169669,"journal":{"name":"The 2005 Symposium on Applications and the Internet","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114676872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and evaluation of a location-based virtual city system for mobile phones","authors":"H. Tarumi, Seiko Tokuda, T. Yasui, K. Matsubara, F. Kusunoki","doi":"10.1109/SAINT.2005.20","DOIUrl":"https://doi.org/10.1109/SAINT.2005.20","url":null,"abstract":"We are developing a virtual city system with a model that consists of virtual architectural objects and virtual creatures, geographically overlaid onto the real world. People who have mobile terminals with location sensors like GPS can visit the virtual city when walking about in a real city. The most important aspect of our research is that we have adopted current market mobile phones. In this paper we describe a prototype of virtual city system and its evaluation. The result of evaluation suggests that subjects were very much interested in the virtual city system. Technical problems have been revealed by the evaluation but most of them will be solved or minimized if we use the next generation of mobile phones.","PeriodicalId":169669,"journal":{"name":"The 2005 Symposium on Applications and the Internet","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133189119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Annotating TV drama based on viewer dialogue - analysis of viewers' attention generated on an Internet bulletin board","authors":"Hiroshi Uehara, K. Yoshida","doi":"10.1109/SAINT.2005.14","DOIUrl":"https://doi.org/10.1109/SAINT.2005.14","url":null,"abstract":"The rapidly expanding capacity of local storage such as hard disk recorders is expected to create the need for a mechanism, which enables selective TV watching based on individual viewer's preference. We propose a method for creating the \"Attention Graph\", which depicts the amount of viewers' attention generated by TV drama. The Attention Graph, generated from the dialogues described in Internet communities concerning TV drama, is structured data mapped along the time line coincident with the progress of the drama's scenario. Thus, the Attention Graph assists in specifying noteworthy zones from complete TV programs and provides viewers with hints to watch selective scenes from their favorite TV drama. In general, dialogue from Internet communities is often expressed in poor grammatical manner, therefore natural language processing is difficult to apply. In an attempt to create the Attention Graph, we propose a statistical analysis of symbolic words to overcome this issue. The experimental results show that the Attention Graph successfully depicts the viewers' attention in TV drama, and indexes the zones of their attention.","PeriodicalId":169669,"journal":{"name":"The 2005 Symposium on Applications and the Internet","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132008501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using documents assessments to build communities of interests","authors":"P. Francq, A. Delchambre","doi":"10.1109/SAINT.2005.67","DOIUrl":"https://doi.org/10.1109/SAINT.2005.67","url":null,"abstract":"We present an approach, which uses users' assessments on documents to group them into communities of interests. An open source Internet/intranet prototype was developed for GNU/Linux in the framework of the GALILEI project. The users of the system are described in terms of profiles, with each profile corresponding to one area of interest. While browsing on a collection of documents, users' profiles are computed on the basis of both the content of the consulted documents and the assessments from the profiles. These profiles are then grouped into Internet communities, which allow the individuals to collaborate. Currently, documents of interest are shared between members of a same community. This approach was validated on several document collections according to a well-defined methodology and provides promising results.","PeriodicalId":169669,"journal":{"name":"The 2005 Symposium on Applications and the Internet","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132752802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}