{"title":"Towards Optimizing Time of File Transfer Among Multiple Smartphones using Wi-Fi Direct","authors":"Naoto Mizumura, H. Saito, Junji Takahashi, Y. Tobe","doi":"10.1145/3004010.3004052","DOIUrl":"https://doi.org/10.1145/3004010.3004052","url":null,"abstract":"The proliferation of smartphones has changed how we handle personal information and documents. They used to be stored on personal computers, but now they are increasingly managed on smartphones. Since this trend accelerates information sharing among many people using smartphones, efficient file transfer among smartphones is important. When a file is transferred from one smartphone to multiple other smartphones using Wi-Fi Direct, copying the file directly from the source to every other smartphone does not necessarily give the shortest transfer time, because some links may suffer from weak wireless connections. Against this background, we design FTroid, a system that provides near-optimal file transfer among multiple smartphones. In this paper we describe how routers are determined in FTroid and present its preliminary evaluation.","PeriodicalId":406787,"journal":{"name":"Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127238014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
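The FTroid abstract notes that the direct link from the source to a destination may be weaker than a two-hop path through a relay. The paper's actual router-selection procedure is not reproduced here; as a minimal illustrative sketch (assuming a symmetric link-throughput map as input, which the paper does not specify), one can pick relays by finding, for each destination, the path whose weakest link is strongest — a widest-path variant of Dijkstra:

```python
import heapq

def best_bottleneck_paths(throughput, source):
    """For each node, find the maximum achievable bottleneck throughput
    from `source` -- i.e., the path whose slowest link is fastest.
    `throughput[a][b]` is the link rate between a and b (e.g., Mbit/s).
    Returns {node: bottleneck_rate}; the source maps to infinity."""
    best = {source: float("inf")}
    heap = [(-best[source], source)]        # max-heap via negated rates
    while heap:
        neg, u = heapq.heappop(heap)
        cur = -neg
        if cur < best.get(u, 0):
            continue                        # stale heap entry
        for v, rate in throughput.get(u, {}).items():
            cand = min(cur, rate)           # bottleneck along this path
            if cand > best.get(v, 0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return best
```

With links A-B at 2, A-C at 10, and C-B at 8, the sketch routes B's copy via relay C (bottleneck 8) instead of the weak direct link (2), mirroring the situation the abstract describes.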
{"title":"Milk Carton: Family Tracing and Reunification system using Face Recognition over a DTN with Deployed Computing Nodes","authors":"E. M. Trono, Manato Fujimoto, H. Suwa, Yutaka Arakawa, K. Yasumoto","doi":"10.1145/3004010.3006380","DOIUrl":"https://doi.org/10.1145/3004010.3006380","url":null,"abstract":"During the recovery period after disasters, Family Tracing and Reunification (FTR) is the process by which separated family members are reunited. Traditional FTR methods rely on paper-based registries and notice boards, which cannot automatically match missing person queries with existing records and cannot be efficiently disseminated. Furthermore, lost children or people with disabilities may not be capable of supplying the text-based personal information required by registry forms. Finally, current digital FTR systems require the Internet for data delivery and storage, which may be unavailable during a disaster scenario. To overcome these limitations, we propose the Milk Carton FTR system. Milk Carton uses the Eigenfaces face recognition algorithm to automatically match missing person queries with existing records, without requiring text-based personal information. The system uses Computing Nodes, commodity devices deployed in evacuation centers, to handle record and query creation, data storage, and matching. To handle data delivery without the Internet, Milk Carton leverages response team vehicles as data ferries. The ferries store, carry, and forward records and queries across the system. In this study, we present the design of the Milk Carton system and initial performance evaluations.","PeriodicalId":406787,"journal":{"name":"Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133448882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
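Milk Carton's matcher is the classical Eigenfaces algorithm. A minimal sketch of that core idea (PCA projection plus nearest-neighbor matching; the paper's full pipeline with detection, normalization, and DTN delivery is not shown, and the array shapes here are illustrative assumptions):

```python
import numpy as np

def eigenfaces_match(gallery, query, k=2):
    """Eigenfaces-style matching: project flattened face images onto the
    top-k principal components of the gallery and return the index of
    the nearest gallery face.
    `gallery`: (n_faces, n_pixels) array; `query`: (n_pixels,) vector."""
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # PCA via SVD of the centered gallery; rows of vt are "eigenfaces"
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]
    g_proj = centered @ basis.T             # gallery in eigenface space
    q_proj = (query - mean) @ basis.T       # query in eigenface space
    dists = np.linalg.norm(g_proj - q_proj, axis=1)
    return int(np.argmin(dists))
```

A missing-person query photo would be flattened to a vector and matched against the record gallery this way, with no text-based personal information required.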
{"title":"Investigating recognition accuracy improvement by adding user's acceleration data to location and power consumption-based in-home activity recognition system","authors":"Eri Nakagawa, K. Moriya, H. Suwa, Manato Fujimoto, Yutaka Arakawa, Toshiyuki Hatta, S. Miwa, K. Yasumoto","doi":"10.1145/3004010.3004036","DOIUrl":"https://doi.org/10.1145/3004010.3004036","url":null,"abstract":"Recently, there have been many studies on automatic recognition of activities of daily living (ADL) to provide services such as elderly monitoring, intelligent concierge, and health support. In particular, real-time ADL recognition is essential for an intelligent concierge service, since the service needs to know the user's current or next activity in order to support it. We have been studying real-time ADL recognition using only the user's position data and appliances' power consumption data, which are considered to include less private information than audio and visual data. In that study, we found that some activities that happen in similar conditions, such as reading and operating a smartphone, cannot be classified with only position and power data. In this paper, we propose a new method that adds acceleration data from wearable devices to classify activities happening in similar conditions with higher accuracy. In the proposed method, we use acceleration data from a smart watch and a smartphone, worn on the user's arm and waist, respectively, in addition to the user's position data and appliances' power consumption data, and construct a machine learning model for recognizing 15 types of target activities. We evaluated the recognition accuracy of 3 methods: our previous method (using only position data and power consumption data); the proposed method using the mean value and the standard deviation of the acceleration norm; and the proposed method using the ratio of the activity topics. We collected sensor data in our smart home facility for 12 days and applied the proposed method to these data. As a result, the proposed method could recognize the activities with 57% accuracy, a 12% improvement over our previous method without acceleration data.","PeriodicalId":406787,"journal":{"name":"Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123629265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
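One variant of the proposed method uses the mean and standard deviation of the acceleration norm as added features. A minimal sketch of that feature extraction over one window of (x, y, z) samples (the paper's exact window length and sampling rate are not stated here):

```python
import math

def accel_norm_features(samples):
    """Mean and standard deviation of the acceleration norm over a
    window of (x, y, z) samples -- the two wearable-derived features
    added on top of position and power-consumption data."""
    norms = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(norms) / len(norms)
    var = sum((n - mean) ** 2 for n in norms) / len(norms)
    return mean, math.sqrt(var)
```

These two scalars per window would then be concatenated with the position and power features before training the classifier.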
{"title":"Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services","authors":"T. Hara, H. Shigeno","doi":"10.1145/3004010","DOIUrl":"https://doi.org/10.1145/3004010","url":null,"abstract":"","PeriodicalId":406787,"journal":{"name":"Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services","volume":"8 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121022007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotic SLAM: a Review from Fog Computing and Mobile Edge Computing Perspective","authors":"Swarnava Dey, A. Mukherjee","doi":"10.1145/3004010.3004032","DOIUrl":"https://doi.org/10.1145/3004010.3004032","url":null,"abstract":"Offloading the computationally expensive Simultaneous Localization and Mapping (SLAM) task of mobile robots has attracted significant attention during the last few years. The lack of powerful on-board compute capability in these energy-constrained mobile robots, together with rapid advances in cloud access technologies, laid the foundation for several Cloud Robotics platforms that enable parallel execution of computationally expensive robotic algorithms, especially those involving multiple robots. In this work the Cloud Robotics concept is extended to include the current emphasis on computing at network edge nodes along with the Cloud. The requirements and advantages of using edge nodes for computation offloading over remote cloud or local robot clusters are discussed with reference to the ETSI 'Mobile-Edge Computing' initiative and OpenFog Consortium's 'OpenFog Architecture'. A Particle Filter algorithm for SLAM is modified and implemented for offloading in a multi-tier edge+cloud setup. Additionally, a model is proposed for the offloading decision in such a setup, with experiments and results demonstrating the efficacy of the proposed dynamic offloading scheme over static offloading strategies.","PeriodicalId":406787,"journal":{"name":"Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129627329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
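The offloading decision the abstract mentions can be illustrated with a textbook latency model: run locally, or pay a transfer cost to run faster on an edge node or remote cloud. This is a hedged sketch under assumed parameter names (CPU cycles, data size, clock rates, link bandwidths), not the paper's actual decision model:

```python
def offload_target(cycles, data_bytes, local_hz, edge_hz, cloud_hz,
                   edge_bw, cloud_bw):
    """Pick where to run a SLAM update by comparing estimated latency:
    local compute time vs. remote compute time plus data-transfer time.
    Rates are in Hz, bandwidths in bytes/s; returns the cheapest tier."""
    latency = {
        "local": cycles / local_hz,
        "edge": cycles / edge_hz + data_bytes / edge_bw,
        "cloud": cycles / cloud_hz + data_bytes / cloud_bw,
    }
    return min(latency, key=latency.get)
```

A dynamic scheme would re-evaluate this choice as link bandwidth and load change, which is the advantage the paper reports over static offloading.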
{"title":"A Planning Simulation Tool for Energy Harvesting Applications","authors":"Tatsuya Iizuka, Yoshiaki Narusue, Y. Kawahara, T. Asami","doi":"10.1145/3004010.3004043","DOIUrl":"https://doi.org/10.1145/3004010.3004043","url":null,"abstract":"Energy harvesting (EH) is a key enabling technology for the autonomous operation of smart devices. However, designing a system based on energy harvesting is much more difficult than designing battery-powered systems. The system has to weigh the balance between harvested energy and consumed energy, while taking into account various uncertainties in the actual environment. Achieving this requirement necessitates the precise refinement and integration of hardware modules and software structures; however, it remains difficult for non-experts to design such a system from scratch. In this paper, we propose PLEH, a PLanning simulation tool for Energy Harvesting applications, which is expected to facilitate determining the most relevant hardware modules and the software structure. PLEH estimates the variation in the energy consumption over time based on our abstract EH application model. We demonstrate three usage scenarios that are common to novice developers of EH applications.","PeriodicalId":406787,"journal":{"name":"Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127933360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
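PLEH's central task is estimating how stored energy varies over time given harvested and consumed power. A toy sketch in that spirit (the function name, units, and clamped-battery model are illustrative assumptions, not PLEH's actual model):

```python
def simulate_battery(harvest_mw, consume_mw, capacity_mj, dt_s=1.0):
    """Step through paired harvest/consumption traces (mW) and track
    the stored energy (mJ), clamped between empty and battery capacity.
    Returns the stored-energy trace, one value per time step."""
    stored = 0.0
    trace = []
    for h, c in zip(harvest_mw, consume_mw):
        stored += (h - c) * dt_s            # mW * s = mJ
        stored = max(0.0, min(stored, capacity_mj))
        trace.append(stored)
    return trace
```

A planning tool built on such a model lets a novice developer see, before building hardware, whether a given duty cycle ever drains the store to zero.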
{"title":"In-home Activity and Micro-motion Logging Using Mobile Robot with Kinect","authors":"Keita Nakahara, H. Yamaguchi, T. Higashino","doi":"10.1145/3004010.3004027","DOIUrl":"https://doi.org/10.1145/3004010.3004027","url":null,"abstract":"In this paper, we propose a method for logging the micro-motions of in-home daily activities based on skeleton recognition of elderly people in their daily lives. We believe that in the near future many types of mobile robots will spread into ordinary households, and our idea is to equip such a home robot with a 3D depth camera such as the Microsoft Kinect to enable tracking and observation of elderly people at any location, at any time, and from any angle at home. Homes contain a lot of furniture and other items, which often makes fixed-point observation difficult, but a robot can flexibly move to the best position for motion logging. The collected micro-motion data can be used for early detection of mild cognitive impairment (MCI) or depression, both of which often affect physical ability. Our robot moves in the vicinity of the elderly person and performs joint detection from 3D depth information. Through an experiment in a real home, we could recognize in-home activities and micro-motions with high accuracy.","PeriodicalId":406787,"journal":{"name":"Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129427364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cooperative Video Data Transmission for Sewer Inspection Using Multiple Drifting Cameras","authors":"Yudai Tanaka, S. Ishihara","doi":"10.1145/3004010.3004034","DOIUrl":"https://doi.org/10.1145/3004010.3004034","url":null,"abstract":"Since aged sewer pipes cause severe problems such as sewer leaks and cave-in accidents, inspection of sewer systems is very important. In Japan in particular, the greater part of the sewer system will reach the end of its service life within the coming 10 years, and many local governments face high monetary and labor costs for sewer inspection and maintenance. To solve this problem, inspection schemes that are cost-effective in both money and labor are urgently needed. We have proposed a basic design of a sewer inspection system using multiple wireless drifting camera/sensor nodes. The system does not require any manual operations inside a pipe, so labor cost and operation time can be reduced. In this system, monitoring nodes record video of the inside of a sewer pipe while they drift through it and then send the recorded video data to multiple access points (APs) placed at manholes. The wireless communication distance between the nodes and each AP is limited by the severe communication conditions in pipes, which makes it difficult to send large video files from camera nodes to APs. In this paper, we propose a collaborative video transmission scheme in which each node is assigned to record video of one section of a segment between APs. The scheme reduces the amount of video data that each node has to send to an AP during its short communication period. Thus, it makes it possible to reduce the number of APs needed to collect video and to use the system in pipes with long intervals between manholes. We discuss the detailed design of the scheme and of its prototype.","PeriodicalId":406787,"journal":{"name":"Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services","volume":"1013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116249283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Moving Objects Using Optical Flow with a Moving Stereo Camera","authors":"Tetsuro Toda, Gakuto Masuyama, K. Umeda","doi":"10.1145/3004010.3004016","DOIUrl":"https://doi.org/10.1145/3004010.3004016","url":null,"abstract":"The detection of moving objects has become an important field of research for mobile robots. It is difficult to detect moving objects from a moving camera because moving objects and the background can appear to move. This paper proposes a method for detecting moving objects using a moving stereo camera. First, the camera motion parameters are estimated by using optical flow with a stereo camera. Second, the optical flow occurring in the background is removed. Finally, moving objects are detected individually by labeling the remaining optical flow. The proposed method has been evaluated through experiments using two pedestrians in an indoor environment.","PeriodicalId":406787,"journal":{"name":"Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131065938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"pakapaka Method For BLE Beacon Identification","authors":"S. Ikeda, K. Kaji","doi":"10.1145/3004010.3004049","DOIUrl":"https://doi.org/10.1145/3004010.3004049","url":null,"abstract":"In this study, a beacon in front of the user is repeatedly covered and uncovered by hand, and the corresponding beacon is identified from the resulting changes in the received signal strength indicator (RSSI) observed at the control terminal. Recently, beacons that employ Bluetooth Low Energy (BLE) have spread and are used for a variety of purposes. However, when one wishes to control several beacons, there is no clear means of readily identifying each one. In the proposed method, the changes in the RSSI of a BLE beacon are used to identify it. BLE radio signals are weak, and almost all BLE beacons are small enough to fit in one's hand, so covering a beacon with both hands attenuates its signal. In this study, algorithms for beacon identification were examined and control applications were prepared. The target beacon could be identified among several beacons within 13 s.","PeriodicalId":406787,"journal":{"name":"Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132629489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
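The pakapaka idea — the handled beacon is the one whose RSSI alternates between a strong and an attenuated level — can be sketched as a toy detector. The drop threshold and toggle count below are illustrative assumptions, not the thresholds from the paper:

```python
def detect_pakapaka(rssi_window, drop_db=10, min_toggles=4):
    """Return True if this beacon's RSSI trace (dBm) shows the
    cover/uncover gesture: repeated transitions between a near-baseline
    level and a level attenuated by at least `drop_db`."""
    baseline = max(rssi_window)                       # uncovered level
    covered = [r <= baseline - drop_db for r in rssi_window]
    toggles = sum(1 for a, b in zip(covered, covered[1:]) if a != b)
    return toggles >= min_toggles
```

Run over the RSSI stream of every visible beacon, only the one being covered and uncovered should trigger, which is how the control terminal singles it out.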