{"title":"Parallel Multi-Layer neural network architecture with improved efficiency","authors":"David K. Hunter, B. Wilamowski","doi":"10.1109/HSI.2011.5937382","DOIUrl":"https://doi.org/10.1109/HSI.2011.5937382","url":null,"abstract":"Neural network research over the past 3 decades has resulted in improved designs and more efficient training methods. In today's high-tech world, many complex non-linear systems described by dozens of differential equations are being replaced with powerful neural networks, making neural networks increasingly more important. However, all of the current designs, including the Multi-Layer Perceptron, the Bridged Multi-Layer Perceptron, and the Fully-Connected Cascade networks have a very large number of weights and connections, making them difficult to implement in hardware. The Parallel Multi-Layer Perceptron architecture introduced in this article yields the first neural network architecture that is practical to implement in hardware. This new architecture significantly reduces the number of connections and weights and eliminates the need for cross-layer connections. Results for this new architecture were tested on parity-N problems for values of N up to 17. Theoretical results show that this architecture yields valid results for all positive integer values of N.","PeriodicalId":384027,"journal":{"name":"2011 4th International Conference on Human System Interactions, HSI 2011","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132815372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Top-down visual attention control based on a particle filter for human-interactive robots","authors":"Motoyuki Ozeki, Yasuhiro Kashiwagi, Mariko Inoue, N. Oka","doi":"10.1109/HSI.2011.5937365","DOIUrl":"https://doi.org/10.1109/HSI.2011.5937365","url":null,"abstract":"A novel visual attention model based on a particle filter is that also has a filter-type feature, (2) a compact model independent of the high-level processes, and (3) a unitary model that naturally integrates top-down modulation and bottom-up processes. These features allow the model to be applied simply to robots and to be easily understood by the developers. In this paper, we first briefly discuss human visual attention, computational models for bottom-up attention, and attentional metaphors. We then describe the proposed model and its top-down control interface. Finally, three experiments demonstrate the potential of the proposed model as an attentional metaphor and top-down attention control interface.","PeriodicalId":384027,"journal":{"name":"2011 4th International Conference on Human System Interactions, HSI 2011","volume":"256 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133336456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The cone of gaze","authors":"H. Hecht, Romy Weiland, Evgeniya Boyarskaya","doi":"10.1109/HSI.2011.5937396","DOIUrl":"https://doi.org/10.1109/HSI.2011.5937396","url":null,"abstract":"Gaze direction is an important cue that regulates social interactions. Although humans are very accurate in determining gaze directions in general, they have a surprisingly liberal criterion for the presence of mutual gaze. We first established a psychophysical task to measure the cone of gaze, which required observers to adjust the eyes of a virtual head to the margins of the area of mutual gaze. Then we examined differences between 2D, 3D, and genuine real life gaze. Finally, the tolerance for image distortions when the virtual head is not viewed from the proper vantage point was investigated. Gaze direction was remarkably robust toward loss in detail and distortion. Important lessons for the design of eye-contact in virtual environments can be derived from these findings.","PeriodicalId":384027,"journal":{"name":"2011 4th International Conference on Human System Interactions, HSI 2011","volume":"29 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124139441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Walking guide interface mechanism and navigation system for the visually impaired","authors":"Atsushi Imadu, T. Kawai, Y. Takada, Tomoki Tajiri","doi":"10.1109/HSI.2011.5937339","DOIUrl":"https://doi.org/10.1109/HSI.2011.5937339","url":null,"abstract":"We propose a new mechanism, with an intuitive interface, for a walking guidance system for the visually impaired. The user pushes the mechanism using a long handle like walking stick, and the mechanism steers to navigate along a given route using environmental sensors, communicating the steering angle to the user by twisting the handle. Using the interface, users are aware of their direction to go and can walk without hesitation. A navigation method incorporating simple topological map information is proposed for the mechanism, and a prototype system that follows the walls surrounding a route is developed. Experiments illustrate the usability and performance of the proposed interface and system.","PeriodicalId":384027,"journal":{"name":"2011 4th International Conference on Human System Interactions, HSI 2011","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125915823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Utilization of machine learning methods for assembling, training and understanding autonomous robots","authors":"P. Hartono","doi":"10.1109/HSI.2011.5937399","DOIUrl":"https://doi.org/10.1109/HSI.2011.5937399","url":null,"abstract":"For decades human society has been supported by the proliferation of complex artifacts such as electronic appliances, personal vehicles and mass transportation systems, electrical and communications grids, and in the past few decades, Internet. In the very near future, robots will play increasingly important roles in our daily life. The increase in complexity of the tasks and sometimes physical forms or morphologies of the artifacts consequently requires complex assembling and controlling procedures of them, which soon will be unmanageable by the traditional manufacturing process. The aim of this paper is to give a brief review on the potentials of the non-traditional assembling of complex artifacts, which in this study is symbolized by the creation of autonomous robots. Methods in self-assembling modular robots, real time learning of autonomous robots and a method for giving the comprehensive understanding, albeit intuitively, to human will be explained through some physical experiments.","PeriodicalId":384027,"journal":{"name":"2011 4th International Conference on Human System Interactions, HSI 2011","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130921044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of a potential system for upper limb rehabilitation training based on virtual reality","authors":"Zhibin Song, Shuxiang Guo, Mohd Yazid","doi":"10.1109/HSI.2011.5937391","DOIUrl":"https://doi.org/10.1109/HSI.2011.5937391","url":null,"abstract":"This paper proposed a novel rehabilitation system for rehabilitation training of the upper limbs for patients whose brain injured such as stroke. We also proposed some strategies of rehabilitation using this system based on Virtual Reality. In this paper, Virtual Reality (VR) was adapted in this system in which the water simulation provided fundamental environment for rehabilitation training and we used a haptic device (Phantom Omni) and an inertial sensor (MTx) to implement tasks proposed. In this paper, three types of tasks were designed so that comprehensive analyses of performance would be obtained. As preliminary phase, five healthy subjects were invited to participate in experiments. The experimental results showed that the virtual force model was effective for the upper limbs rehabilitation and the subjects showed improvement during the experiment. Though there is some limitation due to the haptic device, this system is promising, because strategies proposed are potential to be used in real rehabilitation.","PeriodicalId":384027,"journal":{"name":"2011 4th International Conference on Human System Interactions, HSI 2011","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122787909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kinematic and physiological cues for human system interaction","authors":"Kenji Suzuki, H. Kadone, A. Gruebler","doi":"10.1109/HSI.2011.5937402","DOIUrl":"https://doi.org/10.1109/HSI.2011.5937402","url":null,"abstract":"In this paper, we introduce several wearable devices, which are able to measure not only posture and kinematics of human being but also physiological signals. Several different devices such as a head orientation sensing, reading facial expression, and breath sensing are presented with the potential applications to assistive technology, rehabilitation and entertainment. A cognitive neuroscience approach to realize the augmented human technology in order to enhance, strengthen, and support human's cognitive capabilities is also discussed.","PeriodicalId":384027,"journal":{"name":"2011 4th International Conference on Human System Interactions, HSI 2011","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127061675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of visual inspection robot based on motion and mind behaviors of expert inspector","authors":"T. Funahashi, T. Fujiwara, H. Koshimizu","doi":"10.1109/HSI.2011.5937361","DOIUrl":"https://doi.org/10.1109/HSI.2011.5937361","url":null,"abstract":"In this research, we developed a visual inspection robot with human consciousness of attention based on body and sight motions. This robot could inspect cast metal object that is irregular both in illumination reflection and surface structure. We capture in advance images of body and sight motions of inspector performance by using 3D motion capture system and gaze analysis system, and analyzed the images for extracting expert skill through the consciousness of attention and concentration of inspection. Based on the consciousness model of inspection, we developed an inspection robot system with a smart mechanism of camera work and ROI control and finally dynamic image processing mechanism for rough detection and precise examination of defects.","PeriodicalId":384027,"journal":{"name":"2011 4th International Conference on Human System Interactions, HSI 2011","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125646196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An approach for extraction of human walking path in Intelligent Space","authors":"Hiromu Kobayashi, H. Hashimoto, M. Niitsuma","doi":"10.1109/HSI.2011.5937379","DOIUrl":"https://doi.org/10.1109/HSI.2011.5937379","url":null,"abstract":"This paper presents an approach to extract human walking paths independently from the orientation of the paths in a global coordinate system. Previously, observing human walking, connectivity between the spaces (areas) has been obtained. In this paper, we regard human walking paths as a feature to represent patterns of activities. Observing and describing human activities can be considered as useful information for intelligent environments to enable the environments to provide suitable support to the users corresponding to their actual situations. In this paper, we present an approach to extract human walking paths independently from the orientation of the paths in a global coordinate system. More specifically, we propose a similarity measurement based on AMSS (Angular Metrics for Shape Similarity), then classify human walking paths using a hierarchical clustering method. Experimental results show that the proposed approach achieves rotation invariant extraction of human walking paths.","PeriodicalId":384027,"journal":{"name":"2011 4th International Conference on Human System Interactions, HSI 2011","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130507206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A talking robot and its human-like expressive speech production","authors":"M. Kitani, Tatsuya Hara, H. Hanada, H. Sawada","doi":"10.1109/HSI.2011.5937367","DOIUrl":"https://doi.org/10.1109/HSI.2011.5937367","url":null,"abstract":"A talking robot which adaptively learns the vocalization skill by an auditory feedback learning is being developed. The talking robot is constructed by the mechanical vocal systems, which respectively correspond to human organs. In vocalization, the vibration of vocal cords generates a source sound, and then the sound wave is led to a vocal tract, which works as a resonance filter to determine the spectrum envelope. The talking robot autonomously learns the vocalization by applying auditory feedback. In this study, a human-like expressive speech production by the talking robot is introduced. A human generates speech by controlling their own vocal organs for changing the speech expressions such as the volume and the intonations. To realize the human-like expressive speech production, a control method for the mechanical organs has to be established. In the first part of the paper, the construction of the talking robot and the acquisition of the vocalization will be described, and then the control method for producing human-like speech with various expressions will be introduced.","PeriodicalId":384027,"journal":{"name":"2011 4th International Conference on Human System Interactions, HSI 2011","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129060610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}