Proceedings of 1994 IEEE International Conference on MFI '94. Multisensor Fusion and Integration for Intelligent Systems: Latest Publications

Adaptive predetection and off-board track and decision level fusion for enhanced surveillance
Authors: I. Kadar
DOI: 10.1109/MFI.1994.398424
Published: 1994-10-02
Abstract: A perceptual reasoning system that adaptively extracts, associates, and fuses information from multiple sources at various levels of abstraction is considered the building block for the next generation of surveillance systems. A system architecture is presented which makes use of both centralized and distributed predetection fusion combined with intelligent monitor and control, coupling on-platform and off-board track- and decision-level fusion results. The goal of this system is to create a "gestalt fused sensor system" whose information product is greater than the sum of the information products from the individual sensors, i.e., performance superior to either an individual sensor or a sub-group of combined sensors. The application of this architectural concept to the law enforcement arena, utilizing multiple spatially and temporally diverse surveillance platforms and/or information sources, illustrates the benefits of the adaptive perceptual reasoning system concept.
Citations: 3
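As a rough, hedged illustration of decision-level fusion (not the paper's adaptive architecture), the sketch below combines binary reports from on-platform and off-board sources with a Chair-Varshney style log-likelihood-ratio rule; the detection and false-alarm probabilities are assumed, illustrative values.

```python
import numpy as np

def fuse_decisions(decisions, pd, pfa, log_prior_ratio=0.0):
    """Decision-level fusion of independent binary reports.

    decisions -- 0/1 reports from on-platform and off-board sources
    pd, pfa   -- each source's assumed detection / false-alarm probability
    Returns 1 (declare target) if the summed log-likelihood ratio is positive.
    """
    decisions = np.asarray(decisions, dtype=float)
    pd, pfa = np.asarray(pd, dtype=float), np.asarray(pfa, dtype=float)
    # a "1" report contributes log(Pd/Pfa); a "0" report log((1-Pd)/(1-Pfa))
    llr = np.where(decisions == 1.0,
                   np.log(pd / pfa),
                   np.log((1.0 - pd) / (1.0 - pfa)))
    return int(llr.sum() + log_prior_ratio > 0.0)

# three sources of differing reliability report 1, 1, 0
print(fuse_decisions([1, 1, 0], pd=[0.9, 0.8, 0.7], pfa=[0.05, 0.10, 0.20]))
```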
Object recognition using a distributed cooperative vision system
Authors: T. Hamada, K. Kamejima
DOI: 10.1109/MFI.1994.398387
Published: 1994-10-02
Abstract: Describes a distributed cooperative method whereby a complicated recognition process is divided into function-based, independent recognition algorithms. Each algorithm can determine one unknown factor of the object under the condition that the remaining unknown factors are fixed. By introducing a cooperation mechanism, these independent algorithms can be used concurrently to determine all of the unknown factors. The cooperation mechanism is obtained by designing each algorithm as a recursive process that tries to reduce the differences between predictive features obtained from a model and the features actually sensed. This approach results in a vision system that is easily adapted to many applications. A prototype vision system that determines the shape, location, and orientation of objects has been developed, and the effectiveness of the proposed method is verified experimentally.
Citations: 1
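A minimal sketch of the cooperation idea, under assumed unknowns (scale and 2-D translation rather than the paper's factors): each "agent" recursively refines one factor while the others stay fixed, repeatedly reducing the difference between features predicted from the model and the sensed features.

```python
import numpy as np

def predict(model_pts, params):
    """Predicted image features for the model under (scale, tx, ty)."""
    return params["s"] * model_pts + np.array([params["tx"], params["ty"]])

def feature_error(model_pts, sensed_pts, params):
    return np.sum((predict(model_pts, params) - sensed_pts) ** 2)

def refine_factor(name, model_pts, sensed_pts, params, step=0.5, iters=20):
    """One 'agent': refines a single unknown factor, the others held fixed."""
    for _ in range(iters):
        best = params[name]
        for cand in (best - step, best + step):
            if feature_error(model_pts, sensed_pts, dict(params, **{name: cand})) < \
               feature_error(model_pts, sensed_pts, dict(params, **{name: best})):
                best = cand
        params[name] = best
        step *= 0.7
    return params

# synthetic data: true scale 2.0, translation (1.0, -0.5)
model = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
sensed = 2.0 * model + np.array([1.0, -0.5])

params = {"s": 1.0, "tx": 0.0, "ty": 0.0}
for _ in range(10):                       # cooperation: agents take turns
    for factor in ("s", "tx", "ty"):
        params = refine_factor(factor, model, sensed, params)
print({k: round(v, 3) for k, v in params.items()})
```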
An algebraic framework for using geometric constraints of vision and range sensor data
Authors: K. Deguchi
DOI: 10.1109/MFI.1994.398436
Published: 1994-10-02
Abstract: Proposes a new framework for fusing multiple geometric sensor outputs to reconstruct 3-dimensional target shapes. The proposed framework is an application of Wu's mechanical theorem-proving method from algebraic geometry. The author first lists three groups of equations on the constraints, then classifies them into two sets: a set of hypotheses and a conjecture. Wu's method is applied to prove that the conjecture follows from the hypotheses, yielding pseudo-divided remainders of the conjecture which represent new relations between geometric measures, such as angles or lengths, in 3-D space and their projected data on the sensors. As an example, a typical case is considered where an image sensor and a range sensor are used together to reconstruct and recognize 3-D object shapes. With this method the author obtained new geometrical relations for seven cases of geometrical models.
Citations: 3
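A toy illustration of the Wu-style elimination step the abstract refers to, using sympy's pseudo-remainder on made-up polynomials rather than the paper's sensor models: the conjecture is successively pseudo-divided by a triangular set of hypotheses, and a final remainder of zero shows that it follows from them.

```python
from sympy import symbols, prem, expand

x, y, u, v = symbols("x y u v")

# hypotheses, triangular in u then v:  u = x + y,  v = x*y
h1 = u - x - y
h2 = v - x * y
# conjecture to check:  u**2 - 2*v == x**2 + y**2
conjecture = u**2 - 2 * v - x**2 - y**2

r1 = expand(prem(conjecture, h1, u))   # pseudo-divide to eliminate u
r2 = expand(prem(r1, h2, v))           # then eliminate v
print(r1)   # intermediate remainder, 2*x*y - 2*v
print(r2)   # 0 -> the conjecture is a consequence of the hypotheses
```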
Auto target detection for robot hand manipulation using an anthropomorphic active vision system
Authors: T. Yagi, N. Asano, S. Makita, Y. Uchikawa
DOI: 10.1109/MFI.1994.398396
Published: 1994-10-02
Abstract: In this paper, we propose a very simple method to detect an object in a camera-taken 2-D image when many objects are present in the scene. The advantage of this method is its simplicity of use. In this method, the camera-taken image is blurred with high spatial resolution near the optical axis of the camera and lower resolution in the periphery. Such inhomogeneous image processing, which resembles a primate's vision, is very effective for determining the position of each object in a scene one after another. This method is implemented in our proposed active vision system, which sends the position data of each target object to a robot hand controller in order to assist hand manipulation. In experiments, we show that the position of each object is detected easily and precisely enough for robot hand manipulation.
Citations: 2
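A hedged sketch of the space-variant ("foveated") blur the abstract describes, with assumed parameters: the output keeps full resolution near a chosen optical-axis pixel and blends toward a heavily blurred copy in the periphery.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, cx, cy, max_sigma=8.0):
    """Blend a sharp and a blurred copy with a radial weight: sharp at the
    'optical axis' (cx, cy), increasingly blurred toward the periphery."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    weight = np.hypot(xx - cx, yy - cy)
    weight /= weight.max()                            # 0 at the axis, 1 at the far edge
    blurred = gaussian_filter(image.astype(float), sigma=max_sigma)
    return (1.0 - weight) * image + weight * blurred

# toy scene: one bright square near the centre, one near a corner
scene = np.zeros((128, 128))
scene[60:70, 60:70] = 1.0
scene[5:15, 5:15] = 1.0
fov = foveate(scene, cx=64, cy=64)
print(fov[65, 65], fov[10, 10])   # central object stays crisp, peripheral one is smeared
```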
An autonomous fuzzy logic architecture for multisensor data fusion
Authors: R. E. Gibson, D. Hall, J. Stover
DOI: 10.1109/MFI.1994.398450
Published: 1994-10-02
Abstract: Fuzzy logic techniques have become popular for addressing various processes in multisensor data fusion. Examples include: (1) fuzzy membership functions for data association, (2) evaluation of alternative hypotheses in multiple-hypothesis trackers, (3) fuzzy-logic-based pattern recognition (e.g., for feature-based object identification), and (4) fuzzy inference schemes for sensor resource allocation. These approaches have been individually successful but are limited to a single subprocess within a data fusion system. At the Pennsylvania State University Applied Research Laboratory, a general-purpose fuzzy logic architecture has been developed that provides control of sensing resources, fusion of data for tracking, automatic object recognition, control of system resources and elements, and automated situation assessment. This general architecture has been applied to implement an autonomous vehicle capable of self-direction, obstacle avoidance, and mission completion. The fuzzy logic architecture provides interpretation and fusion of multisensor data (i.e., perception) as well as logic for process control (action). This paper provides an overview of the fuzzy logic architecture and a discussion of its application to data fusion in the context of the Department of Defense (DoD) Joint Directors of Laboratories (JDL) Data Fusion Process Model. A new, robust fuzzy calculus is introduced, and an example is provided by modeling a component of the perception processing of a bat.
Citations: 23
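As a small, hedged example of item (1) above, fuzzy membership functions for data association (the units and breakpoints are assumed, not taken from the paper): a measurement's degree of association with a track is the fuzzy AND of "distance is small" and "bearing error is small".

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b on support [a, c]."""
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (b - a), 0.0, 1.0)
    right = np.clip((c - x) / (c - b), 0.0, 1.0)
    return np.minimum(left, right)

def association_score(distance, bearing_err):
    """Fuzzy degree that a measurement belongs to a track: min (AND) of
    'distance is small' and 'bearing error is small'."""
    close = triangular(distance, -1.0, 0.0, 50.0)        # metres
    aligned = triangular(bearing_err, -1.0, 0.0, 10.0)   # degrees
    return float(np.minimum(close, aligned))

# a nearby, well-aligned measurement scores much higher than a marginal one
print(association_score(12.0, 3.0), association_score(45.0, 9.0))
```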
Simple local partition rules in multi-bit decision fusion
Authors: M. Kam, Xiaoxun Zhu
DOI: 10.1109/MFI.1994.398456
Published: 1994-10-02
Abstract: A parallel decision fusion system is studied in which local detectors (LDs) collect information about a binary hypothesis and transmit multi-bit intermediate decisions to a data fusion center (DFC). The DFC compresses the local decisions into a final binary decision; the objective function is the Bayesian risk. Equations for the optimal decision rules for the LDs and the DFC were derived by Lee and Chao (1989), but the computational complexity of solving them is formidable. To address this difficulty, we propose several suboptimal LD-design schemes. For each one we design a DFC that is optimal conditioned on the fixed LD rules. We calculate the exact performance of each scheme, thus providing a means for selecting the most appropriate one under given observation conditions. We demonstrate performance for two important binary decision tasks: discrimination between two Gaussian hypotheses of equal variances and different means, and discrimination between two Gaussian hypotheses of equal means and different variances.
Citations: 0
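A minimal sketch (with an arbitrary fixed partition, not one of the paper's proposed schemes) of the multi-bit setup for the equal-variance, different-means Gaussian case: each LD quantises its observation into a 2-bit cell, and the DFC sums the cells' log-likelihood ratios, which is the optimal fusion rule conditioned on the fixed partitions when observations are conditionally independent.

```python
import numpy as np
from scipy.stats import norm

# two Gaussian hypotheses with equal variance and different means
MU0, MU1, SIGMA = 0.0, 1.0, 1.0
THRESHOLDS = np.array([-0.5, 0.5, 1.5])   # fixed local partition -> 4 cells (2 bits)

def local_decision(x):
    """LD: quantise the observation into one of 4 cells (a 2-bit message)."""
    return int(np.searchsorted(THRESHOLDS, x))

def cell_llr(cell):
    """Log-likelihood ratio of a quantisation cell under H1 vs H0."""
    edges = np.concatenate(([-np.inf], THRESHOLDS, [np.inf]))
    p1 = norm.cdf(edges[cell + 1], MU1, SIGMA) - norm.cdf(edges[cell], MU1, SIGMA)
    p0 = norm.cdf(edges[cell + 1], MU0, SIGMA) - norm.cdf(edges[cell], MU0, SIGMA)
    return np.log(p1 / p0)

def fusion_center(cells, log_prior_ratio=0.0):
    """DFC: binary decision that is optimal given the fixed LD partitions,
    assuming conditionally independent observations."""
    return int(sum(cell_llr(c) for c in cells) + log_prior_ratio > 0.0)

rng = np.random.default_rng(0)
obs = rng.normal(MU1, SIGMA, size=5)          # five LDs observing H1
print(fusion_center([local_decision(x) for x in obs]))
```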
Dynamic configuration of mobile robot perceptual system
Authors: V. Berge-Cherfaoui, B. Vachon
DOI: 10.1109/MFI.1994.398385
Published: 1994-10-02
Abstract: This paper describes SEPIA, a system that integrates reactive behavior in a control architecture allowing the perceptual system to be configured according to the execution context, the robot's objectives, and sensor performance. SEPIA is based on multi-agent system concepts in which the perception, command, and planning functionalities are modeled as agents. After presenting the blackboard-based architecture, the authors define the agent structure and model. This model is necessary for efficient collaboration between agents, using a valuation of each agent's performance according to its ability and the context; selection is based on result quality and response time. Dynamic reconfiguration is made possible by a bidding mechanism associated with selection rules over these criteria. Finally, the implementation of SEPIA is described and the results of experiments on a real mobile robot are discussed.
Citations: 2
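A hedged sketch of the bidding idea (the agent names, quality numbers, and bid formula are invented for illustration): each perception agent bids according to its expected result quality in the current context and its response time, and the best bidder is activated.

```python
from dataclasses import dataclass

@dataclass
class PerceptionAgent:
    name: str
    quality: dict          # expected result quality per context, in [0, 1]
    response_time: float   # seconds

    def bid(self, context):
        """Bid combining ability in this context and responsiveness."""
        return self.quality.get(context, 0.0) / (1.0 + self.response_time)

def configure(agents, context):
    """Dynamic reconfiguration: pick the agent with the best bid."""
    return max(agents, key=lambda a: a.bid(context))

agents = [
    PerceptionAgent("sonar_ring", {"corridor": 0.8, "open_area": 0.4}, 0.05),
    PerceptionAgent("vision",     {"corridor": 0.6, "open_area": 0.9}, 0.40),
]
print(configure(agents, "corridor").name)    # the fast sonar wins in a corridor
print(configure(agents, "open_area").name)   # vision wins in an open area
```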
Visual control of grasping and manipulation tasks
Authors: B. Yoshimi, P. Allen
DOI: 10.1109/MFI.1994.398402
Published: 1994-10-02
Abstract: This paper discusses the problem of visual control of grasping. We have implemented an object tracking system that can be used to provide visual feedback for locating the positions of fingers and objects to be manipulated, as well as their relative relationships. This visual analysis can be used to control open-loop grasping systems in a number of manipulation tasks where finger contact, object movement, and task completion need to be monitored and controlled.
Citations: 36
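A rough sketch, not the authors' controller: given tracked image positions of the fingers and the object, compute their relative offsets and a proportional image-space correction, and flag task completion when every finger is within a pixel tolerance. The gain and tolerance are assumed values.

```python
import numpy as np

def grasp_corrections(finger_px, object_px, gain=0.02, tol=3.0):
    """Proportional image-space corrections driving each tracked finger
    toward the tracked object centroid; also reports task completion."""
    finger_px = np.asarray(finger_px, dtype=float)   # (n_fingers, 2) pixel positions
    object_px = np.asarray(object_px, dtype=float)   # (2,) pixel position
    errors = object_px - finger_px                   # relative relationships
    commands = gain * errors                         # simple P-control step
    done = bool(np.all(np.linalg.norm(errors, axis=1) < tol))
    return commands, done

fingers = [[100.0, 200.0], [140.0, 204.0]]
obj = [120.0, 202.0]
cmds, done = grasp_corrections(fingers, obj)
print(cmds, done)
```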
Fusion tech broad area surveillance exploiting ambient signals via coherent techniques
Authors: R. Ogrodnik
DOI: 10.1109/MFI.1994.398423
Published: 1994-10-02
Abstract: The exploitation of ambient signals as noncooperative, distributed sources of illumination supports the synthesis of a broad area surveillance architecture that draws on fusion-based technology and can create a highly powerful yet silent (passive) remote target monitoring capability. The resulting surveillance capability can selectively optimize its functions either to enhance small-signal target detection (improved noise rejection) or to provide real-time target typing based on the nature of the exploited ambient signal. This paper addresses this surveillance synthesis technology from its basic principles as well as from field-test aspects, to provide the systems community with information on a wholly different approach to target surveillance and target typing, one which inherently has the potential for simultaneous multiple operating modes simply by exploiting the ambient signal space with coherent signal processing.
Citations: 3
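A hedged sketch of the coherent principle on synthetic data (the waveform, delay, and amplitudes are invented): a weak, delayed copy of an ambient reference signal buried in noise is recovered by cross-correlating the surveillance channel against the reference channel; the coherent integration gain pulls the echo above the noise floor.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
reference = rng.normal(size=n)                  # ambient illuminator waveform
delay, amplitude = 300, 0.2                     # echo: 300-sample delay, well below the noise
surveillance = amplitude * np.roll(reference, delay) + rng.normal(size=n)

# coherent processing: cross-correlate the surveillance channel against the
# reference channel and look for a peak at the target's bistatic delay
corr = np.correlate(surveillance, reference, mode="full")
lags = np.arange(-n + 1, n)
peak_lag = lags[np.argmax(np.abs(corr))]
print(peak_lag)   # recovers a lag at (or very near) the injected 300-sample delay
```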
Action-oriented sensor data integration and its application to control of an autonomous vehicle
Authors: M. Niizuma, M. Tomizawa, Y. Kawano, M. Sugiyama, T. Oikawa, S. Misono, S. Degawa
DOI: 10.1109/MFI.1994.398460
Published: 1994-10-02
Abstract: We describe the control system structure of our test autonomous vehicle for indoor movement. The control system consists of several independent "sub-controllers". Each sub-controller has its own map and planner and is tuned for one particular environment. As the vehicle moves around and encounters a different environment, a different sub-controller becomes active and plans the vehicle's actions. The data processing in a sub-controller is simple, as it copes with only one particular kind of environment. This approach enables the machine to cope with various environments by adding further sub-controllers.
Citations: 2
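A minimal sketch of the sub-controller idea (the environments, sensor fields, and planning rules are invented): each sub-controller owns its own simple planner for one kind of environment, and only the one matching the current environment plans the vehicle's action.

```python
class SubController:
    """One sub-controller: its own small map/planner for one environment."""
    def __init__(self, name, plan_fn):
        self.name, self.plan_fn = name, plan_fn

    def plan(self, sensor_data):
        return self.plan_fn(sensor_data)

def corridor_plan(sensors):
    # keep the vehicle centred between the two walls
    return {"steer": 0.5 * (sensors["right_wall"] - sensors["left_wall"])}

def doorway_plan(sensors):
    # head for the detected opening
    return {"steer": sensors["opening_bearing"]}

CONTROLLERS = {
    "corridor": SubController("corridor", corridor_plan),
    "doorway": SubController("doorway", doorway_plan),
}

def control_step(environment, sensor_data):
    """Only the sub-controller matching the current environment becomes
    active and plans the vehicle's action."""
    return CONTROLLERS[environment].plan(sensor_data)

print(control_step("corridor", {"left_wall": 0.8, "right_wall": 1.2}))
print(control_step("doorway", {"opening_bearing": -0.3}))
```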