{"title":"Fail output algorithm of vision sensing","authors":"M. Nishigaki, M. Saka, T. Aoki, H. Yuhara, M. Kawai","doi":"10.1109/IVS.2000.898410","DOIUrl":"https://doi.org/10.1109/IVS.2000.898410","url":null,"abstract":"Vision sensors have recently been adopted as a potentially effective means of external sensing for automobiles. In this field, a number of methods for lane detection, distance recognition and other parameters have been proposed. The most important and most difficult challenge is to establish a failure detection method for each recognition algorithm, in order to prevent detection errors under the actual conditions in which the sensor is used. This paper reports one method for detecting failure in a vision sensor that recognizes obstacles in front of a moving vehicle. The study is based on an innovative algorithm that judges the limit of vehicle recognition by a stereo vision camera under adverse weather conditions, in particular rain.","PeriodicalId":114981,"journal":{"name":"Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No.00TH8511)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121297592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time recognition of corridor under varying lighting conditions for autonomous vehicle","authors":"M. Minami, J. Agbanhan, H. Suzuki, T. Asakura","doi":"10.1109/IVS.2000.898362","DOIUrl":"https://doi.org/10.1109/IVS.2000.898362","url":null,"abstract":"Recognition of its working environment is critical for an autonomous vehicle such as a mobile robot to exhibit intelligent behavior. The recognition system therefore needs a sensor that can acquire environmental information. A CCD camera is generally considered an effective sensor for all kinds of mobile robots. However, CCD cameras are considered difficult to use for visual feedback, which requires acquiring information in real time. This research presents a corridor recognition method that uses the unprocessed gray-scale image, termed here the raw-image, and a genetic algorithm (GA), without any image-information conversion, so that the recognition process runs in real time. The robustness of the method against environmental noise and its effectiveness for real-time recognition have been verified using real corridor images.","PeriodicalId":114981,"journal":{"name":"Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No.00TH8511)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114075580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TV camera-based vehicle motion detection and its chip implementation","authors":"Y. Fang, M. Mizuki, I. Masaki, B. Horn","doi":"10.1109/IVS.2000.898331","DOIUrl":"https://doi.org/10.1109/IVS.2000.898331","url":null,"abstract":"Detecting the motion of vehicles and other objects with TV cameras has wide applications in video image compression (MPEG-2) and intelligent transportation systems. In this paper, we present an edge-based motion detection algorithm that overcomes the heavy computational load of conventional motion detection. The algorithm decreases the computational load and the chip implementation area by factors of 4.4 and 8, respectively. Results from the chip demonstrate the effectiveness of the algorithm.","PeriodicalId":114981,"journal":{"name":"Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No.00TH8511)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128146895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combined cellular/direct method of inter-vehicle communication","authors":"Lachlan B. Michael, Shiro Kikuchi, Tomoko Adachi, Masao Nakagawa","doi":"10.1109/IVS.2000.898400","DOIUrl":"https://doi.org/10.1109/IVS.2000.898400","url":null,"abstract":"The use of inter-vehicle communication (IVC) is considered an integral part of future intelligent transportation systems. However, most research has focused on communication for the purpose of controlling vehicles to improve safety. Alongside this kind of communication, direct high-speed links between specific vehicles should also be considered. For point-to-point communication links between vehicles, communication is difficult when the vehicles are far apart. Use of the cellular network (such as IMT-2000) for inter-vehicle communication is possible, but the speed of the link is limited to 144-384 kbps. Furthermore, the cost of continuous communication is extremely high. A combined cellular/direct link would provide a high-quality link while at the same time reducing transmission costs. In this paper, the hand-off rate of such a system is examined and a hysteresis factor is proposed to reduce the number of hand-offs when switching between the direct link and the cellular system.","PeriodicalId":114981,"journal":{"name":"Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No.00TH8511)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133738204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video-based multi-agent traffic surveillance system","authors":"B. Abreu, L. Botelho, A. Cavallaro, D. Douxchamps, T. Ebrahimi, Pedro Figueiredo, B. Macq, B. Mory, Luís Nunes, J. Orri, M. J. Trigueiros, A. Violante","doi":"10.1109/IVS.2000.898385","DOIUrl":"https://doi.org/10.1109/IVS.2000.898385","url":null,"abstract":"This paper describes Monitorix, a video-based traffic surveillance multi-agent system. Monitorix agents are grouped in four tiers, according to the kind of information processing they perform: the sensors and effectors tier, the objective description tier, the application assistant tier, and the user assistant tier. The video analysis algorithms use an adaptive, data-driven, application independent approach to extract features from the video raw data. In spite of the diversity of agent tasks, adaptive learning algorithms are used in most cases. The integration of video analysis algorithms and agent technology is made via a special middle agent called Proxy. Monitorix is a fully decentralised multi-agent system living in a FIPA Platform and using FIPA Agent Communication Language. Tracking of vehicles across nonoverlapping cameras is performed by the Tracker agent, using a traffic model and learning algorithms that tune the model parameters.","PeriodicalId":114981,"journal":{"name":"Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No.00TH8511)","volume":"281 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115900957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic processing of a car driver eye scanning movements on a simulated highway driving context","authors":"J. Popieul, P. Simon, R. Leroux, J. Angué","doi":"10.1109/IVS.2000.898311","DOIUrl":"https://doi.org/10.1109/IVS.2000.898311","url":null,"abstract":"This paper describes a system that automatically extracts data from recordings of a driver's visual activity in a simulated driving context. The main modules of the system are described together with their underlying principles and problems. On the hardware side, these are the driving simulator, the eye tracker and a purpose-built synchronization device; on the software side, the raw-data filtering process, the ray-tracing module and the decision software. Validation shows that the system reproduces well the manual data extraction performed by an expert operator from video recordings. Its first application, to a long-duration driving experiment, highlights the large amount of extraction time saved thanks to this system.","PeriodicalId":114981,"journal":{"name":"Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No.00TH8511)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122100403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High speed camera system using a CMOS image sensor","authors":"M. Hillebrand, N. Stevanovic, B. Hosticka, J. Santos Conde, A. Teuner, M. Schwarz","doi":"10.1109/IVS.2000.898423","DOIUrl":"https://doi.org/10.1109/IVS.2000.898423","url":null,"abstract":"In this paper a new camera system for high speed imaging is presented, which is capable of recording images with a resolution of 256/spl times/256 pixels and frame rates in excess of 1000 frames per second. It uses an image sensor with on-chip electronic shutter and has been fabricated in a standard 1 /spl mu/m CMOS process. The camera system contains an image memory for sequence recording. The camera delivers very good image quality without any external algorithm for image enhancement and provides a very fast interface between the image acquisition and image processing units. The CMOS imagers also have the ability to acquire images in a very short period. This allows adaptation of the camera to various automotive applications such as occupancy detection, airbag control, pre-crash sensing, collision avoidance, surveillance, and crash test observation. Moreover, the system architecture makes it possible to combine several applications using just a single image sensor unit.","PeriodicalId":114981,"journal":{"name":"Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No.00TH8511)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125935494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Building a 2D environment map from laser range-finder data","authors":"R. Mázl, L. Preucil","doi":"10.1109/IVS.2000.898357","DOIUrl":"https://doi.org/10.1109/IVS.2000.898357","url":null,"abstract":"This contribution describes a robust but simple approach to 2D environment mapping that makes use of a TOF-based laser ranging system. As the mobile system used has no absolute positioning system, the task has been split into two parts. The first deals with preprocessing the odometry and range measurements to determine the position and heading of the platform in the environment. The second maintains (updates) the internal world model by searching for correspondences between the existing map and observed entities, with subsequent application of renewal rules.","PeriodicalId":114981,"journal":{"name":"Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No.00TH8511)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128956496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A proposal of HIR (human-oriented image restructuring) system for ITS","authors":"K. Toyota, T. Fujii, T. Kimoto, M. Tanimoto","doi":"10.1109/IVS.2000.898403","DOIUrl":"https://doi.org/10.1109/IVS.2000.898403","url":null,"abstract":"We discuss the problems of AHS and propose a new HIR system that aims to solve them. HIR is a human-assisting system in which we integrate and restructure images to make them helpful. The main feature of the proposed system is that it is the human, not the mechanical car, that recognizes the situation and controls the vehicle; for that purpose, the system only generates and shows easy-to-understand images to the human. The process first integrates and restructures numerous camera images of the driving environment, such as cars and roads, together with non-image information such as vehicle information and communication systems data. It then selects the most important information according to the situation and shows it in the form of an \"image\". We also present examples of the proposed driver-assisting images in several situations to demonstrate the effectiveness of the integrated and restructured images.","PeriodicalId":114981,"journal":{"name":"Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No.00TH8511)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130542299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognition of 3D compressed images and its traffic monitoring applications","authors":"Shana Johnson, Hassanah Lloyd, Salimah Lloyd, Tremayne Phillips","doi":"10.1109/IVS.2000.898386","DOIUrl":"https://doi.org/10.1109/IVS.2000.898386","url":null,"abstract":"In a digital image network for traffic monitoring, a large number of cameras are connected to control centers through a hierarchical network. Compressed image data and recognition results are transmitted over the network. With conventional approaches, each control center receives compressed image data along with preliminary recognition results from low-level control centers or surveillance cameras. Each center needs to decompress the image data for further recognition processing and, if necessary, sends the compressed image data and recognition results to the upper-level control center. In order to increase the cost-efficiency of the digital image network, we propose eliminating the decompression required at each center by developing a recognition method that works in the compressed domain. Mainstream conventional image compression methods, such as the discrete cosine transform, are based on spatial frequency, which makes it difficult to carry out recognition processes in the compressed domain. In contrast, we compress the image data using attributes that are relevant both for compression and for recognition. Examples of such common attributes are binary edge locations and the color information surrounding the edges. This and other information is retained in the compressed domain to enable recognition without decompression.","PeriodicalId":114981,"journal":{"name":"Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No.00TH8511)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130954761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}