{"title":"Automatic Detection of Defective Zebrafish Embryos via Shape Analysis","authors":"Haifeng Zhao, J. Zhou, A. Robles-Kelly, Jianfeng Lu, Jing-yu Yang","doi":"10.1109/DICTA.2009.76","DOIUrl":"https://doi.org/10.1109/DICTA.2009.76","url":null,"abstract":"In this paper, we present a graph-based approach to automatically detect defective zebrafish embryos. Here, the zebrafish is segmented from the background using a texture descriptor and morphological operations. In this way, we can represent the embryo shape as a graph, for which we propose a vectorisation method to recover clique histogram vectors for classification. The clique histogram represents the distribution of one vertex with respect to its adjacent vertices. This treatment permits the use of a codebook approach to represent the graph in terms of a set of codewords that can be used for purposes of support vector machine classification. The experimental results show that the method is not only effective but also robust to occlusions and shape variations.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129957491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
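The clique-histogram vectorisation above lends itself to a compact sketch. The following is a minimal, illustrative reading (not the authors' implementation): each vertex is described by a histogram of its neighbours' labels, the per-vertex histograms are quantised against a codebook, and the graph becomes a codeword-frequency vector suitable for SVM classification. The function names and the tiny codebook are invented for illustration.

```python
import numpy as np

def clique_histograms(adj, labels, n_labels):
    """For each vertex, histogram the labels of its adjacent vertices."""
    hists = np.zeros((adj.shape[0], n_labels))
    for v in range(adj.shape[0]):
        for u in np.nonzero(adj[v])[0]:
            hists[v, labels[u]] += 1
    return hists

def graph_codeword_vector(hists, codebook):
    """Assign each vertex histogram to its nearest codeword and return
    the normalised codeword-frequency vector representing the graph."""
    d = np.linalg.norm(hists[:, None, :] - codebook[None, :, :], axis=2)
    assignments = d.argmin(axis=1)
    vec = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return vec / vec.sum()
```

The final step would feed these fixed-length vectors to any SVM implementation; that step is omitted here.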
{"title":"Measuring Latency for Video Surveillance Systems","authors":"R. Hill, Christopher S. Madden, A. Hengel, Henry Detmold, A. Dick","doi":"10.1109/DICTA.2009.23","DOIUrl":"https://doi.org/10.1109/DICTA.2009.23","url":null,"abstract":"The increased flexibility and other benefits offered by IP network cameras make them a common choice for installation in new and expanded surveillance networks. One commonly quoted limitation of IP cameras is their high latency when compared to their analogue counterparts. This causes some reluctance to install or upgrade to digital cameras, and is slowing the adoption of live, intelligent analysis techniques in video surveillance systems. This paper presents methods for measurement of the latency in systems based upon digital IP or analogue cameras. These methods are camera-agnostic and require no specialised hardware. We use these methods to compare a variety of camera models. The results demonstrate that whilst analogue cameras do have a lower latency, most IP cameras are within acceptable tolerances. The source of the latency within an IP camera is also analysed, with prospects for improvement identified.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125474805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
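The camera-agnostic measurement idea (display a running clock, film it through the camera, and compare the displayed time with the time each frame arrives) can be sketched in simulation. `SimulatedCamera` and its fixed frame delay are stand-ins invented here; a real measurement would read an on-screen timestamp back through the actual camera pipeline.

```python
import collections

class SimulatedCamera:
    """Stand-in for a camera pipeline that delivers each frame after a
    fixed number of capture intervals (i.e. with constant latency)."""
    def __init__(self, delay_frames):
        self.buffer = collections.deque()
        self.delay_frames = delay_frames

    def capture(self, displayed_timestamp):
        self.buffer.append(displayed_timestamp)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()   # frame showing an older timestamp
        return None

def measure_latency(camera, frame_interval, n_frames=100):
    """Display a running clock, read it back through the camera, and
    average the gap between display time and receipt time."""
    samples = []
    for i in range(n_frames):
        now = i * frame_interval          # idealised clock, in seconds
        seen = camera.capture(now)
        if seen is not None:
            samples.append(now - seen)    # glass-to-glass delay for this frame
    return sum(samples) / len(samples)
```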
{"title":"3D Reconstruction of Patient Specific Bone Models from 2D Radiographs for Image Guided Orthopedic Surgery","authors":"P. Gamage, S. Xie, P. Delmas, P. Xu","doi":"10.1109/DICTA.2009.42","DOIUrl":"https://doi.org/10.1109/DICTA.2009.42","url":null,"abstract":"Three dimensional (3D) visualization of anatomy plays an important role in image guided orthopedic surgery and ultimately motivates minimally invasive procedures. However, direct 3D imaging modalities such as Computed Tomography (CT) are restricted to a minority of complex orthopedic procedures. Thus the diagnostics and planning of many interventions still rely on two dimensional (2D) radiographic images, where the surgeon has to mentally visualize the anatomy of interest. The purpose of this paper is to apply and validate a bi-planar 3D reconstruction methodology driven by prominent bony anatomy edges and contours identified on orthogonal radiographs. The results obtained through the proposed methodology are benchmarked against 3D CT scan data to assess the accuracy of reconstruction. The human femur has been used as the anatomy of interest throughout the paper. The novelty of this methodology is that it not only involves the outer contours of the bony anatomy in the reconstruction but also several key interior edges identifiable on radiographic images. 
Hence, this framework is not simply limited to long bones, but is generally applicable to a multitude of other bony anatomies as illustrated in the results section.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114506456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
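For intuition on the bi-planar setup: with ideally orthogonal radiographs, a landmark identified in both views pins down a 3D point directly, since the frontal view supplies (x, z) and the lateral view supplies (y, z). The helper below is an illustrative toy for this geometric observation only, not the paper's full edge- and contour-driven reconstruction methodology.

```python
def biplanar_point(frontal, lateral):
    """Locate one landmark from two ideally orthogonal radiographs:
    the frontal view gives (x, z), the lateral view gives (y, z); the
    shared z coordinate is averaged to absorb annotation noise."""
    x, z_f = frontal
    y, z_l = lateral
    return (x, y, 0.5 * (z_f + z_l))
```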
{"title":"Dense Correspondence Extraction in Difficult Uncalibrated Scenarios","authors":"R. Lakemond, C. Fookes, S. Sridharan","doi":"10.1109/DICTA.2009.19","DOIUrl":"https://doi.org/10.1109/DICTA.2009.19","url":null,"abstract":"The relationship between multiple cameras viewing the same scene may be discovered automatically by finding corresponding points in the two views and then solving for the camera geometry. In camera networks with sparsely placed cameras, low resolution cameras or in scenes with few distinguishable features it may be difficult to find a sufficient number of reliable correspondences from which to compute geometry. This paper presents a method for extracting a larger number of correspondences from an initial set of putative correspondences without any knowledge of the scene or camera geometry. The method may be used to increase the number of correspondences and make geometry computations possible in cases where existing methods have produced insufficient correspondences.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117068660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
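The match-growing strategy described above — propagate from seed correspondences into neighbouring pixels and keep those that score well under a similarity measure — might look roughly like this sketch. The ZNCC patch score and the constant-displacement assumption are simplifications chosen here for illustration, not details taken from the paper.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalised cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def grow_matches(im1, im2, seeds, radius=2, thresh=0.9):
    """Region-growing propagation: from each seed (x1,y1)<->(x2,y2), test
    the same displacement at neighbouring pixels and keep high scorers."""
    h, w = im1.shape
    matches, frontier = set(seeds), list(seeds)
    while frontier:
        x1, y1, x2, y2 = frontier.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            c = (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
            if c in matches:
                continue
            if not all(radius <= v < lim - radius
                       for v, lim in zip(c, (w, h, w, h))):
                continue  # patch would fall outside either image
            p1 = im1[c[1]-radius:c[1]+radius+1, c[0]-radius:c[0]+radius+1]
            p2 = im2[c[3]-radius:c[3]+radius+1, c[2]-radius:c[2]+radius+1]
            if zncc(p1, p2) > thresh:
                matches.add(c)
                frontier.append(c)
    return matches
```

A single reliable seed can thus be expanded into a dense correspondence field wherever the similarity stays high.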
{"title":"Handling Significant Scale Difference for Object Retrieval in a Supermarket","authors":"Yuhang Zhang, Lei Wang, R. Hartley, Hongdong Li","doi":"10.1109/DICTA.2009.79","DOIUrl":"https://doi.org/10.1109/DICTA.2009.79","url":null,"abstract":"We propose an object retrieval application which can retrieve user-specified objects from a large supermarket. The significant and unpredictable scale difference between the query and the database image is the major obstacle encountered, and widely used local invariant features prove deficient in this situation. To improve matters, we first design a new weighting scheme which assesses the repeatability of local features under scale variation. A second method, which handles scale difference by retrieving the query at multiple scales, is also developed. Our methods have been tested on a real image database collected from a local supermarket and outperform existing local-invariant-feature-based image retrieval approaches. A new spatial check method is also briefly discussed.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"611 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123322043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
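The multi-scale retrieval idea (re-query at several scales and keep the best response) can be illustrated with naive template matching. Here `rescale` is a nearest-neighbour resampler and SSD scoring is a simple stand-in for the paper's local-feature machinery.

```python
import numpy as np

def rescale(im, s):
    """Nearest-neighbour rescaling of a 2D image by factor s."""
    h, w = im.shape
    nh, nw = max(1, int(round(h * s))), max(1, int(round(w * s)))
    ys = np.clip((np.arange(nh) / s).astype(int), 0, h - 1)
    xs = np.clip((np.arange(nw) / s).astype(int), 0, w - 1)
    return im[np.ix_(ys, xs)]

def multiscale_match(query, image, scales=(0.5, 1.0, 2.0)):
    """Slide the rescaled query over the image at each scale and return
    (best_ssd, best_scale); lower SSD is better."""
    best = (np.inf, None)
    for s in scales:
        q = rescale(query, s)
        qh, qw = q.shape
        for y in range(image.shape[0] - qh + 1):
            for x in range(image.shape[1] - qw + 1):
                d = np.sum((image[y:y + qh, x:x + qw] - q) ** 2)
                if d < best[0]:
                    best = (d, s)
    return best
```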
{"title":"Modeling Image Context Using Object Centered Grid","authors":"S. N. Parizi, I. Laptev, Alireza Tavakoli Targhi","doi":"10.1109/DICTA.2009.80","DOIUrl":"https://doi.org/10.1109/DICTA.2009.80","url":null,"abstract":"Context plays a valuable role in image understanding, as confirmed by numerous studies which have shown the importance of contextual information in computer vision tasks such as object detection, scene classification and image retrieval. Studies of human perception on the tasks of scene classification and visual search have shown that the human visual system makes extensive use of contextual information as postprocessing in order to index objects. Several recent computer vision approaches use contextual information to improve object recognition performance. They mainly use global information from the whole image by dividing it into several predefined subregions, a so-called fixed grid. In this paper we propose an alternative approach to retrieving contextual information, customizing the location of the grid based on salient objects in the image. We argue that this approach results in more informative contextual features than the fixed-grid strategy. To compare our results with the most relevant and recent work, we use the PASCAL 2007 data set. Our experimental results show an improvement in terms of Mean Average Precision.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122995682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
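A minimal version of the object-centred grid — pooling a simple statistic over cells positioned relative to a salient object's centre rather than over a fixed image partition — might look like this. The cell size, grid extent and mean-intensity pooling are illustrative choices, not the paper's feature pipeline.

```python
import numpy as np

def object_centered_grid(im, cx, cy, cell=8, n=1):
    """Pool mean intensity over a (2n)x(2n) grid of cell-by-cell cells
    whose corner lattice is centred on the object centre (cx, cy)."""
    feats = []
    for gy in range(-n, n):
        for gx in range(-n, n):
            y0, x0 = cy + gy * cell, cx + gx * cell
            patch = im[max(y0, 0):max(y0 + cell, 0),
                       max(x0, 0):max(x0 + cell, 0)]
            feats.append(patch.mean() if patch.size else 0.0)
    return np.array(feats)
```

Because the grid moves with the object, the same scene content lands in the same cells regardless of where the object appears in the frame.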
{"title":"Learning the Optimal Transformation of Salient Features for Image Classification","authors":"J. Zhou, Zhouyu Fu, A. Robles-Kelly","doi":"10.1109/DICTA.2009.28","DOIUrl":"https://doi.org/10.1109/DICTA.2009.28","url":null,"abstract":"In this paper, we address the problem of recovering an optimal salient image descriptor transformation for image classification. Our method involves two steps. Firstly, a binary salient map is generated to specify the regions of interest for subsequent image feature extraction. To this end, an optimal cut-off value is recovered by maximising Fisher’s linear discriminant separability measure so as to separate the salient regions from the background of the scene. Next, image descriptors are extracted in the foreground region in order to be optimally transformed. The descriptor optimisation problem is cast in a regularised risk minimisation setting, in which the aim of computation is to recover the optimal transformation up to a cost function. The cost function is convex and can be solved using quadratic programming. Results on the unsegmented Oxford Flowers database show that the proposed method can achieve classification performance comparable to that of alternatives elsewhere in the literature which employ pre-segmented images.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122937586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
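The first step — choosing a cut-off on a 1D saliency score that maximises Fisher's linear discriminant separability between foreground and background — can be sketched as an exhaustive search over candidate thresholds. The exact criterion form and tie-breaking below are one plausible reading, not code from the paper.

```python
import numpy as np

def fisher_threshold(values):
    """Return the cut-off t maximising (m_lo - m_hi)^2 / (v_lo + v_hi),
    where lo/hi are the value groups below / at-or-above t."""
    v = np.sort(np.asarray(values, dtype=float))
    best_t, best_s = v[0], -np.inf
    for t in v[1:]:
        lo, hi = v[v < t], v[v >= t]
        if lo.size == 0 or hi.size == 0:
            continue
        s = (lo.mean() - hi.mean()) ** 2 / (lo.var() + hi.var() + 1e-12)
        if s > best_s:
            best_s, best_t = s, t
    return best_t
```

Thresholding a saliency map at the returned value yields the binary salient mask used for subsequent descriptor extraction.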
{"title":"Investigations into the Robustness of Audio-Visual Gender Classification to Background Noise and Illumination Effects","authors":"D. Stewart, Hongbin Wang, Jiali Shen, P. Miller","doi":"10.1109/DICTA.2009.34","DOIUrl":"https://doi.org/10.1109/DICTA.2009.34","url":null,"abstract":"In this paper we investigate the robustness of a multimodal gender profiling system which uses face and voice modalities. We use support vector machines combined with principal component analysis features to model faces, and Gaussian mixture models with Mel Frequency Cepstral Coefficients to model voices. Our results show that these approaches perform well individually in ‘clean’ training and testing conditions but that their performance can deteriorate substantially in the presence of audio or image corruptions such as additive acoustic noise and differing image illumination conditions. However, our results also show that a straightforward combination of these modalities can provide a gender classifier which is robust when tested in the presence of corruption in either modality. We also show that in most of the tested conditions the multimodal system can automatically perform on a par with whichever single modality is currently the most reliable.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123969753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
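The "straightforward combination" of modalities could be as simple as a weighted sum of per-modality scores. The log-likelihood-ratio convention and equal weights below are assumptions for illustration, not the paper's exact fusion rule.

```python
def fuse_gender_scores(face_score, voice_score, w_face=0.5, w_voice=0.5):
    """Fuse two per-modality scores (read here as log-likelihood ratios,
    positive favouring 'male'); a corrupted modality whose score shrinks
    towards zero simply lets the cleaner modality dominate."""
    fused = w_face * face_score + w_voice * voice_score
    return ("male" if fused > 0 else "female"), fused
```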
{"title":"Improved Simultaneous Computation of Motion Detection and Optical Flow for Object Tracking","authors":"S. Denman, C. Fookes, S. Sridharan","doi":"10.1109/DICTA.2009.35","DOIUrl":"https://doi.org/10.1109/DICTA.2009.35","url":null,"abstract":"Object tracking systems require accurate segmentation of the objects from the background for effective tracking. Motion segmentation or optical flow can be used to segment incoming images. Whilst optical flow allows multiple moving targets to be separated based on their individual velocities, optical flow techniques are prone to errors caused by changing lighting and occlusions, both common in a surveillance environment. Motion segmentation techniques are more robust to fluctuating lighting and occlusions, but do not provide information on the direction of the motion. In this paper we propose a combined motion segmentation/optical flow algorithm for use in object tracking. The proposed algorithm uses the motion segmentation results to inform the optical flow calculations, ensuring that optical flow is only calculated in regions of motion and improving its performance around the edges of moving objects. Optical flow is calculated at pixel resolution, and tracking of flow vectors is employed to improve performance and detect discontinuities, which can indicate the location of overlaps between objects. The algorithm is evaluated by attempting to extract a moving target within the flow images, given expected horizontal and vertical movement (i.e. the algorithm's intended use for object tracking). 
Results show that the proposed algorithm outperforms other widely used optical flow techniques for this surveillance application.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128392319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
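A toy version of letting motion segmentation gate the flow computation — frame differencing for the mask, exhaustive block matching for the flow, run only at masked pixels — is sketched below; the paper's actual algorithms for both stages are more sophisticated.

```python
import numpy as np

def motion_mask(prev, curr, thresh=0.1):
    """Crude motion segmentation: threshold the absolute frame difference."""
    return np.abs(curr - prev) > thresh

def gated_flow(prev, curr, mask, search=3, block=2):
    """Block-matching flow evaluated only where the mask flags motion.
    flow[y, x] holds the offset (dx, dy) in `prev` that the block around
    (x, y) in `curr` best matches; the apparent motion is -(dx, dy)."""
    h, w = prev.shape
    flow = np.zeros((h, w, 2))
    margin = block + search
    for y in range(margin, h - margin):
        for x in range(margin, w - margin):
            if not mask[y, x]:
                continue  # segmentation gates the (expensive) flow search
            ref = curr[y - block:y + block + 1, x - block:x + block + 1]
            best, best_d = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = prev[y + dy - block:y + dy + block + 1,
                                x + dx - block:x + dx + block + 1]
                    d = np.sum((ref - cand) ** 2)
                    if d < best_d:
                        best_d, best = d, (dx, dy)
            flow[y, x] = best
    return flow
```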
{"title":"Straight-Edge Extraction in Distorted Images Using Gradient Correction","authors":"M. Islam, L. Kitchen","doi":"10.1109/DICTA.2009.86","DOIUrl":"https://doi.org/10.1109/DICTA.2009.86","url":null,"abstract":"Many camera lenses, particularly low-cost or wide-angle lenses, can cause significant image distortion. This means that features extracted naively from such images will be incorrect. A traditional approach to dealing with this problem is to digitally rectify the image to correct the distortion, and then to apply computer vision processing to the corrected image. However, this is relatively expensive computationally, and can introduce additional interpolation errors. We propose instead to apply processing directly to the distorted image from the camera, modifying whatever algorithm is used to correct for the distortion during processing, without a separate rectification pass. In this paper we demonstrate the effectiveness of this approach using the particular classic problem of gradient-based extraction of straight edges. We propose a modification of the Burns line extractor that works on a distorted image by correcting the gradients on the fly using the chain rule, and correcting the pixel positions during the line-fitting stage. 
Experimental results on both real and synthetic images under varying distortion and noise show that our gradient-correction technique can obtain approximately a 50% reduction in computation time for straight-edge extraction, with a modest improvement in accuracy under most conditions.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116364324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
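The on-the-fly gradient correction can be made concrete for a one-parameter radial model (the model choice is illustrative; the chain-rule mechanics are the point). If the distortion maps undistorted coordinates to distorted ones, the gradient with respect to undistorted coordinates is the distortion Jacobian transposed times the gradient measured in the distorted image:

```python
import numpy as np

def radial_jacobian(xu, yu, k):
    """Jacobian of the one-parameter radial model
    (xd, yd) = (xu, yu) * (1 + k * r^2), with r^2 = xu^2 + yu^2."""
    r2 = xu * xu + yu * yu
    s = 1.0 + k * r2
    return np.array([[s + 2 * k * xu * xu, 2 * k * xu * yu],
                     [2 * k * xu * yu,     s + 2 * k * yu * yu]])

def corrected_gradient(grad_distorted, xu, yu, k):
    """Chain rule: gradient wrt undistorted coordinates equals
    J^T times the gradient measured in the distorted image."""
    return radial_jacobian(xu, yu, k).T @ np.asarray(grad_distorted, float)
```

At the image centre the Jacobian reduces to the identity, so gradients near the optical axis are barely altered, matching the intuition that radial distortion is mild there and grows towards the periphery.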