{"title":"A robust background subtraction algorithm for motion based video scene segmentation in embedded platforms","authors":"Muhammad Haris Khan, I. Kypraios, U. Khan","doi":"10.1145/1838002.1838037","DOIUrl":"https://doi.org/10.1145/1838002.1838037","url":null,"abstract":"Recent work on wavelets applied to images or a video sequence has been exploited for extracting robust illumination invariant features. The paper presents robust background subtraction algorithm to segment motion based video scene in embedded platforms. Every machine or computer vision algorithm to be useful should be able to separate the different background and foreground information (e.g. objects) in the given scene. Therefore, it is essential to the success of any real time algorithm, the scene segmentation invariant to lighting conditions. We designed two main algorithms; Six frames (6-Frames) and Time Interval with Memory (TIME) to segment the video scene robustly based on motion detection in embedded platforms. The former uses the first six frames and the latter samples the frames at regular intervals of time with memory to generate a background reference frame. Our algorithms used bandpass video scene filtering with wavelets for extracting illumination invariant scene features and then combine them efficiently into the background reference frame. Hardware efficient image stabilization capability was added to remove the unwanted motion due to camera movement. The algorithms were tested using three moving bee videos sequences; static background, moving shadow and destabilized. 
Performance of algorithms was evaluated on the basis of number of frames in which the moving target was detected for each video sequence.","PeriodicalId":434420,"journal":{"name":"International Conference on Frontiers of Information Technology","volume":"33 5-6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129439198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling left ventricle shape from 2D CT images using wavelets and mean regular hexagon","authors":"T. Ali, P. Akhtar, M. I. Bhatti","doi":"10.1145/1838002.1838090","DOIUrl":"https://doi.org/10.1145/1838002.1838090","url":null,"abstract":"We present modeling of left ventricle (LV) shape from 2D CT images, acquired from a 64-slice CT medical imaging modality, using wavelets, regular hexagonal boundary tracing, and simple statistical moments. Our proposed algorithm uses a regular hexagonal approximation model corresponding to interactive labeling of LV segmented texture using wavelet-based techniques. We generate LV shape model, as the mean regular hexagonal approximation. We demonstrate application and usefulness of our shape model for successful identification of end-diastolic and end-systolic stages of a normal human cardiac cycle.","PeriodicalId":434420,"journal":{"name":"International Conference on Frontiers of Information Technology","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128022118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the numerical approximation of drug diffusion in complex cell geometry","authors":"Q. A. Chaudhry, M. Hanke, R. Morgenstern","doi":"10.1145/1838002.1838021","DOIUrl":"https://doi.org/10.1145/1838002.1838021","url":null,"abstract":"The mathematical modeling of a mammalian cell is a very tedious work due to its very complex geometry. Especially, taking into account the spatial distribution and the inclusion of lipophilic toxic compounds greatly increases its complexity. The non-homogeneity and the different cellular architecture of the cell certainly affect the diffusion of these compounds. The complexity of the whole system can be reduced by a homogenization technique. To see the effect of these compounds on different cell architectures, we have implemented a mathematical model. The work has been done in 2-dimensional space. The simulation results have been qualitatively verified using compartmental modeling approach. This work can be extended with a more complex reaction-diffusion system and to 3-dimensional space as well.","PeriodicalId":434420,"journal":{"name":"International Conference on Frontiers of Information Technology","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130581051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Realizing dynamic behavior attestation for mobile platforms","authors":"Shahbaz Khan, Sanaullah Khan, M. Nauman, T. Ali, Masoom Alam","doi":"10.1145/1838002.1838008","DOIUrl":"https://doi.org/10.1145/1838002.1838008","url":null,"abstract":"Modern mobile devices serve as platforms that consume services from multiple service providers. It is vital for such an open cell phone environment to secure the information flows of the stakeholders on the platform. Recent emergence of trusted computing technologies provides a root of trust in hardware, which can be used to construct a chain of trust. This chain of trust can be used to remotely verify that the platform is capable to manage information flows in a trusted manner. This work highlights how trusted computing technologies can be complemented with existing Mandatory Access Control mechanisms to verify the runtime and dynamic behaviors of a platform by using a high level, managerial policy - hence enabling a trustworthy platform with dynamic behavior management.","PeriodicalId":434420,"journal":{"name":"International Conference on Frontiers of Information Technology","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131805361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lungs segmentation by developing binary mask","authors":"Saleem Iqbal, A. Dar","doi":"10.1145/1838002.1838088","DOIUrl":"https://doi.org/10.1145/1838002.1838088","url":null,"abstract":"Lungs Segmentation from chest CT slices is a precursor for CAD applications. Most of the lungs segmentation methods are scanner dependent. We propose a fully automated machine independent method for segmenting lungs from CT images. The algorithm comprised of three main steps. In the first step, gray level threshold value has been selected by maximizing within class similarity. In the second step, binary mask has been developed using selected gray level threshold value and improved by morphological operations. In the third step, lungs have been segmented utilizing binary mask and original CT slice images. The method has been tested on data set of 25 slices collected from two different sources. Results have been compared with manually delineated lungs on CT images by a radiologist. Mean overlapping fraction, precision, sensitivity/recall, specificity, accuracy and F-measure have been recorded as 0.9929, 0.9962, 0.9966, 0.9997, 0.9995 and 0.9964 respectively.","PeriodicalId":434420,"journal":{"name":"International Conference on Frontiers of Information Technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124394328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Functional unit level parallelism in RISC architecture","authors":"Ajmal Khan, Muhammad Saqib, Z. Kaleem","doi":"10.1145/1838002.1838083","DOIUrl":"https://doi.org/10.1145/1838002.1838083","url":null,"abstract":"This paper presents the design and implementation of RISC processor having five stages pipelined architecture. Functional unit parallelism is exploited through the implementation of pipelining in five stages of RISC processor. The hazards which come to life due to parallelism are data, structural, and control hazards. In order to achieve the true benefits of the parallelism through pipelining; these hazards must be properly handled. The data hazards are solved using bypassing in which we forward the required value of the operand to the succeeding instruction. Structural hazards are solved by implementing three port register file so that two operand reading and one register writing can be performed in parallel without degrading the performance. Control hazards arise from Branch, Jump and Call instructions. To solve these problems, we insert automated NOP in stage2, stage3 and stage4. The processor designed is a fully functional processor which can execute any program including jump statements, switch statements, loops and subroutines which are the basic ingredients of any computer program.","PeriodicalId":434420,"journal":{"name":"International Conference on Frontiers of Information Technology","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115139602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of process improvement on software development predictions, for measuring software development project's performance benefits","authors":"Syeda Umema Hani","doi":"10.1145/1838002.1838064","DOIUrl":"https://doi.org/10.1145/1838002.1838064","url":null,"abstract":"This Paper describes PhD software engineering work on \"Impact of Process Improvement on Software Development Predictions, for Measuring Software Development Project's Performance Benefits\". This research is being conducted in order to develop a model that could predict impact of CMMI Process Improvement on Performance Classes- Productivity Parameters such as (Effort, Cost, Schedule and Productivity) and to give mechanism/framework for generating reports on the perceived performance benefits of Software Performance classes. To assist software managers in performing cost benefits analysis for CMMI initiatives.","PeriodicalId":434420,"journal":{"name":"International Conference on Frontiers of Information Technology","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128035610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced text steganography by changing word's spelling","authors":"K. F. Rafat","doi":"10.1145/1838002.1838082","DOIUrl":"https://doi.org/10.1145/1838002.1838082","url":null,"abstract":"Digitization of electronic signals has brought a stir in the telecommunication field. From copper wire to fiber optics, technology has gone through a mammoth change rendering a variety of communication media available for use by people, at the cheapest of rates possible, to communicate in a nearly perfect error-free environment. Consumers can now freely exchange or distribute digital contents at any time and anywhere in the world. The situation where a perfect replica of digital content can easily be made has raised serious security concerns prompting for the protection of rights of intellectual ownership and apprehension of unauthorized tampering of digital information for mala fide usage. On the other hand, the way multimedia contents are being tempered with has also made it an important subject of technological research.\u0000 This paper suggests an enhanced method for secretly exchanging real time or off line information by using differently spelled words in British and American Language. The concept has been implemented via software applications developed in Visual Basic 6 programming language.","PeriodicalId":434420,"journal":{"name":"International Conference on Frontiers of Information Technology","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121779352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An ontology based approach to automating data integration in scientific workflows","authors":"M. Rehman, S. Jablonski, Bernhard Volz","doi":"10.1145/1838002.1838052","DOIUrl":"https://doi.org/10.1145/1838002.1838052","url":null,"abstract":"Due to the proliferation of data generating devices such as sensors in scientific applications, data integration has become most challenging task since the data stemming from these devices are extremely heterogeneous in terms of structure (schema) and semantics (interpretation). In practice, integration and transformation is typically performed by the scientists manually; in fact extensive efforts are required. The approaches for automating data integration task as much as possible are badly needed. DaltOn is a generic framework that offers various functionalities for managing the data in scientific applications. In this paper, we present DaltOn's functionality for automating data integration task based on exploitation of ontologies. In addition, we also elaborate the specific module of our framework which is responsible for implementing the functionality. At last, we also present core algorithms that demonstrate a good evaluation of our approach.","PeriodicalId":434420,"journal":{"name":"International Conference on Frontiers of Information Technology","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130210427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparative analysis of LDPC decoders for image transmission over AWGN channel","authors":"S. Qazi, M. Shoaib, U. Javaid, Shahzad Asif","doi":"10.1145/1838002.1838007","DOIUrl":"https://doi.org/10.1145/1838002.1838007","url":null,"abstract":"In this paper we studied the performance of Low Density Parity Check (LDPC) codes over Additive White Gaussian Noise (AWGN) channel. Different images are transmitted for two different LDPC decoders. Parity check matrices are used for encoding and decoding processes. The basic technique used to decode the message is message passing algorithm. We have used the image of 256x256 for our experiments. Minimum error rate is achieved at a code rate of 0.5 and row weight of 4. Different images are recovered at 10 dB SNR with a probability of error close to Shannon limit.","PeriodicalId":434420,"journal":{"name":"International Conference on Frontiers of Information Technology","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116161259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}