{"title":"Adaptive Gray Level Difference to Speed Up Fractal Image Compression","authors":"V. R. Prasad, Vaddella, R. Babu, Inampudi","doi":"10.1109/ICSCN.2007.350741","DOIUrl":"https://doi.org/10.1109/ICSCN.2007.350741","url":null,"abstract":"Fractal image compression is a lossy compression technique that was developed in the early 1990s. It exploits the local self-similarity present in an image and finds a contractive affine mapping (fractal transform) T such that the fixed point of T is close to the given image in a suitable metric. It has generated much interest due to its promise of high compression ratios with good decompression quality. Another advantage is its multiresolution property: an image can be decoded at higher or lower resolutions than the original without much degradation in quality. However, the encoding is computationally intensive. In this paper, a new method to reduce the encoding time, based on computing the gray-level difference of the domain and range blocks, is presented. A comparison for the best match is performed between the domain and range blocks only if the range block's gray-level difference is less than that of the domain block. This reduces the number of comparisons, and thereby the encoding time, considerably, while obtaining good fidelity and compression ratio for the decoded image. Experimental results on standard grayscale images (512×512, 8-bit) show that the proposed method yields superior performance over conventional fractal encoding","PeriodicalId":257948,"journal":{"name":"2007 International Conference on Signal Processing, Communications and Networking","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115902107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Efficient Location Tracking Algorithm for MANET using Directional Antennas","authors":"K. Kathiravan, S. Thamarai Selvi","doi":"10.1109/ICSCN.2007.350718","DOIUrl":"https://doi.org/10.1109/ICSCN.2007.350718","url":null,"abstract":"In order to implement an effective directional medium access control (DMAC) protocol and routing protocol in mobile ad hoc networks (MANET), each node in the network should know how to set its transmission direction to transmit a packet to its neighbors. It therefore becomes imperative to have a mechanism at each node to track the locations of its neighbors under mobility conditions. In this paper, an efficient location tracking algorithm for MANET using directional antennas with fixed beams is proposed to maintain communication between two communicating nodes. The proposed algorithm works in conjunction with the DMAC protocol. The transmitting node monitors the received power level of the neighboring node's acknowledgement packet transmissions. If the received power falls below a threshold, the node indicates the need to switch the antenna element to its data link layer. The DMAC protocol in the data link layer switches the transmitting antenna either clockwise or anticlockwise from the active antenna and checks the induced power. When a neighboring antenna picks up the signal with sufficient strength, handoff takes place from the active antenna to that antenna. We have shown through simulations that the mechanism is suitable in highly mobile scenarios. Without location tracking, throughput falls steadily as nodes lose their connections; with location tracking implemented alongside DMAC, throughput remains constant or dips only slightly at the switching time interval","PeriodicalId":257948,"journal":{"name":"2007 International Conference on Signal Processing, Communications and Networking","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116014537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Fast Convergence Adaptive Algorithm","authors":"P. Palanisamy, N. Kalyanasundaram","doi":"10.1109/ICSCN.2007.350719","DOIUrl":"https://doi.org/10.1109/ICSCN.2007.350719","url":null,"abstract":"In this paper, a new fast-convergence adaptive algorithm with variable step size is proposed for FIR adaptive filters. The proposed algorithm is derived from the quasi-Newton family. Simulation results are presented to compare the convergence of the proposed algorithm with the least mean square (LMS) and RLS algorithms. They show that the proposed algorithm has convergence speed comparable to the other known adaptive algorithms","PeriodicalId":257948,"journal":{"name":"2007 International Conference on Signal Processing, Communications and Networking","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117316952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Service Time Error Based Scheduling Algorithm for a Computational Grid","authors":"D. Lopez, Rasika Chakravarthy","doi":"10.1109/ICSCN.2007.350663","DOIUrl":"https://doi.org/10.1109/ICSCN.2007.350663","url":null,"abstract":"Grids enhance computation speed and data storage. Scheduling algorithms at the operating system level do not consider the fairness factor. We propose that a fairness algorithm be used for scheduling, and we also propose an algorithm for effective scheduling of jobs by the local scheduler. Algorithms such as weighted round robin, weighted fair queuing, or virtual-time round robin are well suited to achieving proportional fairness. The algorithm we have developed is based on the service time error, and it maintains good accuracy with low overhead","PeriodicalId":257948,"journal":{"name":"2007 International Conference on Signal Processing, Communications and Networking","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114211611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FPGA Implementation of Parallel Pipelined Multiplier Less FFT Architecture Based System-On-Chip Design Targetting Multimedia Applications","authors":"B. Sreejaa, T. Jayanthy, E. Logashanmugam","doi":"10.1109/ICSCN.2007.350677","DOIUrl":"https://doi.org/10.1109/ICSCN.2007.350677","url":null,"abstract":"This paper proposes a novel SoC design based on a parallel-pipelined, multiplier-less FFT architecture targeting multimedia applications. The proposed architecture has the advantages of lower complexity, higher speed, high throughput, low cost, and high power efficiency. This demands system-level design methodologies spanning the behavioral level to the fabrication level, including software/hardware co-design, use of intellectual properties, reusability from the netlist, and co-verification. The architecture is suitable for both video processing and audio processing, including video compression. This paper deals with various dimensions of the design and implementation of an SoC using the reuse concept","PeriodicalId":257948,"journal":{"name":"2007 International Conference on Signal Processing, Communications and Networking","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114654953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modified Conservative Staircase Scheme for Video Services","authors":"H. Om, S. Chand","doi":"10.1109/ICSCN.2007.350681","DOIUrl":"https://doi.org/10.1109/ICSCN.2007.350681","url":null,"abstract":"The staircase scheme is one of the important schemes as regards buffer storage and disk transfer rate. However, it does not always deliver the video data to the users in time. This drawback has been removed in the conservative staircase broadcasting scheme, in which the video segments are downloaded and stored at the client's site in their entirety before they are required for viewing. It does eliminate the late delivery of video data to the users, but requires more resources. In this paper, a modified conservative staircase scheme is proposed, which requires fewer resources and does not suffer from the problem of late delivery of video data. We measure the performance parameters in terms of their maximum values, i.e., for the worst-case scenario, whereas in the conservative staircase scheme they have been discussed with respect to their average values, in which case the user services may not always be provided. Furthermore, the proposed scheme performs better than the conservative staircase scheme","PeriodicalId":257948,"journal":{"name":"2007 International Conference on Signal Processing, Communications and Networking","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114716384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hash Mapping Strategy for Improving Retrieval Effectiveness in Semantic Cache System","authors":"M. Sumalatha, V. Vaidehi, A. Kannan, M. Rajasekar, M. Karthigaiselvan","doi":"10.1109/ICSCN.2007.350737","DOIUrl":"https://doi.org/10.1109/ICSCN.2007.350737","url":null,"abstract":"The emergence of Web applications has spurred much recent research on data caching. In this work, we propose a new strategy, a dynamic hash mapping technique, that enables fast semantic information retrieval from the cache. This speeds up information retrieval in semantic caching, a rule-based method of data caching. Semantic caching technology can help improve the efficiency of XML query processing in the Web environment. Unlike traditional tuple- or page-based caching systems, semantic caching systems exploit the idea of reusing cached query results to answer new queries based on query containment and rewriting techniques. The primary contribution of this article is to revisit the performance of semantic caching with this new mapping technique, which increases the efficiency of information retrieval in the semantic caching system. Our proposed method gives enhanced performance through reduced network traffic and retrieves the exact information from a large database, which is crucial in a range of applications, especially in network-constrained environments","PeriodicalId":257948,"journal":{"name":"2007 International Conference on Signal Processing, Communications and Networking","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126070856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance Evaluation of One Dimensional Systolic Array for FFT Processor","authors":"A. Nandi, S. Patil","doi":"10.1109/ICSCN.2007.350724","DOIUrl":"https://doi.org/10.1109/ICSCN.2007.350724","url":null,"abstract":"A new approach for the systolic implementation of FFT algorithms is presented. The proposed approach is based on the principle that a one-dimensional DFT can be decomposed efficiently with fewer twiddle values, so that the computational burden of the multipliers is reduced considerably and the FFT can be computed efficiently with a 1-D systolic array; the essence of the 1-D systolic array is efficient computation with fewer twiddles. The proposed systolic array does not require any preloading of input data, and it produces output data at the boundary PEs. No networks for intermediate spectrum transposition between the constituent one-dimensional transforms are required; therefore, the entire processing is fully pipelined. This approach also has significant advantages over existing architectures in reduced complexity, using a Wallace tree adder and a Booth multiplier","PeriodicalId":257948,"journal":{"name":"2007 International Conference on Signal Processing, Communications and Networking","volume":"271 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124391525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On-Board Verification of FPGA Based Digital Systems using NIOS Processor (A Methodology Without Hook-Ups and I/O Cards)","authors":"G. Lakshminarayanan, T. Prabakar","doi":"10.1109/ICSCN.2007.350678","DOIUrl":"https://doi.org/10.1109/ICSCN.2007.350678","url":null,"abstract":"A novel methodology for testing any digital system fused onto an FPGA has been developed in this paper. This methodology does not require any hook-ups or input/output (I/O) interfacing cards. It uses the NIOS processor core to configure the system onto the FPGA: HDL code of the digital system, along with the NIOS core, is downloaded onto the FPGA. The NIOS processor can be programmed to supply all possible combinations of test vectors to the digital system and read back the results it generates. The results are compared with the expected results on the NIOS processor and the errors displayed. After studying the errors, the HDL code is tuned and the process is repeated until no errors remain. Once the process is completed, either the HDL code of the tested digital system or a macro of it can be treated as a physically proven digital system. The advantage of this methodology is that high-throughput digital systems can be verified, provided the NIOS II frequency is adequate to supply the test vectors. Moreover, the time required to verify the digital system with all possible combinations of test vectors is greatly reduced","PeriodicalId":257948,"journal":{"name":"2007 International Conference on Signal Processing, Communications and Networking","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131619784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artifacts Removal in EEG Signal using Adaptive Neuro Fuzzy Inference System","authors":"C. Kezi Selva Vijilal, P. Kanagasabapathy, Stanly Johnson Jeyaraj, V. Ewards","doi":"10.1109/ICSCN.2007.350676","DOIUrl":"https://doi.org/10.1109/ICSCN.2007.350676","url":null,"abstract":"In this paper, we propose a hybrid soft computing technique called adaptive neuro-fuzzy inference system (ANFIS) to estimate the interference and to separate the electroencephalogram (EEG) signal from its electrooculogram (EOG), electrocardiogram (ECG) and electromyogram (EMG) artifacts. This paper shows that the proposed method successfully removes the artifacts and extracts the desired EEG signal","PeriodicalId":257948,"journal":{"name":"2007 International Conference on Signal Processing, Communications and Networking","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128347016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}