{"title":"An approach to improve test path generation: Inclination towards automated model-based software design and testing","authors":"P. Kaur, A. K. Luhach","doi":"10.1109/ICRITO.2016.7784944","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784944","url":null,"abstract":"Model-based testing in software engineering is gaining widespread importance due to the faster, automatic generation of test suites for validating software systems. Tests are generated from the analysis- and design-phase artefacts of requirements and specifications. Due to rapid growth in the software industry, there is an urgent need to develop full-fledged, fully tested, and defect-free systems as early as possible. Testing using models is a novel approach utilizing the key concepts of black-box testing. Since a substantial part of the testing process relies on the appropriateness and completeness of the model used, an approach for designing/modelling a system under test (SUT) with greater accuracy is discussed. It can then be combined with traditional methods of model-based testing to fill the loopholes and gaps that emerge during design and testing. The approach describes the functionality of the system more clearly, allowing the front end as well as the structure of the system to be viewed together. The approach promises to improve testing and the quality of test cases.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128108509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data mining approach to analyse the road accidents in India","authors":"Ayushi Jain, Garima Ahuja, Anuranjana, D. Mehrotra","doi":"10.1109/ICRITO.2016.7784948","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784948","url":null,"abstract":"Despite all that has been done to promote road safety in India so far, there are always regions that fall prey to the vulnerabilities lingering in every corner. The heterogeneity of these vulnerability-inducing causes calls for an effective analysis so that the alarming figures can be reduced significantly. The objective of this paper is to apply data mining to create a model that not only smooths out the heterogeneity of the data, by grouping similar objects together to find the accident-prone areas in the country with respect to different accident factors, but also helps determine the association between these factors and casualties.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117293995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probabilistic reliability of pipelines with surface cracks","authors":"S. Glushkov, Y. Skvortsov, S. Perov","doi":"10.1109/ICRITO.2016.7784942","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784942","url":null,"abstract":"A procedure is proposed for evaluating the reliability of pipelines containing crack-like defects in a stochastic setting, taking into account the spatio-temporal variation of the loading parameters as well as the scatter in the crack-resistance characteristics of the structural material. It is based on interpolation polynomials and Monte Carlo methods designed for solving statistical dynamics problems. The procedure is implemented in software as a Windows application.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114317860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rating based mechanism to contrast abnormal posts on movies reviews using MapReduce paradigm","authors":"P. Gupta, Atul Sharma, J. Grover","doi":"10.1109/ICRITO.2016.7784962","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784962","url":null,"abstract":"Big data contains large amounts of unstructured data in the form of movie data, Facebook data, industry data, and so on. Numerous posts about movies are made on Twitter by different users, and some of these posts may be inappropriate. The posts contain both negative and positive comments about movies, and it is difficult to distinguish between large numbers of positive and negative posts. To overcome this problem, we propose a rating-based mechanism that distinguishes abnormal posts with the help of user ratings: if the rating is positive, the post is normal; otherwise, it is abnormal. The proposed mechanism is implemented on the Hadoop platform using the MapReduce paradigm.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127417429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel clustering technique for short texts","authors":"Neetu Singh, Narendra S. Chaudhari","doi":"10.1109/ICRITO.2016.7784956","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784956","url":null,"abstract":"We describe a novel technique for clustering short texts, such as URLs, without enriching them with external knowledge sources. Our technique first performs feature clustering to identify the key features of the dataset and then reconstructs the dataset on the basis of those key features. It then computes the similarity of the short texts in the reconstructed dataset using similarity measures such as the Jaccard, Cosine, and Dice measures. Finally, it clusters the short texts using Spectral Clustering. We compare our method with the conventional Spectral Clustering method, which runs directly on the original short text dataset. We performed experiments on a subset of the ODP dataset as well as the WebKB dataset. The empirical results demonstrate an improvement in accuracy over the Spectral Clustering method of 21% on the ODP dataset and 29.2% on the WebKB dataset.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126889686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling and analysis of reliability and optimal release policy of software with testing domain coverage efficiency","authors":"S. Chatterjee, Bhagyashree Chaudhuri, C. Bhar, Ankur Shukla","doi":"10.1109/ICRITO.2016.7784932","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784932","url":null,"abstract":"To verify whether functions have been correctly implemented in software, the developer develops a set of test cases that exercise the functions and modules present in the software, so that defects in the implemented functions can be judged. The set of functions these test cases exercise is commonly known as the testing domain, and the rate at which the testing domain grows depends on the progress of the testing process. The growth of the testing domain goes hand in hand with the fault content of the software: as the testing domain spreads, more of the existing faults are detected and removed, causing the software fault content to decrease. Further, the software developer must determine the appropriate time to release the software to the market such that cost is minimized and reliability is maximized. To counter the growing software fault content, this paper presents a testing-domain-dependent software reliability growth model (SRGM) incorporating the ideas of testing domain coverage efficiency as well as fault removal efficiency. The model is based on a Non-Homogeneous Poisson Process (NHPP), and its validity has been verified against fault data observed in earlier actual software development processes. Along with an illustration of the optimal software release time, the model has been subjected to a goodness-of-fit comparison, showing that it outperforms some existing models in its ability to measure reliability.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130243033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stall estimation metric: An architectural metric for estimating software complexity","authors":"Amit R. Pandey","doi":"10.1109/ICRITO.2016.7784987","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784987","url":null,"abstract":"Software metrics can be classified into two categories: code metrics and architectural metrics [1,2]. Code metrics analyse data structures and algorithms to determine the complexity of a program [3,4,5], whereas architectural metrics consider how the system processes data within its components, and any dependencies between the processed data, to estimate complexity [3,6]. In a pipelined RISC processor, a program is executed instruction after instruction, and program dependencies may exist between instructions [7]. These dependencies are of two types: data dependencies and control dependencies [8,9,10,11]. Data dependencies can be resolved by forwarding data between stages of the pipelined RISC processor; stalls can be induced between instructions while resolving some of these data dependencies, and stalls can also be induced during branch prediction. The proposed architectural metric considers all the cases that affect the overall execution of the program by causing stalls, together with the count of statements actually executed, to estimate overall software complexity.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129626520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data integration based approach to find shortest path within a city for different time periods","authors":"Sudipta Kanjilal, S. Sen","doi":"10.1109/ICRITO.2016.7784957","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784957","url":null,"abstract":"Traditional data mining usually deals with data from a single domain; in modern business applications, however, the data for a single application may come from several domains. Such datasets are multimodal: each may have a different representation, distribution, scale, and density. Data integration techniques aggregate datasets from several domains and represent them in a unified form so that they can be used for data mining. In this research work, data integration is applied to data collected from multiple modes of transport within a city; these data are integrated and organized to infer more information, namely the generation of a greater number of routes in the city by combining multiple modes of transport. Further, this research work proposes an optimization for searching the path that is shortest in terms of time. The proposed methodology is executed on historical data for different times of day as well as different days of the week, so that the relevant traffic conditions and constraints are included.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122676947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nature inspired route optimization in vehicular adhoc network","authors":"Mayank V. Bhatt, Shabnam Sharma, A. K. Luhach, Aditya Prakash","doi":"10.1109/ICRITO.2016.7784997","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784997","url":null,"abstract":"Recent advancements in the field of wireless technologies have led to the emergence of vehicular ad hoc networks (VANETs). A VANET consists of vehicles and road-side units as its components, which communicate with each other to share information, mainly related to traffic conditions. In such networks, routing, secure transmission of control information and user messages, avoidance of traffic collisions, and frequent changes of topology are the main issues. Offering an efficient algorithm for avoiding traffic collisions is therefore crucial to the deployment of vehicular ad hoc networks. This work deals with finding an optimized route to the destination while avoiding traffic collisions, using a metaheuristic optimization approach, namely the Bat Algorithm. The proposed work has three modules: prediction of the destination location, formation of a region (by excluding invalid nodes), and finally selection of the optimized route. The work can be applied in areas where the purpose is to track the position of objects or nodes. Finally, the results are compared with the standard Bat Algorithm on the basis of the number of iterations, the number of nodes, and the total travelling time to reach the destination.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124281950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved invisible watermarking technique using IWT-DCT","authors":"S. Agrwal, A. Yadav, Umesh Kumar, S. Gupta","doi":"10.1109/ICRITO.2016.7784966","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784966","url":null,"abstract":"Invisible image watermarking is a secret embedding scheme that hides a secret image in a cover image file; its purpose is copyright protection. Image watermarking poses research challenges in increasing robustness against visual attacks and statistical attacks. Wavelet-transform-based image watermarking techniques provide better robustness against statistical attacks than Discrete Cosine Transform (DCT) domain and spatial-domain image watermarking. The combined DWT and DCT technique provides the advantages of both. However, the DWT suffers from fractional loss in embedding, which increases the mean square error and thus decreases the PSNR, and the robustness of a watermarking technique is proportional to its PSNR. The proposed algorithm presents a hybrid Integer Wavelet Transform (IWT) and DCT based watermarking technique that achieves greater imperceptibility and robustness than the DWT+DCT based watermarking technique. The proposed combined IWT+DCT watermarking technique reduces the fractional loss compared to DWT-based watermarking.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117089363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}