{"title":"ByteFreq: Malware clustering using byte frequency","authors":"Nirmal Singh, S. S. Khurmi","doi":"10.1109/ICRITO.2016.7784976","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784976","url":null,"abstract":"Increased number of malware samples have created many challenges for Antivirus companies. One of these challenges is clustering the large number of malware samples they receive daily. Malware authors use malware generation kits to create different instances of the same malware. So most of these malicious samples are polymorphic instances of previously known malware family only. Clustering these large number of samples rapidly and accurately without spending much time on processing the sample have become a critical requirement. In this paper we proposed, implemented and evaluated a method, called ByteFreq that can cluster large number of samples using byte frequency. Byte frequency is represented as time series and SAX (Symbolic Aggregation approXimation)[1] is used to convert the time series in symbolic representation. We evaluated proposed system on real world malware samples and achieved 0.92 precision and 0.96 recall accuracy.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117220000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Case based organizational memory for processing architecture based on measurement metadata","authors":"Maria de los Ángeles Martín, M. Diván","doi":"10.1109/ICRITO.2016.7784954","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784954","url":null,"abstract":"With the aim to manage and retrieve the organizational knowledge, in the last years numerous proposals of models and tools for knowledge management and knowledge representation have arisen. However, most of them store knowledge in a non-structured or semi-structured way, hindering the semantic and automatic processing of this knowledge. In this paper we present a summary of an case-based organizational memory ontology, which aims at contributing to the design of an organizational memory based on cases, so that it can be used to learn, reasoning, solve problems, and as support to better decision making as well. The objective of this Organizational Memory is to serve as base for the organizational knowledge exchange in a processing architecture specialized in the measurement and evaluation. One key aspect associated with the measurement process is that the measures must be consistent and comparable in any moment for making decisions properly. In this way, the processing architecture is based on the C-INCAMI framework (Context-Information Need, Concept model, Attribute, Metric and Indicator) to define the measurement projects. Additionally, the proposal architecture uses a big data repository to make available the data for consumption and to manage the Organizational Memory, which allows a feedback mechanism in relation with online processing. The relation between the data stream processing, the big data repository and the Organizational Memory will be shown. In order to illustrate its utility a practical case associated with the weather radar (WR) of the Experimental Agricultural Station (EAS) INTA Anguil (La Pampa State, Argentina) is shown. Also future trends and concluding remarks are outlined.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134087100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Security issues in Wireless Sensor Network — A review","authors":"J. Grover, Shikha Sharma","doi":"10.1109/ICRITO.2016.7784988","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784988","url":null,"abstract":"Wireless Sensor Networks (WSNs) are formed by deploying as large number of sensor nodes in an area for the surveillance of generally remote locations. A typical sensor node is made up of different components to perform the task of sensing, processing and transmitting data. WSNs are used for many applications in diverse forms from indoor deployment to outdoor deployment. The basic requirement of every application is to use the secured network. Providing security to the sensor network is a very challenging issue along with saving its energy. Many security threats may affect the functioning of these networks. WSNs must be secured to keep an attacker from hindering the delivery of sensor information and from forging sensor information as these networks are build for remote surveillance and unauthorized changes in the sensed data may lead to wrong information to the decision makers. This paper studies the various security issues and security threats in WSNs. Also, gives brief description of some of the protocols used to achieve security in the network. This paper also compares the proposed methodologies analytically and demonstrates the findings in a table. These findings can be used further by other researchers or Network implementers for making the WSN secure by choosing the best security mechanism.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132429245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using sentimental analysis in prediction of stock market investment","authors":"S. Khatri, Ayush Srivastava","doi":"10.1109/ICRITO.2016.7785019","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7785019","url":null,"abstract":"Sentimental Analysis is one of the most popular technique which is widely been used in every industry. Extraction of sentiments from user's comments is used in detecting the user view for a particular company. Sentimental Analysis can help in predicting the mood of people which affects the stock prices and thus can help in prediction of actual prices. In this paper sentimental analysis is performed on the data extracted from Twitter and Stock Twits. The data is analyzed to compute the mood of user's comment. These comments are categorized into four category which are happy, up, down and rejected. The polarity index along with market data is supplied to an artificial neural network to predict the results.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131964568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of text mining tools","authors":"Arvinder Kaur, Deepti Chopra","doi":"10.1109/ICRITO.2016.7784950","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784950","url":null,"abstract":"Most of the data is in the form of text these days. While databases store only structured data, most of the data is unstructured like text documents, web pages, emails etc. Text mining is what is required if useful information needs to be extracted from tons of text. But where to begin, what are the popular tools, which techniques are used, what are the features. Beginning is always the toughest, so this paper tries to explore the tools available for text mining to help new researchers and practitioners in the field of text mining.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125075531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metaheuristic based workflow scheduling in cloud environment","authors":"Sunil Kumar, S. Mittal, Manpreet Singh","doi":"10.1109/ICRITO.2016.7785017","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7785017","url":null,"abstract":"Workflow scheduling deals with the mapping of interdependent and compute intensives tasks to the system resources considering all application's requirements. Due to its elastic capabilities, the cloud has been instrumental in effective scheduling of workflow activities. This paper presents a genetic algorithm based metaheuristics to schedule workflow applications on cloud resources with an objective to improve both the makespan and resource utilization. The performance of proposed algorithm is tested for different workflow applications (Montage, Fork-Join, Epigenome) under various load conditions in a scalable environment.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121133866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Use case point estimation technique in software development","authors":"S. Khatri, S. Malhotra, P. Johri","doi":"10.1109/ICRITO.2016.7784938","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784938","url":null,"abstract":"Software Projects are developed with the prior requirements and should be capable to complete on time under a fixed budget but it gets late to delivered, gets over-budget and even not able to meet user expectations. In agile approach, the estimation of software depends on expert opinion or on any historical data which is used as the input to previous methods like planning poker. The accuracy in estimation is the primary goal of any development but various factors related to environment and technical complexity which may further alleviate the size and effort of a project. Previously proposed estimation models were successful in estimation but lacks due to some obstacles such as less accuracy and customer satisfaction as per the requirement, other factors such as complexity, risk tracking and estimation. This paper emphasizes on a new algorithmic approach to estimate considering Environment and Technical factors so as to have a more accuracy with the use cases under agile development.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123176698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved approach for software defect prediction using artificial neural networks","authors":"Tanvi Sethi, Gagandeep","doi":"10.1109/ICRITO.2016.7785003","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7785003","url":null,"abstract":"Software defect prediction (SDP) is a most dynamic research area in software engineering. SDP is a process used to predict the deformities in the software. To identifying the defects before the arrival of item or aimed the software improvement, to make software dependable, defect prediction model is utilized. It is always desirable to predict the defects at early stages of life cycle. Hence to predict the defects before testing the SDP is done at end of each phase of SDLC. It helps to reduce the cost as well as time. To produce high quality software, the artificial neural network approach is applied to predict the defect. Nine metrics are applied to the multiple phases of SDLC and twenty genuine software projects are used. The software project data were collected from a team of organization and their responses were recorded in linguistic terms. For assessment of model the mean magnitude of relative error (MMRE) and balanced mean magnitude of relative error (BMMRE) measures are used. In this research work, the implementation of neural network based software defect prediction is compared with the results of fuzzy logic basic approach. In the proposed approach, it is found that the neural network based training model is providing better and effective results on multiple parameters.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115290786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Selection of distributed generation system using multicriteriadecision making fuzzy TOPSIS optimization","authors":"V. S. Galgali, G. Vaidya, M. Ramachandran","doi":"10.1109/ICRITO.2016.7784931","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784931","url":null,"abstract":"Distributed Generations (DG) are modular power generating technologies that are located near the load centers. DGs help avoid the expensive long distance transmission of power which is expensive and lossful while providing certain relief to the central power grid. They can be reliable and environmental friendly. DGs can prove beneficial for the local economies, enhance the energy independence, provides lowcost electricity at the same time providing access to liberalized generation markets for the concerned distribution utilities. There is wide range of DG options available. When presented with wide range of DG options selecting the correct one can prove to be daunting task. In this paper Multi-Criteria Decision Making (MCDM) technique which is increasingly popular is used for ranking DG systems. The DG systems considered in this paper are reciprocating engines (RE), micro turbine (MT), fuel cell (FC), solar PV (PV) and wind turbine (WT) while the criteria used for ranking are cost, minimum starting time, noise, emission level and continuity. The analysis is done with the aid of MCDM technique employing Fuzzy TOPSIS while weight values of the criteria are determined using the Interval Shannon's Entropy methodology. As per results obtained the first criterion in preference ranking of DG systems is PV, followed by FC, RE, MT and WT.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122904541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model of database design in the conditions of limited resources","authors":"Y. Klochkov, E. Klochkova, O. Antipova, E. Kiyatkina, I. Vasilieva, E. Knyazkina","doi":"10.1109/ICRITO.2016.7784927","DOIUrl":"https://doi.org/10.1109/ICRITO.2016.7784927","url":null,"abstract":"Modern databases are crucial components of quality management systems according to the requirements of ISO 9001:2015, but their design and maintenance need significant resources. Despite the fact that modern databases are a key factor of management decision making, not all organizations can spend sufficient resources on their development what results in the necessity to find a model for database maintenance in the conditions of limited resources.","PeriodicalId":377611,"journal":{"name":"2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125604883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}