{"title":"Toward development of a tool supporting a 2-layer divide & conquer approach to leads-to model checking","authors":"Yati Phyo, Canh Minh Do, K. Ogata","doi":"10.1109/AITC.2019.8920978","DOIUrl":"https://doi.org/10.1109/AITC.2019.8920978","url":null,"abstract":"A 2-layer divide & conquer approach to leads-to model checking is one possible way to mitigate the state explosion in model checking by splitting the state space into two layers. It is necessary to collect all states located at some specific depth k to implement the approach. We describe a meta-program in Maude that takes a systems specification M and a natural number k, updates M such that its updated version M′ maintains depth information and collects all states located at depth k in M with M′.","PeriodicalId":388642,"journal":{"name":"2019 International Conference on Advanced Information Technologies (ICAIT)","volume":"283 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123270743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software Defect Prediction using Hybrid Approach","authors":"Myo Thant, Nyein Thwet Thwet Aung","doi":"10.1109/AITC.2019.8921374","DOIUrl":"https://doi.org/10.1109/AITC.2019.8921374","url":null,"abstract":"Defective software modules have significant impact over software quality leading to system crashes and software running error. Thus, Software Defect Prediction (SDP) mechanisms become essential part to enhance quality assurance activities, to allocate effort and resources more efficiently. Various machine learning approaches have been proposed to remove fault and unnecessary data. However, the imbalance distribution of software defects still remains as challenging task and leads to loss accuracy for most SDP methods. To overcome it, this paper proposed a hybrid method, which combine Support Vector Machine (SVM)-Radial Basis Function (RBF) as base learner for Adaptive Boost, with the use of Minimum-Redundancy-Maximum-Relevance (MRMR) feature selection. Then, the comparative analysis applied based on 5 datasets from NASA Metrics Data Program. The experimental results showed that hybrid approach with MRMR give better accuracy compared to SVM single learner, which is effective to deal with the imbalance datasets because the proposed method have good generalization and better performance measures.","PeriodicalId":388642,"journal":{"name":"2019 International Conference on Advanced Information Technologies (ICAIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129589602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimization of Four Wave Mixing Effect in Long-Haul DWDM Optical Fiber Communication System","authors":"War War Moe Myint Han, Yadanar Win, Nay Win Zaw, M. Htwe","doi":"10.1109/AITC.2019.8920922","DOIUrl":"https://doi.org/10.1109/AITC.2019.8920922","url":null,"abstract":"Four Wave Mixing (FWM) of nonlinear effect is a major damage and degradation of the signal quality in the DWDM system. In this research paper, the proposed DWDM system is 70 km fiber length at 10 Gbps bit rate with 3 channels and channel spacing is 100 GHz. By using increasing dispersion coefficient techniques and dispersion compensated fiber mitigate the impact of FWM. The performance of the system evaluated on Bit Error Rate (BER) and Quality factor (Q-factor) which are simulated on commercial Optisystem (version-16) software. The results showed the decreasing of BER value, which got the superior performances and enhanced signal quality with more bandwidth in long-haul optical fiber communication.","PeriodicalId":388642,"journal":{"name":"2019 International Conference on Advanced Information Technologies (ICAIT)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122954168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced Object Representation on Moving Objects Classification","authors":"Tin-Tin Yu, Z. Win","doi":"10.1109/AITC.2019.8920974","DOIUrl":"https://doi.org/10.1109/AITC.2019.8920974","url":null,"abstract":"A feature representation approach is proposed for discriminative features extraction and this object representation tend to handle the large amount of local features in feature correspondence. Object representation with shape and color feature tends to certify the strength of proposed feature extraction method. In the proposed method, HOG are extracted on 300 corner points which are the strongest points on detected corners and these points are supposed as in one block to get the HOG vector. As a second portion of feature extraction, the moments on HSI are extracted on each separated channel. The proposed feature extraction method is tested intensively on the different sequences of the Online Benchmark Tracking dataset, CAVIAR Test Case Scenarios and Change Detection dataset (CDnet 2014) with the comparison of other related feature extraction methods. Classification of proposed approach receives 98.1%, 93.8%, 96.8%, 97.7% and 90.5% for walking, crossing, walk1, pedestrians and twopositionPTZCam respectively.","PeriodicalId":388642,"journal":{"name":"2019 International Conference on Advanced Information Technologies (ICAIT)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128063559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Policy-based Revolutionary Ciphertext-policy Attributes-based Encryption","authors":"Phyo Wah Wah Myint, Swe Zin Hlaing, Ei Chaw Htoon","doi":"10.1109/AITC.2019.8920924","DOIUrl":"https://doi.org/10.1109/AITC.2019.8920924","url":null,"abstract":"Ciphertext-policy Attributes-based Encryption (CP-ABE) is an encouraging cryptographic mechanism. It behaves an access control mechanism for data security. A ciphertext and secret key of user are dependent upon attributes. As a nature of CP-ABE, the data owner defines access policy before encrypting plaintext by his right. Therefore, CP-ABE is suitable in a real environment. In CP-ABE, the revocation issue is demanding since each attribute is shared by many users. A policy-based revolutionary CP-ABE scheme is proposed in this paper. In the proposed scheme, revocation takes place in policy level because a policy consists of threshold attributes and each policy is identified as a unique identity number. Policy revocation means that the data owner updates his policy identity number for ciphertext whenever any attribute is changed in his policy. To be a flexible updating policy control, four types of updating policy levels are identified for the data owner. Authorized user gets a secret key from a trusted authority (TA). TA updates the secret key according to the policy updating level done by the data owner. This paper tests personal health records (PHRs) and analyzes execution times among conventional CP-ABE, other enhanced CP-ABE and the proposed scheme.","PeriodicalId":388642,"journal":{"name":"2019 International Conference on Advanced Information Technologies (ICAIT)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123162736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extraction of Buildings in Remote Sensing Imagery with Deep Belief Network","authors":"Su Wai Tun, Khin Mo Mo Tun","doi":"10.1109/AITC.2019.8921039","DOIUrl":"https://doi.org/10.1109/AITC.2019.8921039","url":null,"abstract":"In land use analysis, the extraction of buildings from remote sensing imagery is an important problem. This work is difficult to obtain the spectral features from buildings due to high intra-class and low inter-class variation of buildings. In the paper, a patch-based deep belief network (PBDBN) architecture is used for the extraction of buildings from remote sensing datasets. And low-level building features (e.g compacted contours) of adjacent regions are combined with Deep Belief Network (DBN) features during the post-processing stage for obtaining better performance. The experimental results are demonstrated on Massachusetts buildings dataset to express the performance of PBDBN and it is compared with other method on the same dataset.","PeriodicalId":388642,"journal":{"name":"2019 International Conference on Advanced Information Technologies (ICAIT)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123502550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stock Trend Extraction using Rule-based and Syntactic Feature-based Relationships between Named Entities","authors":"Ei Thwe Khaing, M. Thein, M. Lwin","doi":"10.1109/AITC.2019.8920986","DOIUrl":"https://doi.org/10.1109/AITC.2019.8920986","url":null,"abstract":"Many research topics still debate to predict the trends of a stock in the financial markets. Trend extraction is an important part of the information retrieved from the financial data sources, such as news articles or web pages. For trend extraction on text document, named entities are identified and relations between them are extracted. These trends are extracted from finding relationships between named entities related words for stock data. The relationships of entities in stock news articles have unstructured, time dependency, different word range and length without syntactic structure. Many previous researchers didn’t propose the trend extraction based on named entities and their relationships. This paper proposes rule-based and syntactic feature-based relation extraction method between named entities for the trend extraction. This proposed system extracts trends by finding the relationships between named entities for stock data. The experimental results extract trends from news using relationships within stock related named entities.","PeriodicalId":388642,"journal":{"name":"2019 International Conference on Advanced Information Technologies (ICAIT)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134450489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Text Based Shuffling Algorithm in Digital Watermarking","authors":"Zayar Phyo, Ei Chaw Htoon","doi":"10.1109/AITC.2019.8921222","DOIUrl":"https://doi.org/10.1109/AITC.2019.8921222","url":null,"abstract":"With the increasing image file size from megabyte to gigabytes, sequential Fisher-Yates shuffling algorithm step became slower and needed huge amount of memories to allocate. When allocating large megapixels index to memory array at the run time of standard personal computer, it is corrupted and existed unexpectedly because of the memory overflow problem. So, Fisher-Yates shuffling algorithm is not suitable for loading massive pixels at small memory buffer size computer in new LSB-based Steganography method. In this paper, the Text based shuffling algorithm is proposed using only necessary pixels of image according to the length of input text to solve that problem. This significantly provides the output result with faster performance in run time even running the encryption part in sequential. Moreover, the proposed algorithm does not infect the visual quality of an image even replacing the pixel selection step of a New LSB-based framework.","PeriodicalId":388642,"journal":{"name":"2019 International Conference on Advanced Information Technologies (ICAIT)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117255057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Web Content Outlier Mining using Machine Learning and Mathematical Approaches","authors":"Thinzar Tun, Khin Mo Mo Tun","doi":"10.1109/AITC.2019.8921085","DOIUrl":"https://doi.org/10.1109/AITC.2019.8921085","url":null,"abstract":"Due to the massive, dynamic and heterogeneous nature of the web, discovering outliers from the web is demanding than from the numeric dataset. On exploring for information in the web, the inappropriate irrelevant and redundant information may be retrieved to the user. So, it is a big challenge to get and access high quality information on the web effectively and efficiently without including irrelevant and redundant information. Mining web content outliers focus on mining inappropriate duplicate and irrelevant web pages from the other web pages under the same categories. Removing outliers from the web improves the accuracy of search results, decreases the complexity of time for indexing and complexity of time and saves the user time and effort. We applied the Latent Dirichlet Allocation method from the machine learning approaches and a mathematical approach named linear correlation method to move web content outliers. This system tends to improve F1-measure, accuracy results and reduce time complexity.","PeriodicalId":388642,"journal":{"name":"2019 International Conference on Advanced Information Technologies (ICAIT)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124971474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of Historical Census using Graph-based Household Matching Method","authors":"Khin Su Mon Myint, W. Naing","doi":"10.1109/AITC.2019.8920872","DOIUrl":"https://doi.org/10.1109/AITC.2019.8920872","url":null,"abstract":"Population censuses are the most useful information for developing a country. It provides a valuable description of the state of a nation. These data can be applied for the country development planning or construction process. Linking records using population census data is the linking process of the same households from several censuses across the time. The challenges of linking census comprise un-reliable data quality, lots of common names. In ten years, a household may break into several households because of marriage or dead or movement in other households. A graph-based architecture using the unique ID to match households is presented in this paper. By using the graph-based household matching method, it can achieve single and multiple household matching and can also trace family household changes between two decades. The proposed system used the unique inhabitant ID and head ID to obtain accurate results and higher similarity. As a result, the proposed method obtains 61% of Accuracy which outperforms all other compared similarity methods.","PeriodicalId":388642,"journal":{"name":"2019 International Conference on Advanced Information Technologies (ICAIT)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114400721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}