{"title":"Memory Usage Comparison in an Android Application: Basic Object-Oriented Programming vs Decorator Design Pattern: Coding styles for keeping low memory usage in a mobile application.","authors":"Wasana Ngaogate","doi":"10.1145/3548636.3548655","DOIUrl":"https://doi.org/10.1145/3548636.3548655","url":null,"abstract":"This paper explores how an Android application consumes memory when it is developed by two approaches: basic object-oriented programming versus decorator design pattern. Somehow programmers develop applications based on their coding experience without concern of memory usage especially in mobile applications which memory consumption is a critical issue. The deployed Android application of a list of temples which join a Candle Festival is selected as a case study. New functions are assumingly required by users so that the application could provide more services. Two class diagrams are designed based on basic object-oriented programming and based on decorator design pattern before implementation of the two approaches. The problem is what is the better approach for handling an increasing number of services after the application has been deployed. So that the application still consumes less memory as possible. The case-study applications retrieve information from a free REST API. Memory usage is measured by using an emulator and a physical device. The result shows that an Android application which developed by using decorator design pattern consumes more memory although its codes are cleaner and easier to read. Therefore, when a developer applies decorator design pattern, they should pay more attention on unexpected effect of the design pattern. 
Research in impact of design patterns should also be investigated more.","PeriodicalId":384376,"journal":{"name":"Proceedings of the 4th International Conference on Information Technology and Computer Communications","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121744720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two-path Android Malware Detection Based on N-gram Feature Weighting","authors":"Min Sun, Danni Zhang","doi":"10.1145/3548636.3548651","DOIUrl":"https://doi.org/10.1145/3548636.3548651","url":null,"abstract":"In recent years, with the full popularity of Android system and applications, the types and number of Android malicious applications also show explosive growth, and more efficient detection technology is urgently needed to identify malicious software. In view of the current research on N-gram features is relatively single, in order to make more comprehensive use of N-gram features and explore the potential relationship between features and attributes of applications, this paper proposes a two-path Android malware detection model based on N-gram feature weighting, and achieves N-gram feature extraction in two different ways by setting an application file threshold. Finally, Neural network is used to classify the fused features. Testing results of 1205 malicious samples and 1084 benign samples shows that the detection accuracy of the model was up to 99.2%. At the same time, this experiment further verify the effectiveness of relevant improvements, and the results show that compared with traditional machine learning algorithms, this model has higher adaptability and accuracy.","PeriodicalId":384376,"journal":{"name":"Proceedings of the 4th International Conference on Information Technology and Computer Communications","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127855157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid approach for text similarity detection in Vietnamese based on Sentence-BERT and WordNet","authors":"Son Cao, Huy V. Vo, Hang Le, D. Dinh","doi":"10.1145/3548636.3548645","DOIUrl":"https://doi.org/10.1145/3548636.3548645","url":null,"abstract":"In this paper, we explore the task of similarity detection, which determines whether two sentences have the same meaning. Although the task has shown to be important in many natural language processing applications, not much work has been done in Vietnamese. We present an approach based on Sentence-BERT (SBERT) model. Leveraging the pre-trained model and combining it with linguistic knowledge (WordNet), we then tested it on two popular Vietnamese datasets: vnPara and VNPC. Our best model achieves 97.62% F1 score on vnPara and 95.31% F1 score on VNPC.","PeriodicalId":384376,"journal":{"name":"Proceedings of the 4th International Conference on Information Technology and Computer Communications","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123554041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Attack Impact and Host Importance based Approach to Intrusion Response Action Selection","authors":"Tazar Hussain, Chris D. Nugent, Jun Liu, Alfie Beard, Liming Chen, A. Moore","doi":"10.1145/3548636.3548649","DOIUrl":"https://doi.org/10.1145/3548636.3548649","url":null,"abstract":"Selecting appropriate actions is crucial for building effective Intrusion Response Systems (IRS) that can counter intrusions according to their priority level. Currently, the priority level of intrusions is determined manually, in a static manner, which is time consuming, ineffective and cannot scale with the growing number of attacks. In this paper we present an effective event prioritization methodology by encoding domain knowledge, namely attack impact and host importance, into features in terms of the confidentiality, integrity and availability (CIA). The proposed approach is demonstrated using a testbed architecture where a total of six features are generated from the domain knowledge and are labeled with appropriate response options. One set of features encodes attack impact in terms of its potential damage and its ability to propagate and another set of features encodes host importance in terms of data sensitivity, service criticality, number of connections and vulnerabilities on the basis of the CIA factors. The case study results indicate that the generated features help security analysts to select appropriate response options according to the priority level of events. Additionally, as a result of the methodology a labelled Intrusion Response (IR) dataset is generated. 
In future work we aim to use machine learning to analyze this dataset to infer actions automatically.","PeriodicalId":384376,"journal":{"name":"Proceedings of the 4th International Conference on Information Technology and Computer Communications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127797708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart Conveyor Belt Sushi Bill Payment with a Mobile Shot","authors":"Rangrak Maitriboriruks, Patchariya Piya-Aromrat, Y. Limpiyakorn","doi":"10.1145/3548636.3548637","DOIUrl":"https://doi.org/10.1145/3548636.3548637","url":null,"abstract":"Organization must automate wherever and whenever they can, particularly during today's global changes in daily lifestyles. Trends regarding the use of technology, especially AI has emerged as a key enabler for disruptive innovation. This paper thus presents the application programming interface of object detector implemented with YOLOv4 and OpenCV for classifying the prices of sushi plates distinguished by colors. The object detector is part of the smart cross-platform mobile application to facilitate billing process for conveyor belt sushi business. The frontend is developed with Flutter to build single codebase for UIs. To handle the variants of image colors resulting from the use of different mobile cameras, color transfer is used for transferring the image dataset colors to images captured by users. Microservices architecture is adopted for the backend. Orchestration of YOLOv4, OpenCV and Spring Boot REST API will create APIs to calculate food cost, generate QR code for bill payment, and maintain customer membership benefits. The constructed object detection model achieved the precision of 97%, recall of 97%, F1-score of 97% and mAP of 97.3%. 
The smart billing system presented in this work would accelerate the workflow, increase productivity, reduce waste and drive moving for contactless society.","PeriodicalId":384376,"journal":{"name":"Proceedings of the 4th International Conference on Information Technology and Computer Communications","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130483140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining Latent Semantic Analysis and Pre-trained Model for Vietnamese Text Readability Assessment: Combining Statistical Semantic Embeddings and Pre-trained Model for Vietnamese Long-Sequence Readability Assessment","authors":"Nam T. Doan, Thi-Anh-Thi Le, An-Vinh Lương, Dinh Dien","doi":"10.1145/3548636.3548643","DOIUrl":"https://doi.org/10.1145/3548636.3548643","url":null,"abstract":"Together with the rapid development of text processing, readability assessment is an important and challenging task of measuring how easy or difficult it is to read a text. Despite the foundation and enhancement of this task in high-resource languages such as English where there are a ton of NLP tools and corpus, this task is not an advantage for low-resource languages, especially Vietnamese. Most previous studies for Vietnamese text readability assessment focus on shallow text characteristics, which have yet to address deeper readability features. In our study, we propose a novel finding in Vietnamese to create the construction of features reflecting in terms of semantics. In view of this, we notice that the difficulty level of terms affects the difficulty level of knowledge that strongly involves text comprehension. Particularly, our approach based on the difficulty distribution of terms in a text generated by Latent Semantic Analysis (LSA) technique decreases the dependence of experts in annotating and discovering the typical feature in a narrow domain. Our proposed feature is efficient to be considered as a new and automatic feature for Vietnamese text readability assessment. Furthermore, LSA is a statistical approach that is more stable and feasible for low-resource languages. In addition, we also integrate PhoBERT, a pre-trained language model for Vietnamese, to generate the bidirectional contextual representation of a word for Vietnamese long-sequence as a semantic feature. 
Through the experiments in Vietnamese readability dataset, our proposed approach achieves promising performance against the strong competitive baselines. The best performance with up to an accuracy of 94.52% and a weighted F1 score of 94.09%.","PeriodicalId":384376,"journal":{"name":"Proceedings of the 4th International Conference on Information Technology and Computer Communications","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126886989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stock price prediction under multi-frequency model - based on attention state-frequency memory network","authors":"Wei Zhou, Yuting Pan, Zhaoxia Wu","doi":"10.1145/3548636.3548654","DOIUrl":"https://doi.org/10.1145/3548636.3548654","url":null,"abstract":"Stock price forecasting is the most typical forecasting task in financial time series. However, stock prices are influenced by a variety of factors, with short-term and long-term trading activity being among the most important factors affecting price changes. The trading activity in these different frequency patterns is eventually reflected in the stock price data. Therefore, if the model identifies information about the potential multiple frequencies of the time series data in the process of prediction, it can enable the model to better learn the features of the time series data. In addition, stock price data are non-linear and non-stationary in nature, which makes it a challenging task to predict future trends. We used the attention state-frequency memory neural network (A-SFM) to accomplish the above objectives. The A-SFM realizes the decomposition of frequency by Fourier transform, and learns the state and frequency information by the powerful learning ability of neural network. In addition, we add an attention mechanism to the structure of the model to extract and learn the important parts of the price information. 
In this paper, we use the A-SFM model to capture the state and frequency information inherent in the series from past financial market data and use this information to make short- and long-term forecasts.","PeriodicalId":384376,"journal":{"name":"Proceedings of the 4th International Conference on Information Technology and Computer Communications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130113653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Simple and Efficient Object-Capability Revocation Method","authors":"Liang-Jui Shen, Yusong Tan, Pan Dong, Jun Ma, Xiaoling Li","doi":"10.1145/3548636.3548656","DOIUrl":"https://doi.org/10.1145/3548636.3548656","url":null,"abstract":"Revocation is important but difficult in object-capability based systems. As an essential mechanism of capability management, revocation provides the basis for canceling of granted access rights and reclaiming of shared resources. However, efficient revocation of capabilities is difficult to achieve due to the considerations for selective revocation and safety. In this paper, we propose an simple and efficient object-capability revocation method. We are inspired by the classic garbage collection algorithm Mark-and-Sweep algorithm, and decouple revocation process to two phases. In the first phase, capabilities are centrally set to be invalid or masked, by updating related fields in capability tap. In the second phase, this method trigger Rust's memory reclaim mechanism to sweep the disabled capabilities safely. Additionally, the sweep progress can be done as lazy sweeping to support fast revocation returning. Our revocation method can support selective revocation more efficiently than the existing methods, and the revocation process is concise and safe.","PeriodicalId":384376,"journal":{"name":"Proceedings of the 4th International Conference on Information Technology and Computer Communications","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115131289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-label Learning with User Credit Data in China Based on MLKNN","authors":"Zhu-Jin Zhang, Lu Han, Muzi Chen","doi":"10.1145/3548636.3548652","DOIUrl":"https://doi.org/10.1145/3548636.3548652","url":null,"abstract":"Aiming at the problem of numerous variables in credit data, a large amount of sample data, and inability to intuitively reflect user portraits. This paper uses the MLKNN algorithm to perform multi-label learning on the credit data. According to the results of the algorithm training under 24 sets of k values, the optimal number of neighbor samples of the algorithm on the sample set is 14. On this basis, this paper further analyzes The general portrait of credit users in my country has the following characteristics: more than 50% of credit users are users with stable personal development, low frequency of credit activities, and low-to-medium attention to credit status. Meanwhile, we find that the users with higher frequency of credit activities pay more attention to credit status. This research can provide some reference for commercial banks or other financial institutions in lending and credit management.","PeriodicalId":384376,"journal":{"name":"Proceedings of the 4th International Conference on Information Technology and Computer Communications","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121157642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quality Changes of Image from Total Variation to Nonlinear Sparsifying Transform for Sparse-view CT Reconstruction","authors":"Jian Dong, Siyuan Zhang, Lin He","doi":"10.1145/3548636.3548641","DOIUrl":"https://doi.org/10.1145/3548636.3548641","url":null,"abstract":"Sparse-view CT has been widely studied as an effective strategy for reducing radiation dose to patients. Total variation (TV) minimization, which is most extensively studied among the existing compressed sensing (CS) techniques, has been recognized as a powerful tool for dealing with the inverse problem of sparse-view image reconstruction. However, in recent years, the drawbacks of TV are being increasingly reported, such as appearance of patchy artifacts, depict of incorrect object boundaries, and loss in image textures. In order to address these drawbacks, a series of advanced algorithms using nonlinear sparsifying transform (NLST) have been proposed very recently. The NLST-based CS is based on a different framework from the TV, and it achieves an improvement in image quality. Since it is a relatively newly proposed idea, within the scope of our knowledge, there exist few literatures that discusses comprehensively how the image quality improvement occurs in comparison with the conventional TV method. In this study, we investigated the image quality differences between the conventional TV minimization and the NLST-based CS, as well as image quality differences among different kinds of NLST-based CS algorithms in the sparse-view CT image reconstruction. More specifically, image reconstructions of actual CT images of different body parts were carried out to demonstrate the image quality differences. 
Through comparative experiments, we conclude that the NLST-based CS method is superior to the TV method in the task of image reconstruction for sparse-view CT.","PeriodicalId":384376,"journal":{"name":"Proceedings of the 4th International Conference on Information Technology and Computer Communications","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122855055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}