DISINFORMATION DETECTION ABOUT ISLAMIC ISSUES ON SOCIAL MEDIA USING DEEP LEARNING TECHNIQUES
Suhaib Kh. Hamed, Mohd Juzaiddin Ab Aziz, Mohd Ridzwan Yaakub
Malaysian Journal of Computer Science, 2023-07-31. DOI: 10.22452/mjcs.vol36no3.3 (https://doi.org/10.22452/mjcs.vol36no3.3)

Abstract: Nowadays, many people receive news and information about what is happening around them from social media networks. These platforms are available free of charge and allow anyone to post news or information, or to express an opinion, without any restriction or verification, thus contributing to the dissemination of disinformation. Recently, disinformation about Islam has spread through pages and groups on social media dedicated to attacking the Islamic religion. Many studies have provided models for detecting fake news or misleading information in domains such as politics, society, economics, and medicine, but not in the Islamic domain. Because of the negative impact of disinformation targeting the Islamic religion, Islamophobia is increasing, which threatens societal peace. In this paper, we present a Bidirectional Long Short-Term Memory (BiLSTM)-based model trained on an Islamic dataset (RIDI) that was collected and labeled by two separate specialized groups. In addition, using a pre-trained word-embedding model generates Out-Of-Vocabulary words, because the task deals with a specific domain. To address this issue, we retrained the pre-trained GloVe model on Islamic documents using the Mittens method. The experimental results show that our proposed BiLSTM model with the GloVe embeddings retrained on Islamic articles handles text sequences better than unidirectional models and achieves a detection performance of 95.42% on the Area Under the ROC Curve measure, outperforming the other models.
METHODICAL EVALUATION OF HEALTHCARE INTELLIGENCE FOR HUMAN LIFE DISEASE DETECTION
Norjihan Abdul Ghani, Uzair Iqbal, Suraya Hamid, Zulkarnain Jaafar, F. Yusop, Muneer Ahmad
Malaysian Journal of Computer Science, 2023-07-31. DOI: 10.22452/mjcs.vol36no3.1 (https://doi.org/10.22452/mjcs.vol36no3.1)

Abstract: Event intelligence for early disease detection is in high demand in the current era and requires reliable, technology-oriented applications. Trusted emerging technologies play a vital role in modern healthcare systems for early diagnosis of different medical conditions because they help speed up the treatment process. Despite the enhancement of current healthcare systems, robust diagnosis of different types of diseases for intra-patients (outside hospital settings) is still considered a difficult task. However, the continuous evolution of trusted technologies in the health sector points to a reboot that could upgrade healthcare service provision into trusted next-generation health units. To assist healthcare providers in carrying out early disease detection for intra-patient clients, we designed this systematic review. We extracted 40 studies published between March 2016 and February 2021 from the IEEE Xplore, Springer, ScienceDirect, and Scopus databases, and formulated our research questions based on these studies. Subsequently, we filtered these studies using two filtration schemes, namely an inclusion-omission policy and a quality assessment, and obtained 19 studies that successfully mapped to our defined research questions. We found that these 19 studies clearly highlight different trusted architectures of the Internet of Things, mobile cloud computing, and machine learning that are significantly beneficial for diagnosing medical conditions in intra-patient clients, such as neurological diseases, cardiac malfunctions, and other common diseases.
{"title":"IMPROVING COVERAGE AND NOVELTY OF ABSTRACTIVE TEXT SUMMARIZATION USING TRANSFER LEARNING AND DIVIDE AND CONQUER APPROACHES","authors":"Ayham Alomari, N. Idris, Aznul Qalid, I. Alsmadi","doi":"10.22452/mjcs.vol36no3.4","DOIUrl":"https://doi.org/10.22452/mjcs.vol36no3.4","url":null,"abstract":"Automatic Text Summarization (ATS) models yield outcomes with insufficient coverage of crucial details and poor degrees of novelty. The first issue resulted from the lengthy input, while the second problem resulted from the characteristics of the training dataset itself. This research employs the divide-and-conquer approach to address the first issue by breaking the lengthy input into smaller pieces to be summarized, followed by the conquest of the results in order to cover more significant details. For the second challenge, these chunks are summarized by models trained on datasets with higher novelty levels in order to produce more human-like and concise summaries with more novel words that do not appear in the input article. The results demonstrate an improvement in both coverage and novelty levels. Moreover, we defined a new metric to measure the novelty of the summary. Finally, we investigated the findings to discover whether the novelty is influenced more by the dataset itself, as in CNN/DM, or by the training model and its training objective, as in Pegasus.","PeriodicalId":49894,"journal":{"name":"Malaysian Journal of Computer Science","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49657855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ENHANCING SECURITY OF RFID-ENABLED IOT SUPPLY CHAIN","authors":"H. Turksonmez, M. H. Ozcanhan","doi":"10.22452/mjcs.vol36no3.5","DOIUrl":"https://doi.org/10.22452/mjcs.vol36no3.5","url":null,"abstract":"In addition to its benefits, the popular Internet of Things (IoT) technology has also opened the way to novel security and privacy issues. The basis of IoT security and privacy starts with trust in the IoT hardware and its supply chain. Counterfeiting, cloning, tampering of hardware, theft, and lost issues in the IoT supply chain have to be addressed, in order to ensure reliable IoT industry growth. In four previous works, radio-frequency identification (RFID)-enabled solutions have been proposed by the same authors, aimed to bring security to the entire IoT supply chain. The works propose a new RFID-traceable hardware architecture, device authentication, and supply chain tracing procedure. In each of these works, a variant of the same is proposed. However, the same variant of lightweight RFID authentication protocol coupled with the offline supply chain proposed in these works has such security vulnerabilities that make the whole supply chain unsafe. In our present work, an online supply chain hop-tracking procedure supported by a novel RFID mutual authentication protocol, based on the strong matching of the RFID readers-their operators-central database present at the transfer hops is proposed. Our proposed Strong RFID Authentication Protocol (STRAP) has been verified by two well-accepted formal protocol analyzers Scyther and AVISPA. The verification results demonstrate that STRAP overcomes the previous works’ vulnerabilities. Furthermore, our proposed novel online supply chain tracing procedure supporting STRAP removes the previous offline supply chain tracing procedure weaknesses.","PeriodicalId":49894,"journal":{"name":"Malaysian Journal of Computer Science","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47383187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A TRACE CLUSTERING FRAMEWORK FOR IMPROVING THE BEHAVIORAL AND STRUCTURAL QUALITY OF PROCESS MODELS IN PROCESS MINING
Mohammad Imran, Maizatul Akmar Ismail, Suraya Hamid, Mohammad Hairul Nizam Md Nasir
Malaysian Journal of Computer Science, 2023-07-31. DOI: 10.22452/mjcs.vol36no3.2 (https://doi.org/10.22452/mjcs.vol36no3.2)

Abstract: Process mining (PM) techniques are increasingly used to enhance operational procedures. However, applying PM to unstructured processes can result in complex process models that are difficult to interpret. Trace clustering is the most prevalent method for handling this complexity, but it has limitations in dealing with event logs that contain many activities with varied behaviours. In such cases, trace clustering can produce inaccurate process models and is expensive in terms of time performance. It is therefore crucial to develop a trace clustering solution that is optimal in terms of the behavioural and structural quality of process models while being efficient in terms of time performance. In this study, we introduce a refined trace clustering framework that integrates log abstraction and decomposition techniques; it improves the precision of process models by 38%, leading to a 40% increase in the f-score. The proposed framework also produces process models that are 38% simpler than those produced by baseline approaches. More importantly, our framework achieves a remarkable 89% improvement in time performance, making it a valuable contribution to the field of process mining. Future work includes exploring the scalability of the proposed framework against a wider range of complex event logs and testing the framework to validate its effectiveness in practical applications.
XML CLUSTERING FRAMEWORK BASED ON DOCUMENT CONTENT AND STRUCTURE IN A HETEROGENEOUS DIGITAL LIBRARY
Nafisse Samadi, Sri Devi Ravana (Corresponding Author)
Malaysian Journal of Computer Science, 2023-04-30. DOI: 10.22452/mjcs.vol36no2.2 (https://doi.org/10.22452/mjcs.vol36no2.2)

Abstract: As the amount of textually published information in digital libraries increases, efficient retrieval methods are required. Textual documents in a digital library come in various structures and with varied content. When they are organized in an XML structure, these documents can be represented at hierarchical levels of granularity, improving precision through focused retrieval. By this means, contextual elements of each document can be retrieved from a known structure. One solution for retrieving these elements is clustering based on a combination of content and structural similarities. To achieve this, a novel two-level clustering framework based on Content and Structure is proposed. The framework decomposes a document into meaningful structural units and analyzes all of its rich text within its own structure. The quality of the proposed framework was evaluated on a heterogeneous XML document collection with a variety of data sources, structures, and content, representing a sample of a real digital library. The collection was built so that all of our objectives could be tested. The clustering results were evaluated using the Entropy criterion. Finally, the Content and Structure clustering was compared with the usual clustering based on Content Only to demonstrate the efficacy of considering structural features over the existing Content Only methods in the retrieval process. The total Entropy results of the two-level Content and Structure clustering are almost twice as good as those of the Content Only clustering approach. Consequently, the proposed framework can improve Information Retrieval systems from two points of view: i) considering the structural aspect of text-rich documents in the retrieval process, and ii) replacing document-level retrieval with element-level retrieval.
SYSTEMATIC SELECTION OF BLOCKCHAIN PLATFORMS USING FUZZY AHP-TOPSIS
Yin Kia Chiam (Corresponding Author), Shahr Banoo Muradi
Malaysian Journal of Computer Science, 2023-04-30. DOI: 10.22452/mjcs.vol36no2.1 (https://doi.org/10.22452/mjcs.vol36no2.1)

Abstract: Various businesses and industries such as financial, medical care management, supply chain management, data management, Internet of Things (IoT) and government supremacy, have been using blockchain technology to develop systems. During the selection of blockchain platforms, many criteria need to be taken into account depending on the organization, project and use case requirements. This study proposes a systematic selection method based on the Fuzzy AHP-TOPSIS approach which compares and selects alternative blockchain platforms against a set of selection criteria that cover both features and non-functional properties. A case study was conducted to evaluate the applicability of the proposed selection method. The proposed selection method which consists of three main stages was applied for the comparison and selection of the most appropriate blockchain platform for two projects. In the case study, three blockchain platforms were selected and ranked for each project based on selection criteria derived from the project requirements. Both project representatives showed strong agreement with the applicability aspects of the proposed selection method. It is concluded that the proposed selection criteria and selection method can be applied practically to support the decision-makers in blockchain platform selection for real-world projects.
RECOMMENDING JAVA API METHODS BASED ON PROGRAMMING TASK DESCRIPTIONS BY NOVICE PROGRAMMERS
Chun Jiann Lim, Moon Ting Su (Corresponding Author)
Malaysian Journal of Computer Science, 2023-04-30. DOI: 10.22452/mjcs.vol36no2.3 (https://doi.org/10.22452/mjcs.vol36no2.3)

Abstract: The overwhelming number of Application Programming Interfaces (APIs) and the lexical gap between novices' programming task descriptions in their search queries and API documentations deter novice programmers from finding suitable API methods to be used in their code. To address the lexical gap, this study investigated novice programmers' descriptions of their programming tasks and used the found insights in a novel approach (APIFind) for recommending relevant API methods for the programming tasks. Queries written by novice programmers were collected and analysed using term frequency and constituency parsing. Four common patterns related to the return type of an API method and/or API class that provides an implementation for the API method were found and captured in the Novice Programming Task Description Model (NPTDM). APIFind uses NPTDM that was operationalised in a rule-based module, a WordNet map of API word-synonyms, a programming task dataset comprising the collected queries, a Java API class and method repository, a Stack Overflow Q&A thread repository, and the BM25 model in Apache Lucene, to produce the top-5 API methods relevant to a search query. Benchmarking results using mean average precision @ 5 and mean reciprocal rank @ 5 as the evaluation metrics show that APIFind outperformed BIKER and CROKAGE when the novice queries test dataset was used. It performed slightly better than BIKER but slightly worse than CROKAGE when the reduced BIKER test dataset was used. In conclusion, common patterns exist in novice programmers' search queries and can be used in API recommendations for novice programmers.
A LOCALLY AND GLOBALLY TUNED METAHEURISTIC OPTIMIZATION FOR OVERLAPPING COMMUNITY DETECTION
C. Mallick, Parimal Kumar Giri (Corresponding Author), Sarojananda Mishra
Malaysian Journal of Computer Science, 2023-04-30. DOI: 10.22452/mjcs.vol36no2.4 (https://doi.org/10.22452/mjcs.vol36no2.4)

Abstract: Many people use online social networks to share their opinions and information in this digital age. The number of people engaged and their dynamic nature pose a major challenge for social network analysis (SNA). Community detection is one of the most critical and fascinating issues in social network analysis. Researchers frequently employ node features and topological structures to recognize important and meaningful behaviour and to locate non-overlapping communities. In this research, we introduce a locally and globally tuned multi-objective biogeography-based optimization (LGMBBO) technique for detecting overlapping communities based on the number of connections and node similarity. Four real-world online social network datasets were used in the experiments to assess the quality of both overlapping and non-overlapping partitions. The model generates a set of solutions with the best topological structure of a network together with node properties. The suggested model increases productivity and enhances the ability to identify significant and pertinent communities.
A FUSION OF HAND-CRAFTED FEATURES AND DEEP NEURAL NETWORK FOR INDOOR SCENE CLASSIFICATION
Basavaraj S. Anami, Chetan V. Sagarnal (Corresponding Author)
Malaysian Journal of Computer Science, 2023-04-30. DOI: 10.22452/mjcs.vol36no2.5 (https://doi.org/10.22452/mjcs.vol36no2.5)

Abstract: Convolutional neural networks (CNN) have proved to be the best choice for image classification tasks. However, hand-crafted features cannot be ignored, as they are basic to conventional image processing. Hand-crafted features provide a priori information that often complements CNNs in image classification, and hence an attempt is made to fuse the two. This paper presents a feature fusion approach that combines CNN and hand-crafted features. The proposed methodology uses two stages: the first stage comprises a feature encoder that encodes the non-normalized CNN features and utilizes edge, texture, and local features, and the fusion of the hand-crafted features with the CNN features is carried out in the second stage. The hand-crafted features are validated and shown to help the CNN perform better. Experimental results reveal that the proposed methodology improves over the original EfficientNet (E) on the MIT-67 dataset and achieves an average accuracy of 93.87%. The results are compared with state-of-the-art methods.