{"title":"Graph-Based Rumor Detection on Social Media Using Posts and Reactions","authors":"Nareshkumar R, N. K, Sujatha R, Shakila Banu S, Sasikumar P, Balamurugan P","doi":"10.12785/ijcds/160114","DOIUrl":"https://doi.org/10.12785/ijcds/160114","url":null,"abstract":"In this article, we deliver a novel method that makes use of graph-based contextual and semantic learning to detect rumors. Social media platforms are interconnected, so when an event occurs, similar news and user reactions reflecting common interests are disseminated throughout the network. The presented research introduces an innovative graph-based method for identifying rumors on social media by analyzing both posts and reactions. Identifying and dealing with online rumors is an important and growing challenge. We use real-world social media data to create a solution based on data analysis. The process involves creating graphs, identifying bridge words, and selecting features. The proposed method outperforms the baselines, indicating its effectiveness in addressing this significant issue. The method uses tweets and users’ replies to them in order to capture the underlying interaction patterns and exploit both the textual and the hidden information. The primary emphasis of this effort is developing a reliable graph-based analyzer that can identify rumors spread on social media. Modeling the textual data as a word co-occurrence graph produces two prominent groups of significant words together with bridge connection words. Using these words as building blocks, contextual patterns for rumor detection can be constructed and detected using node-level statistical measures. The identification of negative emotions and inquisitive components in the replies further enriches the contextual patterns. The proposed technique is evaluated on the publicly available PHEME dataset and compared with a variety of baselines as well as our suggested approaches. The experimental results are encouraging, and the suggested strategy appears to be helpful for rumor identification on online social media platforms.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"148 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141711572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
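The word co-occurrence graph and bridge-word idea in the abstract above can be sketched roughly as follows. This is a minimal illustration with plain dictionaries; the tokenization, window size, and degree-based ranking are assumptions, not the authors' exact formulation.

```python
# Build a word co-occurrence graph from short texts and rank candidate
# bridge words by weighted degree (illustrative sketch only).
from collections import defaultdict

def cooccurrence_graph(docs, window=2):
    """Adjacency map: graph[u][v] = number of times u and v co-occur."""
    graph = defaultdict(lambda: defaultdict(int))
    for doc in docs:
        tokens = doc.lower().split()
        for i, u in enumerate(tokens):
            for v in tokens[i + 1 : i + 1 + window]:
                if v != u:
                    graph[u][v] += 1
                    graph[v][u] += 1
    return graph

def bridge_words(graph, top_k=3):
    """Rank words by weighted degree; well-connected words that link the
    significant-word groups are candidate bridge words."""
    degree = {u: sum(nbrs.values()) for u, nbrs in graph.items()}
    return sorted(degree, key=degree.get, reverse=True)[:top_k]

docs = ["breaking news about the explosion",
        "news about the rumor spreading"]
g = cooccurrence_graph(docs)
```

In the actual method, node-level statistical measures over such a graph (rather than raw degree alone) would drive the contextual patterns.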
{"title":"An FPGA Implementation of Basic Video Processing and Timing Analysis for Real-Time Application","authors":"Marwan Abdulkhaleq Al-yoonus, Saad Ahmed Al-kazzaz","doi":"10.12785/ijcds/160131","DOIUrl":"https://doi.org/10.12785/ijcds/160131","url":null,"abstract":"","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"5 S1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141711029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing Deep Learning Architecture for Scalable Abstractive Summarization of Extensive Text Corpus","authors":"Krishna Dheeravath, S. Jessica Saritha","doi":"10.12785/ijcds/160126","DOIUrl":"https://doi.org/10.12785/ijcds/160126","url":null,"abstract":"","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"510 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141707882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Approach for Aircraft Detection using VGG19 and OCSVM","authors":"Marwa A. Hameed, Zainab A. Khalaf","doi":"10.12785/ijcds/160109","DOIUrl":"https://doi.org/10.12785/ijcds/160109","url":null,"abstract":"Aircraft detection is an essential and noteworthy area of object detection that has received significant interest from scholars, especially with the progress of deep learning techniques. Aircraft detection is now extensively employed in various civil and military domains. On the civil side, automated aircraft detection systems play a crucial role in preventing crashes, controlling airspace, and improving aviation traffic and safety. In military operations, detection systems are crucial for quickly locating aircraft for surveillance purposes, enabling decisive military strategies in real time. This article proposes a system that accurately detects airplanes regardless of variations in their type, model, size, and color. The diversity of aircraft images, including variations in size, illumination, resolution, and other visual factors, poses challenges to detection performance; an aircraft detection system must therefore distinguish airplanes clearly irrespective of the aircraft’s position, rotation, or visibility. The methodology involves three major steps: feature extraction, detection, and evaluation. First, deep features are extracted using a pre-trained VGG19 model and the transfer-learning principle. The extracted feature vectors are then fed to a One-Class Support Vector Machine (OCSVM) for detection. Finally, the results are assessed using evaluation criteria to verify the effectiveness and accuracy of the proposed system. The experimental evaluations were conducted across three distinct datasets: Caltech-101, a Military dataset, and the MTARSI dataset. Furthermore, the study compares its experimental results with those of comparable publications released in the past three years. The findings illustrate the efficacy of the proposed approach, achieving F1-scores of 96% on the Caltech-101 dataset and 99% on both the Military and MTARSI datasets.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"20 79","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141696453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
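The detection stage of the abstract above pairs deep features with a one-class classifier. A hedged sketch of that stage: the VGG19 deep features of the paper are replaced here by random 64-dimensional stand-ins, and the `nu`/`gamma` settings are illustrative assumptions, not the authors' configuration.

```python
# One-Class SVM fitted only to the target ("aircraft") class; anything far
# from that training distribution is rejected as non-aircraft.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
aircraft_feats = rng.normal(loc=0.0, scale=1.0, size=(200, 64))  # stand-in embeddings
other_feats = rng.normal(loc=6.0, scale=1.0, size=(20, 64))      # off-class stand-ins

# nu upper-bounds the fraction of training points treated as outliers.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(aircraft_feats)

pred_in = ocsvm.predict(aircraft_feats)   # +1 = detected as aircraft
pred_out = ocsvm.predict(other_feats)     # -1 = rejected
```

A real pipeline would obtain the feature vectors from a pre-trained VGG19 (e.g., its penultimate layer) before this step.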
{"title":"An Instance Segmentation Method for Nesting Green Sea Turtle’s Carapace using Mask R-CNN","authors":"Mohamad Syahiran Soria, Khalif Amir Zakry, I. Hipiny, Hamimah Ujir, Ruhana Hassan, Alphonsus Ligori Jerry","doi":"10.12785/ijcds/160116","DOIUrl":"https://doi.org/10.12785/ijcds/160116","url":null,"abstract":"This research presents an improved instance segmentation method using Mask Region-based Convolutional Neural Network (Mask R-CNN) on images of nesting green sea turtles. The goal is to achieve precise segmentation to produce a dataset fit for future re-identification tasks. Using this method, we can skip the labour-intensive and tedious task of manual segmentation by automatically extracting the carapace as the Region-of-Interest (RoI). The task is non-trivial, as the image dataset contains noise, blurry edges, and low contrast between the target object and the background. These image defects are due to several factors, including jittering footage caused by camera motion, the nesting event occurring in a low-light environment, and the inherent limitations of the Complementary Metal-Oxide-Semiconductor (CMOS) sensor used in the camera during our data collection. The CMOS sensor produces a high level of noise, which can manifest as random variations in pixel brightness or colour, especially in low-light conditions. These factors degrade image quality, causing difficulties when performing RoI segmentation of the carapaces. To address these challenges, this research proposes Contrast-Limited Adaptive Histogram Equalization (CLAHE) as the data pre-processing step for training the model. CLAHE enhances contrast and increases differentiation between the carapace structure and the background elements. Our findings demonstrate the effectiveness of Mask R-CNN when combined with CLAHE as the data pre-processing step. With the CLAHE technique, there is an average increase of 1.55% in Intersection over Union (IoU) compared to using Mask R-CNN alone. The optimal configuration achieved an IoU of 93.35%.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"175 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141694889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
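CLAHE itself operates per tile with a clip limit (e.g., OpenCV's `cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))`). As a deliberately simplified, hedged stand-in that shows why equalization raises contrast on low-contrast frames like those described above, here is plain global histogram equalization in NumPy; it is not the paper's exact pre-processing:

```python
# Global histogram equalization of an 8-bit grayscale image: stretch the
# occupied intensity range onto the full 0..255 range via the CDF.
import numpy as np

def hist_equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # first occupied intensity
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

rng = np.random.default_rng(1)
# Synthetic low-contrast frame: intensities clustered around 120.
low_contrast = np.clip(rng.normal(120, 5, (32, 32)), 0, 255).astype(np.uint8)
enhanced = hist_equalize(low_contrast)
```

CLAHE's tile-wise variant limits how much any single histogram bin can contribute, which avoids the noise over-amplification a global stretch can cause in low-light footage.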
{"title":"Two-Stage Gene Selection Technique For Identifying Significant Prognosis Biomarkers In Breast Cancer","authors":"Monika Lamba, Geetika Munjal, Yogita Gigras","doi":"10.12785/ijcds/160107","DOIUrl":"https://doi.org/10.12785/ijcds/160107","url":null,"abstract":"One crucial stage in the data preparation procedure for breast cancer classification is extracting a selection of meaningful genes from microarray gene expression data. This stage matters because it discovers genes whose expression patterns can differentiate between different types or stages of breast cancer. Two highly effective algorithms, CONSISTENCY-BFS and CFS-BFS, have been developed for gene selection. These algorithms identify the genes most crucial in distinguishing between different types and stages of breast cancer by analysing large volumes of genetic data. A noteworthy advancement is a refined 2-Stage Gene Selection technique specifically designed for predicting subtypes in breast cancer. The initial phase of the 2-Stage Gene Selection (GeS) approach relies on the CFS-BFS algorithm, which effectively eliminates unnecessary, distracting, and redundant genes. This initial filtering simplifies the dataset and identifies the genes with the highest potential to shed light on the category of breast cancer. The CONSISTENCY-BFS algorithm then guarantees that only the most pertinent genes are retained by further refining the gene selection. This stage is essential for eliminating any remaining uncertainty and enhancing the overall efficiency of the algorithm. This approach represents a significant advancement in the field of bioinformatics, as it offers a more accurate and targeted method for selecting genes based on their relevance to breast cancer classification. When the 2-Stage GeS is combined with Hidden Weight Naive Bayes, it yields notably more precise and dependable outcomes, as reflected in recall, precision, F-score, and fallout. The Kaplan-Meier Survival Model was employed to further validate the top four genes, namely E2F3, PSMC3IP, GINS1, and PLAGL2. Presumably, precision therapy will specifically focus on targeting the genes E2F3 and GINS1.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"295 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141692000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
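The two-stage filtering idea in the abstract above can be approximated as follows. The real CFS-BFS and CONSISTENCY-BFS algorithms are search-based; this sketch substitutes a simple relevance filter (stage 1) and a redundancy filter (stage 2), and both correlation thresholds are illustrative assumptions.

```python
# Two-stage gene filter: stage 1 keeps genes correlated with the class label,
# stage 2 drops genes nearly collinear with an already-selected gene.
import numpy as np

def two_stage_gene_selection(X, y, relevance_thresh=0.3, redundancy_thresh=0.9):
    n_genes = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_genes)])
    # Stage 1: relevance filter, most relevant genes first.
    stage1 = [j for j in np.argsort(-relevance) if relevance[j] >= relevance_thresh]
    # Stage 2: redundancy filter against genes already kept.
    kept = []
    for j in stage1:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < redundancy_thresh for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 200).astype(float)
g0 = y + 0.1 * rng.normal(size=200)      # informative gene
g1 = g0 + 0.01 * rng.normal(size=200)    # redundant near-copy of g0
g2 = rng.normal(size=200)                # uninformative noise gene
X = np.column_stack([g0, g1, g2])
selected = two_stage_gene_selection(X, y)
```

On this toy data the filter keeps exactly one of the two correlated informative genes and discards the noise gene, mirroring the "relevant but non-redundant" goal of the paper's pipeline.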
{"title":"Improvement in Depth–of–Return-Loss and Augmentation of Gain-bandwidth with Defected Ground Structure For Low Cost Single Element mm–Wave Antenna","authors":"Simerpreet Singh, Gaurav Sethi, Jaspal Singh Khinda","doi":"10.12785/ijcds/160108","DOIUrl":"https://doi.org/10.12785/ijcds/160108","url":null,"abstract":"","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"77 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141701371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Optic Disc Detection and Segmentation in Retinal Fundus Images Utilizing You Only Look Once (YOLO) Method","authors":"Zahraa Jabbar Hussein, Enas Hamood Al-Saadi","doi":"10.12785/ijcds/160139","DOIUrl":"https://doi.org/10.12785/ijcds/160139","url":null,"abstract":"","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"19 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141710869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Landscape of Health Information Systems in the Philippines: A Methodical Analysis of Features and Challenges","authors":"Mia Amor C. Tinam-isan, January F. Naga","doi":"10.12785/ijcds/160118","DOIUrl":"https://doi.org/10.12785/ijcds/160118","url":null,"abstract":"A thorough analysis was conducted to evaluate Health Information Systems (HIS) in the Philippines utilizing the PRISMA approach. From an initial pool of 313 potential articles, 285 were excluded based on the exclusion criteria, resulting in a focused analysis of 28 articles. This analysis classifies the many HIS features while highlighting each one’s distinct value inside the Philippine healthcare system. These features encompass scheduling and communications, record-keeping and prescription, knowledge and information management, and marketplace and payment systems. Features common to most HIS include patient profiling, notification systems, membership verification, laboratory result generation, and electronic appointment scheduling. In parallel, the study examined the many difficulties encountered in the adoption and application of HIS in the Philippines, tackling issues such as a lack of human resources, infrastructure-related challenges, and the impact of regional strategies and policies. Financial issues were also found to be a major challenge hampering the successful development and maintenance of HIS within the hospital system. This methodical, Philippine-specific investigation provides insights into the dynamic HIS environment, offering a basis for informed decision-making and strategic planning adapted to the distinct healthcare context of the Philippines.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"80 14","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141714993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deduplication using Modified Dynamic File Chunking for Big Data Mining","authors":"Saja Taha Ahmed","doi":"10.12785/ijcds/160105","DOIUrl":"https://doi.org/10.12785/ijcds/160105","url":null,"abstract":"The unpredictability of data growth necessitates data management that makes optimal use of storage capacity. An innovative strategy for data deduplication is suggested in this study. In fixed-size DeDuplication, the file is split into blocks of a predefined size. The primary problem with this strategy is that if additional content is inserted at the beginning or middle of a file, all subsequent blocks shift from their original positions; the resulting chunks then have new hash values, lowering the DeDuplication ratio. To overcome this drawback, this study proposes using multiple characters as content-defined chunking breakpoints, which depend chiefly on the file's internal representation and yield variable chunk sizes. The experimental results show a significant improvement in the redundancy-removal ratio on the Linux dataset. A comparison between the proposed fixed and dynamic deduplication shows that dynamic chunking has a smaller average chunk size and achieves a much higher deduplication ratio.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"91 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141699338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
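The breakpoint-character idea in the abstract above can be illustrated with a short sketch. The breakpoint set, minimum/maximum chunk sizes, and SHA-256 fingerprints are assumptions for the illustration, not the paper's parameters.

```python
# Content-defined chunking: cut after a breakpoint byte once the chunk
# reaches min_size, forcing a cut at max_size. Because cut points depend on
# content, an insertion near the start of a file disturbs only nearby chunks
# instead of shifting every boundary as fixed-size chunking does.
import hashlib

def chunk_by_breakpoints(data, breakpoints=b"\n ", min_size=16, max_size=256):
    chunks, start = [], 0
    for i, b in enumerate(data):
        size = i - start + 1
        if (b in breakpoints and size >= min_size) or size >= max_size:
            chunks.append(data[start : i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def dedup_ratio(chunks):
    """Fraction of bytes removed by storing one copy per unique chunk hash."""
    seen, dup_bytes, total = set(), 0, 0
    for c in chunks:
        total += len(c)
        h = hashlib.sha256(c).digest()
        if h in seen:
            dup_bytes += len(c)
        else:
            seen.add(h)
    return dup_bytes / total if total else 0.0

data = b"the quick brown fox jumps over the lazy dog\n" * 10
chunks = chunk_by_breakpoints(data)
```

On this repetitive input, prepending a single byte changes only the first chunk: the next cut lands back on a breakpoint at the same content position, so every later chunk (and its hash) is preserved.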