{"title":"Brain tumor classification using ResNet50-convolutional block attention module","authors":"Oladosu Oyebisi Oladimeji, A. Ibitoye","doi":"10.1108/aci-09-2023-0022","DOIUrl":"https://doi.org/10.1108/aci-09-2023-0022","url":null,"abstract":"PurposeDiagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Over the traditional methods, deep learning approaches have gained popularity in automating the diagnosis of brain tumors, offering the potential for more accurate and efficient results. Notably, attention-based models have emerged as an advanced, dynamically refining and amplifying model feature to further elevate diagnostic capabilities. However, the specific impact of using channel, spatial or combined attention methods of the convolutional block attention module (CBAM) for brain tumor classification has not been fully investigated.Design/methodology/approachTo selectively emphasize relevant features while suppressing noise, ResNet50 coupled with the CBAM (ResNet50-CBAM) was used for the classification of brain tumors in this research.FindingsThe ResNet50-CBAM outperformed existing deep learning classification methods like convolutional neural network (CNN), ResNet-CBAM achieved a superior performance of 99.43%, 99.01%, 98.7% and 99.25% in accuracy, recall, precision and AUC, respectively, when compared to the existing classification methods using the same dataset.Practical implicationsSince ResNet-CBAM fusion can capture the spatial context while enhancing feature representation, it can be integrated into the brain classification software platforms for physicians toward enhanced clinical decision-making and improved brain tumor classification.Originality/valueThis research has not been published anywhere else.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":"39 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138949796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic measurement of cardiothoracic ratio in chest x-ray images with ProGAN-generated dataset","authors":"Worapan Kusakunniran, P. Saiviroonporn, T. Siriapisith, T. Tongdee, Amphai Uraiverotchanakorn, Suphawan Leesakul, Penpitcha Thongnarintr, Apichaya Kuama, Pakorn Yodprom","doi":"10.1108/aci-11-2022-0322","DOIUrl":"https://doi.org/10.1108/aci-11-2022-0322","url":null,"abstract":"PurposeThe cardiomegaly can be determined by the cardiothoracic ratio (CTR) which can be measured in a chest x-ray image. It is calculated based on a relationship between a size of heart and a transverse dimension of chest. The cardiomegaly is identified when the ratio is larger than a cut-off threshold. This paper aims to propose a solution to calculate the ratio for classifying the cardiomegaly in chest x-ray images.Design/methodology/approachThe proposed method begins with constructing lung and heart segmentation models based on U-Net architecture using the publicly available datasets with the groundtruth of heart and lung masks. The ratio is then calculated using the sizes of segmented lung and heart areas. In addition, Progressive Growing of GANs (PGAN) is adopted here for constructing the new dataset containing chest x-ray images of three classes including male normal, female normal and cardiomegaly classes. This dataset is then used for evaluating the proposed solution. Also, the proposed solution is used to evaluate the quality of chest x-ray images generated from PGAN.FindingsIn the experiments, the trained models are applied to segment regions of heart and lung in chest x-ray images on the self-collected dataset. The calculated CTR values are compared with the values that are manually measured by human experts. The average error is 3.08%. Then, the models are also applied to segment regions of heart and lung for the CTR calculation, on the dataset computed by PGAN. Then, the cardiomegaly is determined using various attempts of different cut-off threshold values. With the standard cut-off at 0.50, the proposed method achieves 94.61% accuracy, 88.31% sensitivity and 94.20% specificity.Originality/valueThe proposed solution is demonstrated to be robust across unseen datasets for the segmentation, CTR calculation and cardiomegaly classification, including the dataset generated from PGAN. The cut-off value can be adjusted to be lower than 0.50 for increasing the sensitivity. For example, the sensitivity of 97.04% can be achieved at the cut-off of 0.45. However, the specificity is decreased from 94.20% to 79.78%.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49561142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cyber threat: its origins and consequence and the use of qualitative and quantitative methods in cyber risk assessment","authors":"James R. Crotty, E. Daniel","doi":"10.1108/aci-07-2022-0178","DOIUrl":"https://doi.org/10.1108/aci-07-2022-0178","url":null,"abstract":"PurposeConsumers increasingly rely on organisations for online services and data storage while these same institutions seek to digitise the information assets they hold to create economic value. Cybersecurity failures arising from malicious or accidental actions can lead to significant reputational and financial loss which organisations must guard against. Despite having some critical weaknesses, qualitative cybersecurity risk analysis is widely used in developing cybersecurity plans. This research explores these weaknesses, considers how quantitative methods might address the constraints and seeks the insights and recommendations of leading cybersecurity practitioners on the use of qualitative and quantitative cyber risk assessment methods.Design/methodology/approachThe study is based upon a literature review and thematic analysis of in-depth qualitative interviews with 16 senior cybersecurity practitioners representing financial services and advisory companies from across the world.FindingsWhile most organisations continue to rely on qualitative methods for cybersecurity risk assessment, some are also actively using quantitative approaches to enhance their cybersecurity planning efforts. The primary recommendation of this paper is that organisations should adopt both a qualitative and quantitative cyber risk assessment approach.Originality/valueThis work provides the first insight into how senior practitioners are using and combining qualitative and quantitative cybersecurity risk assessment, and highlights the need for in-depth comparisons of these two different approaches.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44565165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting and staging diabetic retinopathy in retinal images using multi-branch CNN","authors":"Worapan Kusakunniran, Sarattha Karnjanapreechakorn, Pitipol Choopong, T. Siriapisith, N. Tesavibul, N. Phasukkijwatana, S. Prakhunhungsit, Sutasinee Boonsopon","doi":"10.1108/aci-06-2022-0150","DOIUrl":"https://doi.org/10.1108/aci-06-2022-0150","url":null,"abstract":"PurposeThis paper aims to propose a solution for detecting and grading diabetic retinopathy (DR) in retinal images using a convolutional neural network (CNN)-based approach. It could classify input retinal images into a normal class or an abnormal class, which would be further split into four stages of abnormalities automatically.Design/methodology/approachThe proposed solution is developed based on a newly proposed CNN architecture, namely, DeepRoot. It consists of one main branch, which is connected by two side branches. The main branch is responsible for the primary feature extractor of both high-level and low-level features of retinal images. Then, the side branches further extract more complex and detailed features from the features outputted from the main branch. They are designed to capture details of small traces of DR in retinal images, using modified zoom-in/zoom-out and attention layers.FindingsThe proposed method is trained, validated and tested on the Kaggle dataset. The regularization of the trained model is evaluated using unseen data samples, which were self-collected from a real scenario from a hospital. It achieves a promising performance with a sensitivity of 98.18% under the two classes scenario.Originality/valueThe new CNN-based architecture (i.e. DeepRoot) is introduced with the concept of a multi-branch network. It could assist in solving a problem of an unbalanced dataset, especially when there are common characteristics across different classes (i.e. four stages of DR). Different classes could be outputted at different depths of the network.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49366416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A lightweight deep learning approach to mouth segmentation in color images","authors":"Kittisak Chotikkakamthorn, P. Ritthipravat, Worapan Kusakunniran, Pimchanok Tuakta, Paitoon Benjapornlert","doi":"10.1108/aci-08-2022-0225","DOIUrl":"https://doi.org/10.1108/aci-08-2022-0225","url":null,"abstract":"PurposeMouth segmentation is one of the challenging tasks of development in lip reading applications due to illumination, low chromatic contrast and complex mouth appearance. Recently, deep learning methods effectively solved mouth segmentation problems with state-of-the-art performances. This study presents a modified Mobile DeepLabV3 based technique with a comprehensive evaluation based on mouth datasets.Design/methodology/approachThis paper presents a novel approach to mouth segmentation by Mobile DeepLabV3 technique with integrating decode and auxiliary heads. Extensive data augmentation, online hard example mining (OHEM) and transfer learning have been applied. CelebAMask-HQ and the mouth dataset from 15 healthy subjects in the department of rehabilitation medicine, Ramathibodi hospital, are used in validation for mouth segmentation performance.FindingsExtensive data augmentation, OHEM and transfer learning had been performed in this study. This technique achieved better performance on CelebAMask-HQ than existing segmentation techniques with a mean Jaccard similarity coefficient (JSC), mean classification accuracy and mean Dice similarity coefficient (DSC) of 0.8640, 93.34% and 0.9267, respectively. This technique also achieved better performance on the mouth dataset with a mean JSC, mean classification accuracy and mean DSC of 0.8834, 94.87% and 0.9367, respectively. The proposed technique achieved inference time usage per image of 48.12 ms.Originality/valueThe modified Mobile DeepLabV3 technique was developed with extensive data augmentation, OHEM and transfer learning. This technique gained better mouth segmentation performance than existing techniques. This makes it suitable for implementation in further lip-reading applications.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43420892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Use of chatbots for customer service in MSMEs","authors":"Jorge Cordero, L. Barba-Guaman, Franco Guamán","doi":"10.1108/aci-06-2022-0148","DOIUrl":"https://doi.org/10.1108/aci-06-2022-0148","url":null,"abstract":"PurposeThis research work aims to arise from developing new communication channels for customer service in micro, small and medium enterprises (MSMEs), such as chatbots. In particular, the results of the usability testing of three chatbots implemented in MSMEs are presented.Design/methodology/approachThe methodology employed includes participants, chatbot development platform, research methodology, software development methodology and usability test to contextualize the study's results.FindingsBased on the results obtained from the System Usability Scale (SUS) and considering the accuracy of the chatbot's responses, it is concluded that the level of satisfaction in using chatbots is high; therefore, if the chatbot is well integrated with the communication systems/channels of the MSMEs, the client receives an excellent, fast and efficient service.Originality/valueThe paper analyzes chatbots for customer service and presents the usability testing results of three chatbots implemented in MSMEs.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48045273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“Automatic” interpretation of multiple correspondence analysis (MCA) results for nonexpert users, using R programming","authors":"Stratos Moschidis, Angelos Markos, Athanasios C. Thanopoulos","doi":"10.1108/aci-07-2022-0191","DOIUrl":"https://doi.org/10.1108/aci-07-2022-0191","url":null,"abstract":"PurposeThe purpose of this paper is to create an automatic interpretation of the results of the method of multiple correspondence analysis (MCA) for categorical variables, so that the nonexpert user can immediately and safely interpret the results, which concern, as the authors know, the categories of variables that strongly interact and determine the trends of the subject under investigation.Design/methodology/approachThis study is a novel theoretical approach to interpreting the results of the MCA method. The classical interpretation of MCA results is based on three indicators: the projection (F) of the category points of the variables in factorial axes, the point contribution to axis creation (CTR) and the correlation (COR) of a point with an axis. The synthetic use of the aforementioned indicators is arduous, particularly for nonexpert users, and frequently results in misinterpretations. The current study has achieved a synthesis of the aforementioned indicators, so that the interpretation of the results is based on a new indicator, as correspondingly on an index, the well-known method principal component analysis (PCA) for continuous variables is based.FindingsTwo (2) concepts were proposed in the new theoretical approach. The interpretative axis corresponding to the classical factorial axis and the interpretative plane corresponding to the factorial plane that as it will be seen offer clear and safe interpretative results in MCA.Research limitations/implicationsIt is obvious that in the development of the proposed automatic interpretation of the MCA results, the authors do not have in the interpretative axes the actual projections of the points as is the case in the original factorial axes, but this is not of interest to the simple user who is only interested in being able to distinguish the categories of variables that determine the interpretation of the most pronounced trends of the phenomenon being examined.Practical implicationsThe results of this research can have positive implications for the dissemination of MCA as a method and its use as an integrated exploratory data analysis approach.Originality/valueInterpreting the MCA results presents difficulties for the nonexpert user and sometimes lead to misinterpretations. The interpretative difficulty persists in the MCA's other interpretative proposals. The proposed method of interpreting the MCA results clearly and accurately allows for the interpretation of its results and thus contributes to the dissemination of the MCA as an integrated method of categorical data analysis and exploration.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49127983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An empirical study on the use of a facial emotion recognition system in guidance counseling utilizing the technology acceptance model and the general comfort questionnaire","authors":"Dhong Fhel K. Gom-os, Kelvin Y. Yong","doi":"10.1108/aci-06-2022-0154","DOIUrl":"https://doi.org/10.1108/aci-06-2022-0154","url":null,"abstract":"PurposeThe goal of this study is to test the real-world use of an emotion recognition system.Design/methodology/approachThe researchers chose an existing algorithm that displayed high accuracy and speed. Four emotions: happy, sadness, anger and surprise, are used from six of the universal emotions, associated by their own mood markers. The mood-matrix interface is then coded as a web application. Four guidance counselors and 10 students participated in the testing of the mood-matrix. Guidance counselors answered the technology acceptance model (TAM) to assess its usefulness, and the students answered the general comfort questionnaire (GCQ) to assess their comfort levels.FindingsResults from TAM found that the mood-matrix has significant use for the guidance counselors and the GCQ finds that the students were comfortable during testing.Originality/valueNo study yet has tested an emotion recognition system applied to counseling or any mental health or psychological transactions.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41447638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subject independent emotion recognition using EEG and physiological signals – a comparative study","authors":"Manju Priya Arthanarisamy Ramaswamy, Suja Palaniswamy","doi":"10.1108/aci-03-2022-0080","DOIUrl":"https://doi.org/10.1108/aci-03-2022-0080","url":null,"abstract":"PurposeThe aim of this study is to investigate subject independent emotion recognition capabilities of EEG and peripheral physiological signals namely: electroocoulogram (EOG), electromyography (EMG), electrodermal activity (EDA), temperature, plethysmograph and respiration. The experiments are conducted on both modalities independently and in combination. This study arranges the physiological signals in order based on the prediction accuracy obtained on test data using time and frequency domain features.Design/methodology/approachDEAP dataset is used in this experiment. Time and frequency domain features of EEG and physiological signals are extracted, followed by correlation-based feature selection. Classifiers namely – Naïve Bayes, logistic regression, linear discriminant analysis, quadratic discriminant analysis, logit boost and stacking are trained on the selected features. Based on the performance of the classifiers on the test set, the best modality for each dimension of emotion is identified.Findings The experimental results with EEG as one modality and all physiological signals as another modality indicate that EEG signals are better at arousal prediction compared to physiological signals by 7.18%, while physiological signals are better at valence prediction compared to EEG signals by 3.51%. The valence prediction accuracy of EOG is superior to zygomaticus electromyography (zEMG) and EDA by 1.75% at the cost of higher number of electrodes. This paper concludes that valence can be measured from the eyes (EOG) while arousal can be measured from the changes in blood volume (plethysmograph). The sorted order of physiological signals based on arousal prediction accuracy is plethysmograph, EOG (hEOG + vEOG), vEOG, hEOG, zEMG, tEMG, temperature, EMG (tEMG + zEMG), respiration, EDA, while based on valence prediction accuracy the sorted order is EOG (hEOG + vEOG), EDA, zEMG, hEOG, respiration, tEMG, vEOG, EMG (tEMG + zEMG), temperature and plethysmograph.Originality/valueMany of the emotion recognition studies in literature are subject dependent and the limited subject independent emotion recognition studies in the literature report an average of leave one subject out (LOSO) validation result as accuracy. The work reported in this paper sets the baseline for subject independent emotion recognition using DEAP dataset by clearly specifying the subjects used in training and test set. In addition, this work specifies the cut-off score used to classify the scale as low or high in arousal and valence dimensions. Generally, statistical features are used for emotion recognition using physiological signals as a modality, whereas in this work, time and frequency domain features of physiological signals and EEG are used. 
This paper concludes that valence can be identified from EOG while arousal can be predicted from plethysmograph.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44210255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
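The subject-independent protocol described above (select features, train a named classifier, test on held-out subjects) can be sketched compactly in scikit-learn. The synthetic X, y and subject grouping below are placeholders for the DEAP features, and a univariate filter stands in for the paper's correlation-based feature selection.

```python
# Hedged sketch: subject-independent train/test split with feature selection
# and one of the named classifiers (LDA); data are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GroupShuffleSplit
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(320, 128))            # 32 subjects x 10 trials, 128 features
y = rng.integers(0, 2, size=320)           # binarised low/high arousal (or valence)
subjects = np.repeat(np.arange(32), 10)    # subject id per trial

# Split by subject so no individual appears in both train and test sets.
train_idx, test_idx = next(
    GroupShuffleSplit(test_size=0.25, random_state=0).split(X, y, groups=subjects))

clf = make_pipeline(SelectKBest(f_classif, k=32), LinearDiscriminantAnalysis())
clf.fit(X[train_idx], y[train_idx])
print(f"held-out-subject accuracy: {clf.score(X[test_idx], y[test_idx]):.3f}")
```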
{"title":"Measuring digital transformation in higher education institutions – content validity instrument","authors":"Lina María Castro Benavides, Johnny Alexander Tamayo Arias, D. Burgos, A. Martens","doi":"10.1108/aci-03-2022-0069","DOIUrl":"https://doi.org/10.1108/aci-03-2022-0069","url":null,"abstract":"PurposeThis study aims to validate the content of an instrument which identifies the organizational, sociocultural and technological characteristics that foster digital transformation (DT) in higher education institutions (HEIs) through the Delphi method.Design/methodology/approachThe methodology is quantitative, non-experimental, and descriptive in scope. First, expert judges were selected; Second, Aiken's V coefficients were obtained. Nine experts were considered for the validation.FindingsThis study’s findings show that the instrument has content validity and there was strong consensus among the judges. The instrument consists of 29 questions; 13 items adjusted and 2 merged.Originality/valueA novel instrument for measuring the DT at HEIs was designed and has content validity, evidenced by Aiken's V coefficients of 0.91 with a 0.05 significance, and consensus among judges evidenced by consensus coefficient of 0.81.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43097476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}