{"title":"A review of cross-border cooperation regulation for digital forensics in LATAM from the soft systems methodology","authors":"Lelia Cristina Díaz-Pérez, Ana L. Quintanar-Reséndiz, Graciela Vázquez-Álvarez, R. Vázquez-Medina","doi":"10.1108/aci-01-2022-0010","DOIUrl":"https://doi.org/10.1108/aci-01-2022-0010","url":null,"abstract":"PurposeBased on this holistic model, the authors propose and analyze seven key issues related to the admissibility of digital media in cross-border trials considering four Latin American countries.Design/methodology/approachThe authors apply the modeling process of the soft systems methodology by Checkland in order to develop a holistic model focused on human situation problems involving digital media and information technology devices or systems.FindingsThe authors discuss the status of the identified key issues in each country and offer a perspective on the integration of cross-border work analyzing the contribution of these key issues to the collaboration between countries criminal cases or the use of foreign digital artifacts in domestic trials.Research limitations/implicationsIn this study, the authors assumed that the problems of official interaction between agencies of different countries are considered solved. However, for future studies or research, the authors recommend that these issues can be considered as relevant, since they are related to cross-border cooperation topics that will necessarily require unavoidable official arrangements, agreements and formalities.Practical implicationsThis work is aimed at defining and analyzing the key issues that can contribute to the application of current techniques and methodologies in digital forensics as a tool to support the legal framework of each country, considering cross-border trials. Finally, the authors highlight the implications of this study lie in the identification and analysis of the key issues that must be considered for digital forensics as a support tool for the admissibility of digital evidence in cross-border trials.Social implicationsThe authors consider that digital forensic will have high demand in cross-border trials, and it will depend on the people mobility between the countries considered in this study.Originality/valueThis paper shows that the soft systems methodology allows elaborating a holistic model focused on social problems involving digital media and informatics devices.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49174800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving handwritten digit recognition using hybrid feature selection algorithm","authors":"Fung Yuen Chin, K. Lem, Khye Mun Wong","doi":"10.1108/aci-02-2022-0054","DOIUrl":"https://doi.org/10.1108/aci-02-2022-0054","url":null,"abstract":"PurposeThe amount of features in handwritten digit data is often very large due to the different aspects in personal handwriting, leading to high-dimensional data. Therefore, the employment of a feature selection algorithm becomes crucial for successful classification modeling, because the inclusion of irrelevant or redundant features can mislead the modeling algorithms, resulting in overfitting and decrease in efficiency.Design/methodology/approachThe minimum redundancy and maximum relevance (mRMR) and the recursive feature elimination (RFE) are two frequently used feature selection algorithms. While mRMR is capable of identifying a subset of features that are highly relevant to the targeted classification variable, mRMR still carries the weakness of capturing redundant features along with the algorithm. On the other hand, RFE is flawed by the fact that those features selected by RFE are not ranked by importance, albeit RFE can effectively eliminate the less important features and exclude redundant features.FindingsThe hybrid method was exemplified in a binary classification between digits “4” and “9” and between digits “6” and “8” from a multiple features dataset. The result showed that the hybrid mRMR + support vector machine recursive feature elimination (SVMRFE) is better than both the sole support vector machine (SVM) and mRMR.Originality/valueIn view of the respective strength and deficiency mRMR and RFE, this study combined both these methods and used an SVM as the underlying classifier anticipating the mRMR to make an excellent complement to the SVMRFE.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46024776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis and optimization of Dual Parallel Partition Sorting with OpenMP","authors":"Sirilak Ketchaya, A. Rattanatranurak","doi":"10.1108/aci-10-2021-0288","DOIUrl":"https://doi.org/10.1108/aci-10-2021-0288","url":null,"abstract":"PurposeSorting is a very important algorithm to solve problems in computer science. The most well-known divide and conquer sorting algorithm is quicksort. It starts with dividing the data into subarrays and finally sorting them.Design/methodology/approachIn this paper, the algorithm named Dual Parallel Partition Sorting (DPPSort) is analyzed and optimized. It consists of a partitioning algorithm named Dual Parallel Partition (DPPartition). The DPPartition is analyzed and optimized in this paper and sorted with standard sorting functions named qsort and STLSort which are quicksort, and introsort algorithms, respectively. This algorithm is run on any shared memory/multicore systems. OpenMP library which supports multiprocessing programming is developed to be compatible with C/C++ standard library function. The authors’ algorithm recursively divides an unsorted array into two halves equally in parallel with Lomuto's partitioning and merge without compare-and-swap instructions. Then, qsort/STLSort is executed in parallel while the subarray is smaller than the sorting cutoff.FindingsIn the authors’ experiments, the 4-core Intel i7-6770 with Ubuntu Linux system is implemented. DPPSort is faster than qsort and STLSort up to 6.82× and 5.88× on Uint64 random distributions, respectively.Originality/valueThe authors can improve the performance of the parallel sorting algorithm by reducing the compare-and-swap instructions in the algorithm. This concept can be used to develop related problems to increase speedup of algorithms.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46022216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A literature review on users' behavioral intention toward chatbots' adoption","authors":"Paraskevi Gatzioufa, Vaggelis Saprikis","doi":"10.1108/aci-01-2022-0021","DOIUrl":"https://doi.org/10.1108/aci-01-2022-0021","url":null,"abstract":"PurposeDespite the fact that chatbots have been largely adopted for the last few years, a comprehensive literature review research focusing on the intention of individuals to adopt chatbots is rather scarce. In this respect, the present paper attempts a literature review investigation of empirical studies focused on the specific issue in nine scientific databases during 2017-2021. Specifically, it aims to classify extant empirical studies which focus on the context of individuals' adoption intention toward chatbots.Design/methodology/approachThe research is based on PRISMA methodology, which revealed a total of 39 empirical studies examining users' intention to adopt and utilize chatbots.FindingsAfter a thorough investigation, distinct categorization criteria emerged, such as research field, applied theoretical models, research types, methods and statistical measures, factors affecting intention to adopt and further use chatbots, the countries/continents where these surveys took place as well as relevant research citations and year of publication. In addition, the paper highlights research gaps in the examined issue and proposes future research directions in such a promising information technology solution.Originality/valueAs far as the authors are concerned, there has not been any other comprehensive literature review research to focus on examining previous empirical studies of users' intentions to adopt and use chatbots on the aforementioned period. According to the authors' knowledge, the present paper is the first attempt in the field which demonstrates broad literature review data of relevant empirical studies.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42009830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The construction of an accurate Arabic sentiment analysis system based on resources alteration and approaches comparison","authors":"Ibtissam Touahri","doi":"10.1108/aci-12-2021-0338","DOIUrl":"https://doi.org/10.1108/aci-12-2021-0338","url":null,"abstract":"PurposeThis paper purposed a multi-facet sentiment analysis system.Design/methodology/approachHence, This paper uses multidomain resources to build a sentiment analysis system. The manual lexicon based features that are extracted from the resources are fed into a machine learning classifier to compare their performance afterward. The manual lexicon is replaced with a custom BOW to deal with its time consuming construction. To help the system run faster and make the model interpretable, this will be performed by employing different existing and custom approaches such as term occurrence, information gain, principal component analysis, semantic clustering, and POS tagging filters.FindingsThe proposed system featured by lexicon extraction automation and characteristics size optimization proved its efficiency when applied to multidomain and benchmark datasets by reaching 93.59% accuracy which makes it competitive to the state-of-the-art systems.Originality/valueThe construction of a custom BOW. Optimizing features based on existing and custom feature selection and clustering approaches.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42984752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of ensemble recurrent model with stacked fuzzy ARTMAP for breast cancer detection","authors":"Abhishek Das, M. Mohanty","doi":"10.1108/aci-03-2022-0075","DOIUrl":"https://doi.org/10.1108/aci-03-2022-0075","url":null,"abstract":"PurposeIn time and accurate detection of cancer can save the life of the person affected. According to the World Health Organization (WHO), breast cancer occupies the most frequent incidence among all the cancers whereas breast cancer takes fifth place in the case of mortality numbers. Out of many image processing techniques, certain works have focused on convolutional neural networks (CNNs) for processing these images. However, deep learning models are to be explored well.Design/methodology/approachIn this work, multivariate statistics-based kernel principal component analysis (KPCA) is used for essential features. KPCA is simultaneously helpful for denoising the data. These features are processed through a heterogeneous ensemble model that consists of three base models. The base models comprise recurrent neural network (RNN), long short-term memory (LSTM) and gated recurrent unit (GRU). The outcomes of these base learners are fed to fuzzy adaptive resonance theory mapping (ARTMAP) model for decision making as the nodes are added to the F_2ˆa layer if the winning criteria are fulfilled that makes the ARTMAP model more robust.FindingsThe proposed model is verified using breast histopathology image dataset publicly available at Kaggle. The model provides 99.36% training accuracy and 98.72% validation accuracy. The proposed model utilizes data processing in all aspects, i.e. image denoising to reduce the data redundancy, training by ensemble learning to provide higher results than that of single models. The final classification by a fuzzy ARTMAP model that controls the number of nodes depending upon the performance makes robust accurate classification.Research limitations/implicationsResearch in the field of medical applications is an ongoing method. More advanced algorithms are being developed for better classification. Still, the scope is there to design the models in terms of better performance, practicability and cost efficiency in the future. Also, the ensemble models may be chosen with different combinations and characteristics. Only signal instead of images may be verified for this proposed model. Experimental analysis shows the improved performance of the proposed model. This method needs to be verified using practical models. Also, the practical implementation will be carried out for its real-time performance and cost efficiency.Originality/valueThe proposed model is utilized for denoising and to reduce the data redundancy so that the feature selection is done using KPCA. Training and classification are performed using heterogeneous ensemble model designed using RNN, LSTM and GRU as base classifiers to provide higher results than that of single models. Use of adaptive fuzzy mapping model makes the final classification accurate. 
The effectiveness of combining these methods to a single model is analyzed in this work.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62011471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
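The following conceptual Keras sketch mirrors the KPCA-plus-recurrent-ensemble idea; it is not the authors' model. Synthetic feature vectors stand in for the histopathology images, the reshape into a short sequence is an assumed way of giving the recurrent layers a time axis, and simple probability averaging replaces the fuzzy ARTMAP combiner used in the paper.

```python
# Conceptual sketch of the KPCA + {RNN, LSTM, GRU} ensemble on synthetic data;
# simple probability averaging stands in for the paper's fuzzy ARTMAP combiner.
import numpy as np
from sklearn.decomposition import KernelPCA
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))                    # stand-in for flattened image patches
y = (X[:, :8].sum(axis=1) > 0).astype("float32")  # synthetic binary labels

# KPCA for feature extraction/denoising, then reshape into a short sequence so
# the recurrent base learners have a time axis to consume (an assumed layout).
Z = KernelPCA(n_components=32, kernel="rbf").fit_transform(X).reshape(-1, 8, 4)

def base_model(cell):
    m = keras.Sequential([
        layers.Input(shape=(8, 4)),
        cell(16),
        layers.Dense(1, activation="sigmoid"),
    ])
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return m

learners = [base_model(c) for c in (layers.SimpleRNN, layers.LSTM, layers.GRU)]
for m in learners:
    m.fit(Z, y, epochs=3, batch_size=32, verbose=0)

# Combine the base-learner outputs (the paper feeds these into fuzzy ARTMAP).
probs = np.mean([m.predict(Z, verbose=0) for m in learners], axis=0)
print("ensemble training accuracy:", float(((probs[:, 0] > 0.5) == y).mean()))
```
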
{"title":"Autonomous cycles of data analysis tasks for innovation processes in MSMEs","authors":"Anabel Gutierrez, José Aguilar, A. Ortega, E. Montoya","doi":"10.1108/aci-02-2022-0048","DOIUrl":"https://doi.org/10.1108/aci-02-2022-0048","url":null,"abstract":"PurposeThe authors propose the concept of “Autonomic Cycle for innovation processes,” which defines a set of tasks of data analysis, whose objective is to improve the innovation process in micro-, small and medium-sized enterprises (MSMEs).Design/methodology/approachThe authors design autonomic cycles where each data analysis task interacts with each other and has different roles: some of them must observe the innovation process, others must analyze and interpret what happens in it, and finally, others make decisions in order to improve the innovation process.FindingsIn this article, the authors identify three innovation sub-processes which can be applied to autonomic cycles, which allow interoperating the actors of innovation processes (data, people, things and services). These autonomic cycles define an innovation problem, specify innovation requirements, and finally, evaluate the results of the innovation process, respectively. Finally, the authors instance/apply the autonomic cycle of data analysis tasks to determine the innovation problem in the textile industry.Research limitations/implicationsIt is necessary to implement all autonomous cycles of data analysis tasks (ACODATs) in a real scenario to verify their functionalities. Also, it is important to determine the most important knowledge models required in the ACODAT for the definition of the innovation problem. Once determined this, it is necessary to define the relevant everything mining techniques required for their implementations, such as service and process mining tasks.Practical implicationsACODAT for the definition of the innovation problem is essential in a process innovation because it allows the organization to identify opportunities for improvement.Originality/valueThe main contributions of this work are: For an innovation process is specified its ACODATs in order to manage it. A multidimensional data model for the management of an innovation process is defined, which stores the required information of the organization and of the context. The ACODAT for the definition of the innovation problem is detailed and instanced in the textile industry. The Artificial Intelligence (AI) techniques required for the ACODAT for the innovation problem definition are specified, in order to obtain the knowledge models (prediction and diagnosis) for the management of the innovation process for MSMEs of the textile industry.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46525724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ubi-Flex-Cloud: ubiquitous flexible cloud computing: status quo and research imperatives","authors":"Akhilesh S. Thyagaturu, G. Nguyen, B. Rimal, M. Reisslein","doi":"10.1108/aci-02-2022-0029","DOIUrl":"https://doi.org/10.1108/aci-02-2022-0029","url":null,"abstract":"PurposeCloud computing originated in central data centers that are connected to the backbone of the Internet. The network transport to and from a distant data center incurs long latencies that hinder modern low-latency applications. In order to flexibly support the computing demands of users, cloud computing is evolving toward a continuum of cloud computing resources that are distributed between the end users and a distant data center. The purpose of this review paper is to concisely summarize the state-of-the-art in the evolving cloud computing field and to outline research imperatives.Design/methodology/approachThe authors identify two main dimensions (or axes) of development of cloud computing: the trend toward flexibility of scaling computing resources, which the authors denote as Flex-Cloud, and the trend toward ubiquitous cloud computing, which the authors denote as Ubi-Cloud. Along these two axes of Flex-Cloud and Ubi-Cloud, the authors review the existing research and development and identify pressing open problems.FindingsThe authors find that extensive research and development efforts have addressed some Ubi-Cloud and Flex-Cloud challenges resulting in exciting advances to date. However, a wide array of research challenges remains open, thus providing a fertile field for future research and development.Originality/valueThis review paper is the first to define the concept of the Ubi-Flex-Cloud as the two-dimensional research and design space for cloud computing research and development. The Ubi-Flex-Cloud concept can serve as a foundation and reference framework for planning and positioning future cloud computing research and development efforts.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47725975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Email classification analysis using machine learning techniques","authors":"Khalid Iqbal, Muhammad Shehrayar Khan","doi":"10.1108/aci-01-2022-0012","DOIUrl":"https://doi.org/10.1108/aci-01-2022-0012","url":null,"abstract":"PurposeIn this digital era, email is the most pervasive form of communication between people. Many users become a victim of spam emails and their data have been exposed.Design/methodology/approachResearchers contribute to solving this problem by a focus on advanced machine learning algorithms and improved models for detecting spam emails but there is still a gap in features. To achieve good results, features also play an important role. To evaluate the performance of applied classifiers, 10-fold cross-validation is used.FindingsThe results approve that the spam emails are correctly classified with the accuracy of 98.00% for the Support Vector Machine and 98.06% for the Artificial Neural Network as compared to other applied machine learning classifiers.Originality/valueIn this paper, Point-Biserial correlation is applied to each feature concerning the class label of the University of California Irvine (UCI) spambase email dataset to select the best features. Extensive experiments are conducted on selected features by training the different classifiers.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49628074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Realizing the promise of big data: how Taiwan can help the world reduce medical errors and advance precision medicine","authors":"Kevin Wang, P. Muennig","doi":"10.1108/aci-11-2021-0298","DOIUrl":"https://doi.org/10.1108/aci-11-2021-0298","url":null,"abstract":"PurposeThe study explores how Taiwan’s electronic health data systems can be used to build algorithms that reduce or eliminate medical errors and to advance precision medicine.Design/methodology/approachThis study is a narrative review of the literature.FindingsThe body of medical knowledge has grown far too large for human clinicians to parse. In theory, electronic health records could augment clinical decision-making with electronic clinical decision support systems (CDSSs). However, computer scientists and clinicians have made remarkably little progress in building CDSSs, because health data tend to be siloed across many different systems that are not interoperable and cannot be linked using common identifiers. As a result, medicine in the USA is often practiced inconsistently with poor adherence to the best preventive and clinical practices. Poor information technology infrastructure contributes to medical errors and waste, resulting in suboptimal care and tens of thousands of premature deaths every year. Taiwan’s national health system, in contrast, is underpinned by a coordinated system of electronic data systems but remains underutilized. In this paper, the authors present a theoretical path toward developing artificial intelligence (AI)-driven CDSS systems using Taiwan’s National Health Insurance Research Database. Such a system could in theory not only optimize care and prevent clinical errors but also empower patients to track their progress in achieving their personal health goals.Originality/valueWhile research teams have previously built AI systems with limited applications, this study provides a framework for building global AI-based CDSS systems using one of the world’s few unified electronic health data systems.","PeriodicalId":37348,"journal":{"name":"Applied Computing and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41587739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}