{"title":"An Inquiry into the Evolutionary Game among Tripartite Entities and Strategy Selection within the Framework of Personal Information Authorization","authors":"Jie Tang, Zhiyi Peng, Wei Wei","doi":"10.3390/bdcc8080090","DOIUrl":"https://doi.org/10.3390/bdcc8080090","url":null,"abstract":"Mobile applications (Apps) serve as vital conduits for information exchange in the mobile internet era, yet they also engender significant cybersecurity risks due to their real-time handling of vast quantities of data. This manuscript constructs a tripartite evolutionary game model, “users-App providers-government”, to illuminate a pragmatic pathway for orderly information circulation within the App marketplace and sustainable industry development. It then scrutinizes the evolutionary process and emergence conditions of their stabilizing equilibrium strategies and employs simulation analysis via MATLAB. The findings reveal that (1) there exists a high degree of coupling among the strategic selections of the three parties, wherein any alteration in one actor’s decision-making trajectory exerts an impact on the evolutionary course of the remaining two actors. (2) The initial strategies significantly influence the pace of evolutionary progression and its outcome. Broadly speaking, the higher the initial probabilities of users opting for information authorization, App providers adopting compliant data solicitation practices, and the government enforcing stringent oversight, the more facile the attainment of an evolutionarily optimal solution. (3) The strategic preferences of the triadic stakeholders are subject to a composite influence of respective costs, benefits, and losses. Of these, users’ perceived benefits serve as the impetus for their strategic decisions, while privacy concerns act as a deterrent. App providers’ strategy decisions are influenced by a number of important elements, including their corporate reputation and fines levied by the government. Costs associated with government regulations are the main barrier to the adoption of strict supervision practices. Drawing upon these analytical outcomes, we posit several feasible strategies.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141926301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generative Artificial Intelligence: Analyzing Its Future Applications in Additive Manufacturing","authors":"Erik Westphal, Hermann Seitz","doi":"10.3390/bdcc8070074","DOIUrl":"https://doi.org/10.3390/bdcc8070074","url":null,"abstract":"New developments in the field of artificial intelligence (AI) are increasingly finding their way into industrial areas such as additive manufacturing (AM). Generative AI (GAI) applications in particular offer interesting possibilities here, for example, to generate texts, images or computer codes with the help of algorithms and to integrate these as useful supports in various AM processes. This paper examines the opportunities that GAI offers specifically for additive manufacturing. There are currently relatively few publications that deal with the topic of GAI in AM. Much of the information has only been published in preprints. There, the focus has been on algorithms for Natural Language Processing (NLP), Large Language Models (LLMs) and generative adversarial networks (GANs). This summarised presentation of the state of the art of GAI in AM is new and the link to specific use cases is this first comprehensive case study on GAI in AM processes. Building on this, three specific use cases are then developed in which generative AI tools are used to optimise AM processes. Finally, a Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis is carried out on the general possibilities of GAI, which forms the basis for an in-depth discussion on the sensible use of GAI tools in AM. The key findings of this work are that GAI can be integrated into AM processes as a useful support, making these processes faster and more creative, as well as to make the process information digitally recordable and usable. This current and future potential, as well as the technical implementation of GAI into AM, is also presented and explained visually. It is also shown where the use of generative AI tools can be useful and where current or future potential risks may arise.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141836903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Insights into Industrial Efficiency: An Empirical Study of Blockchain Technology","authors":"K. Douaioui, Othmane Benmoussa","doi":"10.3390/bdcc8060062","DOIUrl":"https://doi.org/10.3390/bdcc8060062","url":null,"abstract":"Blockchain technology is expected to have a radical impact on most industries by boosting security, transparency, and efficiency. This work considers the potential benefits of blockchain-focused applications in industrial process monitoring. The research design facilitates a detailed bibliometric analysis and delivers insights into the intellectual structure of blockchain technology’s application in industry via scientometric approaches. The work also approaches numerous sources in various industrial sectors to identify the transformative role of blockchain in industrial processes. Aspects such as blockchain technology’s impact on industrial processes’ transparency are discussed, while the paper does not ignore that success stories in applying blockchain to industrial sectors are often exaggerated due to a highly competitive environment that the cryptocurrency domain has become. Finally, the work presents major research avenues and decision-making areas that should be tackled to maximize the disruptive potential of blockchain and create a secure, transparent, and inclusive future.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141267675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing Trends in Digital Transformation Korean Social Media Data: A Semantic Network Analysis","authors":"Jong-Hwi Song, Byung-Suk Seo","doi":"10.3390/bdcc8060061","DOIUrl":"https://doi.org/10.3390/bdcc8060061","url":null,"abstract":"This study explores the impact of digital transformation on Korean society by analyzing Korean social media data, focusing on the societal and economic effects triggered by advancements in digital technology. Utilizing text mining techniques and semantic network analysis, we extracted key terms and their relationships from online news and blogs, identifying major themes related to digital transformation. Our analysis, based on data collected from major Korean portals using various related search terms, provides deep insights into how digital evolution influences individuals, businesses, and government sectors. The findings offer a comprehensive view of the technological and social trends emerging from digital transformation, including its policy, economic, and educational implications. This research not only sheds light on the understanding and strategic approaches to digital transformation in Korea but also demonstrates the potential of social media data in analyzing the societal impact of technological advancements, offering valuable resources for future research in effectively navigating the era of digital change.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141267056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Self-Supervised Learning through Explainable Artificial Intelligence Mechanisms: A Computational Analysis","authors":"Elie Neghawi, Yan Liu","doi":"10.3390/bdcc8060058","DOIUrl":"https://doi.org/10.3390/bdcc8060058","url":null,"abstract":"Self-supervised learning continues to drive advancements in machine learning. However, the absence of unified computational processes for benchmarking and evaluation remains a challenge. This study conducts a comprehensive analysis of state-of-the-art self-supervised learning algorithms, emphasizing their underlying mechanisms and computational intricacies. Building upon this analysis, we introduce a unified model-agnostic computation (UMAC) process, tailored to complement modern self-supervised learning algorithms. UMAC serves as a model-agnostic and global explainable artificial intelligence (XAI) methodology that is capable of systematically integrating and enhancing state-of-the-art algorithms. Through UMAC, we identify key computational mechanisms and craft a unified framework for self-supervised learning evaluation. Leveraging UMAC, we integrate an XAI methodology to enhance transparency and interpretability. Our systematic approach yields a 17.12% increase in improvement in training time complexity and a 13.1% boost in improvement in testing time complexity. Notably, improvements are observed in augmentation, encoder architecture, and auxiliary components within the network classifier. These findings underscore the importance of structured computational processes in enhancing model efficiency and fortifying algorithmic transparency in self-supervised learning, paving the way for more interpretable and efficient AI models.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141269231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Efficient Probabilistic Algorithm to Detect Periodic Patterns in Spatio-Temporal Datasets","authors":"Claudio Gutiérrez-Soto, Patricio Galdames, Marco A. Palomino","doi":"10.3390/bdcc8060059","DOIUrl":"https://doi.org/10.3390/bdcc8060059","url":null,"abstract":"Deriving insight from data is a challenging task for researchers and practitioners, especially when working on spatio-temporal domains. If pattern searching is involved, the complications introduced by temporal data dimensions create additional obstacles, as traditional data mining techniques are insufficient to address spatio-temporal databases (STDBs). We hereby present a new algorithm, which we refer to as F1/FP, and can be described as a probabilistic version of the Minus-F1 algorithm to look for periodic patterns. To the best of our knowledge, no previous work has compared the most cited algorithms in the literature to look for periodic patterns—namely, Apriori, MS-Apriori, FP-Growth, Max-Subpattern, and PPA. Thus, we have carried out such comparisons and then evaluated our algorithm empirically using two datasets, showcasing its ability to handle different types of periodicity and data distributions. By conducting such a comprehensive comparative analysis, we have demonstrated that our newly proposed algorithm has a smaller complexity than the existing alternatives and speeds up the performance regardless of the size of the dataset. We expect our work to contribute greatly to the mining of astronomical data and the permanently growing online streams derived from social media.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141272498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image-Based Leaf Disease Recognition Using Transfer Deep Learning with a Novel Versatile Optimization Module","authors":"Petra Radočaj, Dorijan Radočaj, Goran Martinović","doi":"10.3390/bdcc8060052","DOIUrl":"https://doi.org/10.3390/bdcc8060052","url":null,"abstract":"Due to the projected increase in food production by 70% in 2050, crops should be additionally protected from diseases and pests to ensure a sufficient food supply. Transfer deep learning approaches provide a more efficient solution than traditional methods, which are labor-intensive and struggle to effectively monitor large areas, leading to delayed disease detection. This study proposed a versatile module based on the Inception module, Mish activation function, and Batch normalization (IncMB) as a part of deep neural networks. A convolutional neural network (CNN) with transfer learning was used as the base for evaluated approaches for tomato disease detection: (1) CNNs, (2) CNNs with a support vector machine (SVM), and (3) CNNs with the proposed IncMB module. In the experiment, the public dataset PlantVillage was used, containing images of six different tomato leaf diseases. The best results were achieved by the pre-trained InceptionV3 network, which contains an IncMB module with an accuracy of 97.78%. In three out of four cases, the highest accuracy was achieved by networks containing the proposed IncMB module in comparison to evaluated CNNs. The proposed IncMB module represented an improvement in the early detection of plant diseases, providing a basis for timely leaf disease detection.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141107501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of Context-Based Sentiment Classification for Intelligent Stock Market Prediction","authors":"Nurmaganbet Smatov, Ruslan Kalashnikov, Amandyk Kartbayev","doi":"10.3390/bdcc8060051","DOIUrl":"https://doi.org/10.3390/bdcc8060051","url":null,"abstract":"This paper presents a novel approach to sentiment analysis specifically customized for predicting stock market movements, bypassing the need for external dictionaries that are often unavailable for many languages. Our methodology directly analyzes textual data, with a particular focus on context-specific sentiment words within neural network models. This specificity ensures that our sentiment analysis is both relevant and accurate in identifying trends in the stock market. We employ sophisticated mathematical modeling techniques to enhance both the precision and interpretability of our models. Through meticulous data handling and advanced machine learning methods, we leverage large datasets from Twitter and financial markets to examine the impact of social media sentiment on financial trends. We achieved an accuracy exceeding 75%, highlighting the effectiveness of our modeling approach, which we further refined into a convolutional neural network model. This achievement contributes valuable insights into sentiment analysis within the financial domain, thereby improving the overall clarity of forecasting in this field.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141111099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Model for Enhancing Unstructured Big Data Warehouse Execution Time","authors":"Marwa Salah Farhan, Amira Youssef, Laila Abdelhamid","doi":"10.3390/bdcc8020017","DOIUrl":"https://doi.org/10.3390/bdcc8020017","url":null,"abstract":"Traditional data warehouses (DWs) have played a key role in business intelligence and decision support systems. However, the rapid growth of the data generated by the current applications requires new data warehousing systems. In big data, it is important to adapt the existing warehouse systems to overcome new issues and limitations. The main drawbacks of traditional Extract–Transform–Load (ETL) are that a huge amount of data cannot be processed over ETL and that the execution time is very high when the data are unstructured. This paper focuses on a new model consisting of four layers: Extract–Clean–Load–Transform (ECLT), designed for processing unstructured big data, with specific emphasis on text. The model aims to reduce execution time through experimental procedures. ECLT is applied and tested using Spark, which is a framework employed in Python. Finally, this paper compares the execution time of ECLT with different models by applying two datasets. Experimental results showed that for a data size of 1 TB, the execution time of ECLT is 41.8 s. When the data size increases to 1 million articles, the execution time is 119.6 s. These findings demonstrate that ECLT outperforms ETL, ELT, DELT, ELTL, and ELTA in terms of execution time.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139800315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}