{"title":"Short-term photovoltaic power forecasting using hybrid contrastive learning and temporal convolutional network under future meteorological information absence","authors":"Xiaoyang Lu, Yandang Chen, Qibin Li, Pingping Yu","doi":"10.1111/coin.12606","DOIUrl":"10.1111/coin.12606","url":null,"abstract":"<p>Photovoltaic (PV) power generation is widely utilized to satisfy the increasing energy demand due to its cleanness and inexhaustibility. Accurate PV power forecasting can improve the penetration of PV power in the grid. However, short-term PV power prediction is quite challenging when precise future meteorological information is unavailable. To address this problem, this study proposes a hybrid Contrastive Learning and Temporal Convolutional Network (CL-TCN) forecasting approach that consists of two parts: model training and an adaptive process for the forecasting models. In the model training stage, the method first trains 18 TCN models for the 18 time points from 9:00 to 17:30. These TCN models are trained using only historical PV power data samples, and each model predicts the power output of the next half hour. In the adaptive process, during practical forecasting, historical PV power samples are first evaluated and scored by a CL-based data scoring mechanism to find the samples most similar to the currently measured ones. These similar samples are then used to further train the corresponding well-trained TCN model, improving its performance in forecasting the next half-hour PV power. Experimental results at a time resolution of 30 min demonstrate that the proposed approach achieves superior forecasting accuracy on both smooth and fluctuating PV power samples. Moreover, the proposed CL-based data scoring mechanism can filter out useless data samples, effectively accelerating the forecasting process.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135316374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A progressive mesh simplification algorithm based on neural implicit representation","authors":"Yihua Chen","doi":"10.1111/coin.12605","DOIUrl":"10.1111/coin.12605","url":null,"abstract":"<p>A progressive mesh simplification (PM) algorithm aims to generate a simplified mesh at any resolution for an input high-precision mesh, and it only needs to be optimized or fitted once. Most existing PM algorithms are built on heuristic mesh simplification algorithms, which leads to redundant storage and poor practicability. In this article, a progressive mesh simplification algorithm based on neural implicit representation (NePM) is proposed. NePM transforms the simplification process into an implicit continuous optimization problem through a neural network and a probabilistic model. NePM uses a Gaussian mixture model to model the high-precision mesh and samples the probabilistic model to obtain simplified meshes at different resolutions. In addition, the simplified mesh is optimized through a multi-level neural network, preserving the characteristics of the input high-precision mesh. Thus, the algorithm in this work lowers the memory usage of PM and improves practicability while ensuring accuracy.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136014007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A localization method of manipulator towards achieving more precision control","authors":"Hongwei Gao, Hongyang Zhang, Yueqiu Jiang, Jian Sun, Jiahui Yu","doi":"10.1111/coin.12600","DOIUrl":"10.1111/coin.12600","url":null,"abstract":"<p>The monocular vision system, a research hotspot in the field of vision, is a crucial branch of machine vision widely used in many industries. Although the monocular vision system has a simple structure and is cost-effective, its positioning accuracy is insufficient for some industries. This article investigates a robot arm positioning method based on monocular vision. First, we built a vision system model and designed a cooperative target style for target positioning. Second, a condition-based target feature screening method was devised to cope with interference. Then, the pose estimation principle of the PnP (Perspective-n-Point) problem was combined with the results of the vision system calibration to locate the target. Finally, an experimental platform was constructed, and accuracy evaluation and positioning experiments were designed. The experimental results show that the position measurement error of the system is below 4 mm, and the rotation angle measurement error is below 2°. The system can meet the requirements of general industrial use.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135900101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A semantically enhanced text retrieval framework with abstractive summarization","authors":"Min Pan, Teng Li, Yu Liu, Quanli Pei, Ellen Anne Huang, Jimmy X. Huang","doi":"10.1111/coin.12603","DOIUrl":"10.1111/coin.12603","url":null,"abstract":"<p>Recently, large pretrained language models (PLMs) have led a revolution in the information retrieval community. In most PLM-based retrieval frameworks, ranking performance broadly depends on the model structure and the semantic complexity of the input text. Sequence-to-sequence generative models for question answering or text generation have proven competitive, so we ask whether these models can improve ranking effectiveness by enhancing input semantics. This article introduces SE-BERT, a semantically enhanced bidirectional encoder representations from transformers (BERT) based ranking framework that captures more semantic information by modifying the input text. SE-BERT utilizes a pretrained generative language model to summarize both sides of the candidate passage and concatenates them into a new input sequence, allowing BERT to acquire more semantic information within the constraints of the input sequence's length. Experimental results on two Text Retrieval Conference datasets demonstrate that our approach's effectiveness increases as the length of the input text increases.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/coin.12603","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135424975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel accelerated computing architecture for dim target tracking on-board","authors":"Jiyang Yu, Dan Huang, Wenjie Li, Xianjie Wang, Xiaolong Shi","doi":"10.1111/coin.12604","DOIUrl":"10.1111/coin.12604","url":null,"abstract":"<p>Real-time tracking of dim targets in space is mainly achieved through the correlation and prediction of detected points after the detection and calculation process. The on-board tracking calculation needs to be completed in milliseconds, and at high frame rates it needs to reach the microsecond level. For real-time tracking of dim targets in space, the tracking calculation must be accelerated in a universal way for different space regions and complex backgrounds, which places high demands on the engineering implementation architecture. This paper designs a parallel acceleration architecture in digital logic for the Kalman filter calculation used in on-board real-time dim target tracking. A unified Vector Processing Element (VPE) architecture was established for the Kalman filter matrix calculations, and a VPE-based array computing structure was designed to decompose the entire filtering process into a parallel pipelined data stream. The prediction errors under different fixed-point bit widths were analyzed and derived, and guidance for selecting the optimal bit width based on the statistical results was provided. The entire design was implemented on Xilinx's XC7K325T, yielding an energy efficiency improvement over previous designs. A single iteration takes no more than 0.7 microseconds, which meets current high-frame-rate target tracking requirements. The effectiveness of the design has been verified through simulation on random trajectory data, with results consistent with the theoretical calculation error.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2023-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135926288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using LSTM neural networks for cross-lingual phonetic speech segmentation with an iterative correction procedure","authors":"Zdeněk Hanzlíček, Jindřich Matoušek, Jakub Vít","doi":"10.1111/coin.12602","DOIUrl":"10.1111/coin.12602","url":null,"abstract":"<p>This article describes experiments on speech segmentation using long short-term memory recurrent neural networks. The main part of the paper deals with multi-lingual and cross-lingual segmentation, that is, segmentation performed on a language different from the one on which the model was trained. The experimental data comprise large Czech, English, German, and Russian speech corpora designed for speech synthesis. For optimal multi-lingual modeling, a compact phonetic alphabet was proposed by sharing and clustering the phones of the particular languages. Many experiments were performed exploring various experimental conditions and data combinations. We propose a simple procedure that iteratively adapts an inaccurate default model to a new voice/language. Segmentation accuracy was evaluated by comparison with reference segmentation created by a well-tuned hidden Markov model-based framework with additional manual corrections. The resulting segmentation was also employed in a unit selection text-to-speech system, and the quality of the generated speech was compared with that obtained using the reference segmentation in a preference listening test.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2023-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/coin.12602","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135063399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Retina disease prediction using modified <scp>convolutional neural network</scp> based on <scp>Inception‐ResNet</scp> model with <scp>support vector machine</scp> classifier","authors":"Arushi Jain, Vishal Bhatnagar, Annavarapu Chandra Sekhara Rao, Manju Khari","doi":"10.1111/coin.12601","DOIUrl":"https://doi.org/10.1111/coin.12601","url":null,"abstract":"Abstract Artificial intelligence and deep learning have aided ocular disease diagnosis through experiments including automatic illness recognition from images of the iris, fundus, or retina. Automated diagnosis systems (ADSs) provide services for the benefit of humanity and are essential in the early detection of harmful diseases; in fact, early detection is essential to avoid total blindness. In practice, several diagnostic tests such as visual ocular tonometry, retinal examination, and acuity tests are performed, but they are time-consuming and stressful for the patient. To save time and detect retinal disease earlier, an efficient prediction method is designed. In the proposed model, the first step is data collection, which consists of a retinal disease dataset for training and testing. The second step is pre-processing, which performs image resizing and noise filtering. The third step is feature extraction, which extracts the image's shape, size, color, and texture with a CNN based on Inception-ResNet V2. Classification is then performed by an SVM using the extracted features, and diseases are classified as normal, cataract, glaucoma, or retinal disease. The model's performance is assessed using indicators such as accuracy, error, sensitivity, and precision. The model's accuracy, sensitivity, precision, and error are 0.96, 0.962, 0.964, and 0.04, respectively, outperforming existing techniques such as VGG16, MobileNet V1, ResNet, and AlexNet. Thus, the proposed model promptly predicts retinal disease.","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136072330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel feature ranking algorithm for text classification: Brilliant probabilistic feature selector (BPFS)","authors":"Bekir Parlak","doi":"10.1111/coin.12599","DOIUrl":"https://doi.org/10.1111/coin.12599","url":null,"abstract":"Text classification (TC) is a crucial task in this era of high-volume text datasets. Feature selection (FS) is one of the most important stages in TC studies, and numerous FS methods have been recommended for TC in the literature. In the TC domain, filter-based FS methods are commonly utilized to select a more informative feature subset. Each method uses a scoring system based on its own algorithm to order the features, and classification is then carried out with the top-N features. However, each method's feature ordering is distinct from the others: each method assigns high scores to the features critical to its algorithm, but does not necessarily assign low scores to unimportant features. In this paper, we propose a novel filter-based FS method, the brilliant probabilistic feature selector (BPFS), to assign fair scores and select informative features. While the BPFS method selects unique features, it also aims to select sparse features by assigning them higher scores than common features. Extensive experiments using three effective classifiers, decision tree (DT), support vector machines (SVM), and multinomial naive Bayes (MNB), on four widely used datasets with different characteristics, Reuters-21578, 20Newsgroup, Enron1, and Polarity, demonstrate the success of the BPFS method. Feature dimensions of 20, 50, 100, 200, 500, and 1000 were used. The experimental results on these benchmark datasets show that the BPFS method is more successful than well-known and recent FS methods according to Micro-F1 and Macro-F1 scores.","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2023-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50145528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification analysis of burnout people's brain images using ontology-based speculative sense model","authors":"Chandrakirishnan Balakrishnan Sivaparthipan, Priyan Malarvizhi Kumar, Thota Chandu, BalaAnand Muthu, Mohammed Hasan Ali, Boris Tomaš","doi":"10.1111/coin.12595","DOIUrl":"https://doi.org/10.1111/coin.12595","url":null,"abstract":"<p>Burnout is a state of exhaustion that results from prolonged, excessive workplace stress. It can be examined through the biological explications of burnout and its physical consequences, and classified against prolonged vigorous activities. This research aims to classify the brain images of burnout sufferers against prolonged emotional activities using ontology analysis of treatment and prevention and intermediate-layer formation based on a speculative sense model. The methodology is implemented on an ontology creation platform, where the classification analysis is performed. The brain images of burnout sufferers were classified, prolonged vigorous activities were separated, and an ontology for treatment and prevention against burnout was created. Results for precision, recall, storage, computation time, specificity, and the classification of burnout sufferers' brain images were obtained. The proposed model achieved a prediction sensitivity (SN) over 50% and a specificity (SP) over 90%. The performance comparison shows that the proposed system is much more successful than existing methods, achieving a classification accuracy of 98%.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2023-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50122239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computation of persistent homology on streaming data using topological data summaries","authors":"Anindya Moitra, Nicholas O. Malott, Philip A. Wilsey","doi":"10.1111/coin.12597","DOIUrl":"https://doi.org/10.1111/coin.12597","url":null,"abstract":"<p>Persistent homology is a computationally intensive yet extremely powerful tool for Topological Data Analysis. Applying the tool to a potentially infinite sequence of data objects is a challenging task; for this reason, persistent homology and data stream mining have long been two important but disjoint areas of data science. The first computational model recently introduced to bridge the gap between the two areas is useful for detecting steady or gradual changes in data streams, such as certain genomic modifications during the evolution of species. However, that model is not suitable for applications that encounter abrupt changes of extremely short duration. This paper presents another model for computing persistent homology on streaming data that addresses this shortcoming. The model is validated on the important real-world application of network anomaly detection. It is shown that, in addition to detecting the occurrence of anomalies or attacks in computer networks, the proposed model can visually identify several types of traffic. Moreover, the model can accurately detect abrupt changes of both extremely short and longer duration in the network traffic. These capabilities are not achievable by the previous model or by traditional data mining techniques.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2023-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50148646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}