Advances in computational intelligence: Latest Articles

Solutions of Yang Baxter equation of symplectic Jordan superalgebras
Advances in computational intelligence Pub Date: 2021-12-17 DOI: 10.1007/s43674-021-00017-5
Amir Baklouti, Warda Bensalah, Khaled Al-Motairi
Abstract: We establish in this paper the equivalence between the existence of a solution of the Yang-Baxter equation on a Jordan superalgebra and the existence of a symplectic form on that Jordan superalgebra.
Citations: 0
An eigenvector approach for obtaining scale and orientation invariant classification in convolutional neural networks
Advances in computational intelligence Pub Date: 2021-12-17 DOI: 10.1007/s43674-021-00023-7
Swetha Velluva Chathoth, Asish Kumar Mishra, Deepak Mishra, Subrahmanyam Gorthi R. K. Sai
Abstract: Convolutional neural networks are well known for their efficiency in detecting and classifying objects once adequately trained. Although they handle shifts of the input up to a limit, appreciable rotation and scale invariance is not guaranteed by many existing CNN architectures, making them sensitive to rotation and scale variations of the input image or feature maps. Many attempts have been made to acquire rotation and scale invariance in CNNs. This paper proposes an efficient approach for incorporating rotation and scale invariance into CNN-based classification, based on the eigenvectors and eigenvalues of the image covariance matrix. Without requiring any training-data augmentation or architectural change, the proposed method, 'Scale and Orientation Corrected Networks (SOCN)', achieves better rotation- and scale-invariant performance. SOCN applies a scale and orientation correction step to images before baseline CNN training and testing. Being a generalized approach, SOCN can be combined with any baseline CNN to improve its rotation and scale invariance. We demonstrate the approach's scale and orientation invariant classification ability on several real cases, ranging from scale and orientation invariant character recognition to orientation invariant image classification, with suitable baseline architectures. Though simple, SOCN outperforms current state-of-the-art scale and orientation invariant classifiers with minimal training and testing time.
Citations: 2
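The correction step described in the abstract can be illustrated with a small sketch. This is a hypothetical reconstruction, not the paper's implementation: it treats the foreground pixel coordinates as samples, takes the eigenvectors of their covariance matrix, and rotates the image so the dominant eigenvector points vertically, giving every input a canonical orientation before it reaches the CNN (the eigenvalues could similarly drive a scale correction, omitted here).

```python
import numpy as np
from scipy import ndimage

def orientation_corrected(image, threshold=0.1):
    """Rotate an image so its principal axis is vertical.

    Hypothetical sketch of an eigenvector-based orientation
    correction; `threshold` (an assumption) picks foreground pixels.
    """
    ys, xs = np.nonzero(image > threshold * image.max())
    coords = np.stack([xs, ys], axis=1).astype(float)
    coords -= coords.mean(axis=0)           # centre the pixel cloud
    cov = np.cov(coords.T)                  # 2x2 coordinate covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
    principal = eigvecs[:, -1]              # dominant axis direction
    angle = np.degrees(np.arctan2(principal[1], principal[0]))
    # rotate so the principal axis lands on the vertical (90 degrees)
    return ndimage.rotate(image, angle - 90, reshape=False)
```

Because the correction is applied to the image alone, any baseline CNN can consume the output unchanged, which matches the abstract's claim that no architectural modification is needed.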
BCK codes
Advances in computational intelligence Pub Date: 2021-12-17 DOI: 10.1007/s43674-021-00018-4
Hashem Bordbar
Abstract: In this paper, we initiate the study of the notion of a BCK-function on an arbitrary set A, and provide connections with x-functions and x-subsets for x ∈ X, where X is a BCK-algebra. Moreover, using the notion of order in a BCK-algebra, the BCK-code C is introduced, and a new order structure on C is investigated. Finally, we show that the BCK-algebra X and the BCK-code C generated by X carry the same order structure.
Citations: 2
Caristi type mappings and characterization of completeness of Archimedean type fuzzy metric spaces
Advances in computational intelligence Pub Date: 2021-12-17 DOI: 10.1007/s43674-021-00014-8
J. Martínez-Moreno, D. Gopal, Vladimir Rakočević, A. S. Ranadive, R. P. Pant
Abstract: This paper deals with fixed-point questions concerning Caristi type mappings, introduced by Abbasi and Golshan (Kybernetika 52:929-942, 2016), in fuzzy metric spaces. We enlarge this class of mappings and prove a completeness characterization of the corresponding fuzzy metric space. The paper includes a comprehensive set of examples showing the generality of our results, and an open question.
Citations: 1
Feature selection based on min-redundancy and max-consistency
Advances in computational intelligence Pub Date: 2021-12-17 DOI: 10.1007/s43674-021-00021-9
Yanting Guo, Meng Hu, Eric C. C. Tsang, Degang Chen, Weihua Xu
Abstract: Feature selection can effectively eliminate irrelevant or redundant features without changing feature semantics, improving learning performance and reducing training time. Most existing rough-set-based feature selection methods eliminate features redundant with respect to the decision and features redundant with respect to each other in separate steps, which greatly increases the search time for a feature subset. To remove redundant features quickly, we define a series of feature evaluation functions that consider both the consistency between features and decisions and the redundancy between features, and propose a novel feature selection method based on min-redundancy and max-consistency. First, we define the consistency of features with respect to decisions, and the redundancy between features, from neighborhood information granules. Then we propose a combined criterion to measure feature importance and design a feature selection algorithm based on minimal-redundancy-maximal-consistency (mRMC). Finally, on UCI data sets, mRMC is compared with three other popular neighborhood-based feature selection algorithms in terms of classification accuracy, number of selected features, and running time. The experimental comparison shows that mRMC can quickly delete redundant features and select useful ones while maintaining classification accuracy.
Citations: 1
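The combined "consistency minus redundancy" criterion can be sketched with a greedy loop. This is a generic stand-in, not the paper's method: mRMC scores features with neighborhood information granules, whereas here discrete mutual information plays the role of both the consistency term (feature vs. decision) and the redundancy term (feature vs. already-selected features), mRMR-style.

```python
import numpy as np

def mutual_info(a, b):
    """Mutual information of two discrete 1-D arrays (natural log)."""
    n = len(a)
    joint = {}
    for x, y in zip(a, b):
        joint[(x, y)] = joint.get((x, y), 0) + 1
    pa = {x: np.mean(a == x) for x in set(a)}
    pb = {y: np.mean(b == y) for y in set(b)}
    return sum((c / n) * np.log((c / n) / (pa[x] * pb[y]))
               for (x, y), c in joint.items())

def greedy_select(X, y, k):
    """Greedy max-consistency / min-redundancy selection (hypothetical
    sketch; MI stands in for the paper's neighborhood-granule measures)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        def score(j):
            relevance = mutual_info(X[:, j], y)        # consistency proxy
            redundancy = (np.mean([mutual_info(X[:, j], X[:, s])
                                   for s in selected])
                          if selected else 0.0)        # overlap with chosen
            return relevance - redundancy              # combined criterion
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Scoring both terms inside one pass is what lets a method like this avoid the two separate elimination stages criticized in the abstract.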
A hybrid monotone decision tree model for interval-valued attributes
Advances in computational intelligence Pub Date: 2021-12-17 DOI: 10.1007/s43674-021-00016-6
Jiankai Chen, Zhongyan Li, Xin Wang, Junhai Zhai
Abstract: Existing monotonic decision tree algorithms rely on a linearly ordered constraint: certain attributes, called monotonic attributes, are monotonically consistent with the decision, while the others are non-monotonic. In practice, monotonic and non-monotonic attributes coexist in most classification tasks, and some attribute values are even given as interval numbers. In this paper, we propose a fuzzy rank-inconsistent rate based on probability degree to judge the monotonicity of interval numbers. Furthermore, we devise a hybrid model composed of monotonic and non-monotonic attributes to construct a mixed monotone decision tree for interval-valued data. Experiments on artificial and real-world data sets show that the proposed hybrid model is effective.
Citations: 6
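Comparing interval numbers by probability degree, as the abstract describes, needs a rule for how likely one interval is to dominate another. The sketch below uses one common possibility-degree definition; this is an assumption, since the paper's exact formula is not given here.

```python
def possibility_degree(a, b):
    """Degree to which interval a = (a_lo, a_hi) is >= b = (b_lo, b_hi).

    One common definition (an assumption, not necessarily the paper's):
    the position of a's upper end above b's lower end, normalized by the
    combined interval length and clipped to [0, 1].
    """
    a_lo, a_hi = a
    b_lo, b_hi = b
    length = (a_hi - a_lo) + (b_hi - b_lo)
    if length == 0:                       # both intervals are points
        return 1.0 if a_lo >= b_lo else 0.0
    return min(max((a_hi - b_lo) / length, 0.0), 1.0)
```

A rank-inconsistent rate can then count attribute/decision pairs whose probability degrees disagree, which is the quantity such a tree builder would minimize at each split.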
Toward durable representations for continual learning
Advances in computational intelligence Pub Date: 2021-12-17 DOI: 10.1007/s43674-021-00022-8
Alaa El Khatib, Fakhri Karray
Abstract: Continual learning models are known to suffer from catastrophic forgetting. Existing regularization methods counter forgetting by penalizing large changes to learned parameters. A significant downside of these methods, however, is that by effectively freezing model parameters they gradually suspend the model's capacity to learn new tasks. In this paper, we explore an alternative approach that aims to circumvent this downside. In particular, we ask: instead of forcing continual learning models to remember the past, can we modify the learning process from the start so that the learned representations are less susceptible to forgetting? To this end, we explore multiple methods that could encourage durable representations. We demonstrate empirically that unsupervised auxiliary tasks significantly reduce parameter re-optimization across tasks, and consequently reduce forgetting, without explicitly penalizing it. Moreover, we propose a distance metric to track internal model dynamics across tasks, and use it to gain insight into the workings of our approach as well as other recently proposed methods.
Citations: 0
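The "parameter re-optimization across tasks" the abstract measures can be made concrete with a simple drift metric. This is a hypothetical stand-in for the paper's distance metric, not its definition: it reports, per layer, how far the weights move between two task checkpoints relative to their starting norm; durable representations should keep these values small.

```python
import numpy as np

def drift(params_before, params_after):
    """Per-layer relative L2 drift between two parameter snapshots.

    `params_before` / `params_after` map layer names to weight arrays
    saved before and after training on a new task (names and layout
    are illustrative assumptions).
    """
    return {name: float(np.linalg.norm(params_after[name] - w) /
                        (np.linalg.norm(w) + 1e-12))
            for name, w in params_before.items()}
```

Tracking this dictionary after each task makes visible which layers a method like an unsupervised auxiliary task actually stabilizes.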
Social influence-based personal latent factors learning for effective recommendation
Advances in computational intelligence Pub Date: 2021-12-17 DOI: 10.1007/s43674-021-00019-3
Yunhe Wei, Huifang Ma, Ruoyi Zhang
Abstract: Social recommendation has become an important technique on various online commerce platforms; it aims to predict user preference from the social network and the interaction network. Because it naturally integrates social information with interaction structure, social recommendation has proven powerful against data sparsity and cold-start problems. Although some existing methods are effective, two insights are often neglected. First, beyond explicit connections, social information contains implicit connections, e.g., indirect social relations, which can effectively improve recommendation quality when users have only few direct social relations. Second, the strength of social influence differs between users: users place different degrees of trust in different friends. These insights motivate us to propose a novel social recommendation model, SIER (Social Influence-based Effective Recommendation), which incorporates interaction information and social information into personal latent factor learning for social influence-based recommendation. Specifically, user preferences are captured from behavior history and social relations, i.e., user latent factors are shared between the interaction network and the social network. In particular, we utilize an overlapping community detection method to capture the implicit relations in the social network. Extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed method.
Citations: 3
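The idea of sharing user latent factors between the interaction network and the social network can be sketched as matrix factorization with a social regularizer. This is a generic sketch of the family SIER belongs to, not the paper's model: a second loss term pulls each user's factors toward the influence-weighted average of their friends', so trusted friends shape the latent space more strongly.

```python
import numpy as np

def social_mf(R, S, k=8, lam=0.1, beta=0.1, lr=0.01, epochs=200, seed=0):
    """Matrix factorization with a social-influence regularizer (sketch).

    R: (n_users, n_items) interactions, 0 = unobserved.
    S: (n_users, n_users) row-normalized trust strengths.
    k, lam, beta, lr, epochs are illustrative hyperparameters.
    """
    rng = np.random.default_rng(seed)
    n_u, n_i = R.shape
    U = 0.1 * rng.standard_normal((n_u, k))   # user latent factors
    V = 0.1 * rng.standard_normal((n_i, k))   # item latent factors
    mask = R > 0
    for _ in range(epochs):
        err = mask * (R - U @ V.T)            # error on observed entries
        social = U - S @ U                    # gap to friends' weighted mean
        U += lr * (err @ V - lam * U - beta * social)
        V += lr * (err.T @ U - lam * V)
    return U, V
```

In a full SIER-style system the rows of S would come from detected (possibly overlapping) communities and indirect relations rather than direct trust links alone.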
Missing label imputation through inception-based semi-supervised ensemble learning
Advances in computational intelligence Pub Date: 2021-12-17 DOI: 10.1007/s43674-021-00015-7
Hufsa Khan, Han Liu, Chao Liu
Abstract: In classification tasks, unlabeled data introduce uncertainty into the learning process, which may degrade performance. In this paper, we propose a novel semi-supervised inception neural network ensemble-based architecture for missing label imputation. The main idea of the proposed architecture is to use smaller ensembles within a larger ensemble to combine diverse ways of imputing missing labels and internally transforming feature representations, towards enhancing prediction accuracy. After the missing labels of the unlabeled data are imputed, the human-labeled data and the data with imputed labels are used together as the training set for learning credible classifiers. Meanwhile, we discuss why the proposed approach is more effective than traditional ensemble learning approaches. The approach is evaluated on well-known benchmark data sets, and the experimental results show its effectiveness. In addition, the approach is validated by statistical analysis using the Wilcoxon signed rank test, and the results indicate statistical significance of the performance improvement in comparison with other methods.
Citations: 3
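The validation step the abstract mentions, a Wilcoxon signed rank test over paired per-dataset results, looks like this in practice. The accuracy numbers below are purely illustrative, not the paper's results.

```python
from scipy import stats

# Paired accuracies of two classifiers on ten benchmark data sets
# (made-up numbers for illustration only).
ours     = [0.91, 0.88, 0.95, 0.85, 0.90, 0.93, 0.87, 0.92, 0.89, 0.94]
baseline = [0.88, 0.86, 0.93, 0.84, 0.87, 0.91, 0.85, 0.90, 0.88, 0.92]

# One-sided test: are the paired differences shifted above zero?
stat, p = stats.wilcoxon(ours, baseline, alternative="greater")
```

The test is a good fit here because it compares the same method pair across data sets without assuming the accuracy differences are normally distributed.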
Maximizing bi-mutual information of features for self-supervised deep clustering
Advances in computational intelligence Pub Date: 2021-12-16 DOI: 10.1007/s43674-021-00012-w
Jiacheng Zhao, Junfen Chen, Xiangjie Meng, Junhai Zhai
Abstract: Self-supervised learning based on mutual information makes good use of the classification models and label information produced by clustering tasks to train network parameters, then updates the downstream clustering assignment by maximizing mutual information between label information. Such methods have attracted increasing attention and made good progress, but a large gap to supervised learning remains, especially on challenging image datasets. To this end, we propose a self-supervised deep clustering method that maximizes two mutual information terms (bi-MIM-SSC), with a deep convolutional network as the feature encoder. The first term maximizes mutual information between output-feature pairs, importing more semantic meaning into the output features. The second term maximizes mutual information between an input image and the feature the encoder generates for it, keeping as much useful information of the original image in latent space as possible. Furthermore, pre-training is carried out to further enhance the representation ability of the encoder, and auxiliary over-clustering is added to the clustering network. The performance of bi-MIM-SSC is compared with that of other clustering methods on the CIFAR10, CIFAR100, and STL10 datasets. Experimental results demonstrate that bi-MIM-SSC has better feature representation ability and provides better clustering results.
Citations: 0
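The first term, mutual information between output-feature pairs, can be estimated directly from paired soft cluster assignments. The sketch below follows the generic IIC-style estimate rather than the paper's exact loss: the averaged outer product of the two views' softmax outputs approximates the joint distribution over cluster pairs, and maximizing its MI rewards assignments that are both confident and balanced.

```python
import numpy as np

def pairwise_mutual_info(p1, p2, eps=1e-12):
    """MI between soft cluster assignments of two views of the same data.

    p1, p2: (n, k) rows of cluster probabilities (e.g. softmax outputs
    for an image and its augmented copy); eps guards the logs.
    """
    joint = p1.T @ p2 / p1.shape[0]        # (k, k) joint estimate
    joint = (joint + joint.T) / 2          # symmetrize the two views
    pi = joint.sum(axis=1, keepdims=True)  # marginal of view 1
    pj = joint.sum(axis=0, keepdims=True)  # marginal of view 2
    return float((joint * (np.log(joint + eps)
                           - np.log(pi + eps)
                           - np.log(pj + eps))).sum())
```

A training loop would negate this value as a loss; the over-clustering head mentioned in the abstract would simply apply the same objective with a larger k.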