{"title":"Light sensor based covert channels on mobile devices","authors":"Mila Dalla Preda , Claudia Greco , Michele Ianni , Francesco Lupia , Andrea Pugliese","doi":"10.1016/j.ins.2024.121581","DOIUrl":"10.1016/j.ins.2024.121581","url":null,"abstract":"<div><div>The widespread adoption of light sensors in mobile devices has enabled functionalities that range from automatic brightness control to environmental monitoring. However, these sensors also present significant security and privacy risks within the Android ecosystem due to unrestricted access permissions. This paper explores how light sensor data can be used for covert communication through a novel, light-based out-of-band channel. We develop two approaches–<span>Baseline</span> and <span>ResetBased</span>–that use luminance values to encode and decode data. These methods tackle challenges that arise from data variability and the unpredictability of sensor event timings. To enhance data transmission accuracy, our methods employ a novel strategy for selecting luminance reference sequences and leverage mean-squared-error-based distance for decoding. Experimental results validate the effectiveness of our approaches and their potential for real-world applications.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"690 ","pages":"Article 121581"},"PeriodicalIF":8.1,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
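The mean-squared-error-based decoding step mentioned in the abstract can be illustrated with a toy sketch: a hypothetical sender emits one burst of luminance readings per bit, and the receiver decodes each burst by picking the reference sequence with minimum MSE. The reference levels, burst length, and noise model below are illustrative assumptions, not the paper's actual Baseline or ResetBased designs.

```python
import numpy as np

# Hypothetical luminance reference sequences (lux) for bits 0 and 1;
# the paper's reference-sequence selection strategy is more elaborate.
REFERENCES = {0: np.array([10.0, 10.0, 10.0]),     # "dark" burst encodes bit 0
              1: np.array([200.0, 200.0, 200.0])}  # "bright" burst encodes bit 1

def encode(bits, noise_std=5.0, rng=None):
    """Emit one noisy luminance burst per bit (simulating sensor readings)."""
    rng = rng or np.random.default_rng(0)
    return [REFERENCES[b] + rng.normal(0.0, noise_std, 3) for b in bits]

def decode(bursts):
    """Map each burst to the reference with minimum mean squared error."""
    out = []
    for burst in bursts:
        mse = {b: float(np.mean((burst - ref) ** 2)) for b, ref in REFERENCES.items()}
        out.append(min(mse, key=mse.get))
    return out

bits = [1, 0, 1, 1, 0]
assert decode(encode(bits)) == bits
```

With reference levels this far apart, moderate sensor noise never flips a bit; the real channel must instead cope with ambient-light variability and irregular sensor-event timing, which is what the paper's two approaches address.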
{"title":"Unveiling diagnostic information for type 2 diabetes through interpretable machine learning","authors":"Xiang Lv , Jiesi Luo , Yonglin Zhang , Hui Guo , Ming Yang , Menglong Li , Qi Chen , Runyu Jing","doi":"10.1016/j.ins.2024.121582","DOIUrl":"10.1016/j.ins.2024.121582","url":null,"abstract":"<div><div>The interpretability of disease prediction models is often crucial for their trustworthiness and usability among medical practitioners. Existing methods in interpretable artificial intelligence improve model transparency but fall short in identifying precise, disease-specific primal information. In this work, an interpretable deep learning-based algorithm called the data space landmark refiner was developed, which not only enhances both global interpretability and local interpretability but also reveals the intrinsic information of the data distribution. Using the proposed method, a type 2 diabetes mellitus diagnostic model with high interpretability was constructed on the basis of the electronic health records from two hospitals. Moreover, effective diagnostic information was directly derived from the model’s internal parameters, demonstrating strong alignment with current clinical knowledge. Compared with conventional interpretable machine learning approaches, the proposed method offered more precise and specific interpretability, increasing clinical practitioners’ trust in machine learning-supported diagnostic models.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"690 ","pages":"Article 121582"},"PeriodicalIF":8.1,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resampling approach for imbalanced data classification based on class instance density per feature value intervals","authors":"Fei Wang , Ming Zheng , Kai Ma , Xiaowen Hu","doi":"10.1016/j.ins.2024.121570","DOIUrl":"10.1016/j.ins.2024.121570","url":null,"abstract":"<div><div>In practical applications, imbalanced datasets significantly degrade the classification performance of machine learning models. However, most conventional resampling approaches fall short in adequately addressing the varying contributions of individual features to the classification model. In response to this defect, this study introduces three novel resampling approaches. The first approach, Oversampling based on class instance density per feature value intervals (OCF), focuses on augmenting the dataset. The second approach, Undersampling based on class instance density per feature value intervals (UCF), seeks to reduce dataset size. The third approach, Hybrid sampling based on class instance density per feature value intervals (HSCF), performs oversampling and undersampling simultaneously. These approaches categorize feature values into different intervals based on their varying information content, calculate class instance densities within these intervals, and generate feature values in intervals with high discriminative information. Subsequently, these generated features are combined to synthesize minority class data, effectively achieving oversampling. Additionally, the study combines class instance density and feature importance to identify majority class data at the classification boundary with minimal contribution and subsequently executes undersampling. The flexibility to adjust sampling ratios and the integration of OCF and UCF enable the implementation of hybrid sampling. Finally, experiments on the benchmark datasets demonstrate the superiority and effectiveness of the proposed method. Furthermore, it is observed that the method proposed in this study enhances the feature-dividing capability of decision tree classifiers. Hence, the best results are achieved when working in synergy with decision tree classifiers, leading to the most significant improvements in classification performance. All codes have been published at <span><span>https://github.com/Wangfeiopen/HSCF</span></span>.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"692 ","pages":"Article 121570"},"PeriodicalIF":8.1,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142704077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
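The interval-density idea behind the oversampling step can be sketched minimally: per feature, bin the minority-class values, then draw new values preferentially from dense bins and recombine the features into synthetic rows. This is only a simplified stand-in, not the authors' exact OCF algorithm (which also weighs the discriminative information of each interval); the function name and bin count below are illustrative.

```python
import numpy as np

def interval_oversample(X_min, n_new, n_bins=5, rng=None):
    """Toy interval-density oversampling: histogram each feature of the
    minority class, sample bins proportionally to their instance density,
    draw uniform values inside the chosen bins, and recombine features
    into synthetic minority rows."""
    rng = rng or np.random.default_rng(42)
    n, d = X_min.shape
    synth = np.empty((n_new, d))
    for j in range(d):
        counts, edges = np.histogram(X_min[:, j], bins=n_bins)
        probs = counts / counts.sum()                    # density per interval
        bins = rng.choice(n_bins, size=n_new, p=probs)   # dense bins chosen more often
        synth[:, j] = rng.uniform(edges[bins], edges[bins + 1])
    return synth

X_min = np.array([[0.1, 1.0], [0.2, 1.1], [0.15, 0.9], [0.9, 5.0]])
new_rows = interval_oversample(X_min, n_new=6)
assert new_rows.shape == (6, 2)
```

Because features are sampled independently here, cross-feature correlations are not preserved; combining density with feature importance, as the paper does for its undersampling step, is one way to mitigate that.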
{"title":"Discounted fully probabilistic design of decision rules","authors":"Miroslav Kárný, Soňa Molnárová","doi":"10.1016/j.ins.2024.121578","DOIUrl":"10.1016/j.ins.2024.121578","url":null,"abstract":"<div><div>Axiomatic fully probabilistic design (FPD) of optimal decision rules strictly extends the decision-making (DM) theory represented by Markov decision processes (MDP). This means that any MDP task can be approximated by an explicitly found FPD task, whereas many FPD tasks have no MDP equivalent. MDP and FPD model the closed loop — the coupling of an agent and its environment — via a joint probability density (pd) relating the involved random variables, referred to as behaviour. Unlike MDP, FPD quantifies the agent's aims and constraints by an <em>ideal pd</em>. The ideal pd is high on desired behaviours, small on undesired behaviours and zero on forbidden ones. FPD selects the optimal decision rules as the minimiser of the Kullback-Leibler divergence of the closed-loop-modelling pd to its ideal twin. The choice of this proximity measure follows from the FPD axiomatics.</div><div>MDP minimises the expected total loss, which is usually the sum of discounted partial losses. The discounting reflects the decreasing importance of future losses. It also diminishes the influence of errors caused by:</div><div>• the imperfection of the employed environment model;</div><div>• roughly-expressed aims;</div><div>• the approximate learning and decision-rules design.</div><div>The established FPD cannot currently account for these important features. The paper elaborates the missing discounted version of FPD. This non-trivial filling of the gap in FPD also employs an extension of dynamic programming, which is of independent interest.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"690 ","pages":"Article 121578"},"PeriodicalIF":8.1,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
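FPD's selection principle, choosing the decision rule whose closed-loop pd has minimum Kullback-Leibler divergence to the ideal pd, can be sketched on a discrete behaviour space. The two candidate rules and all probability values below are invented for illustration; the paper's contribution (the discounted extension) is not reproduced here.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete pds (q > 0 wherever p > 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Ideal pd over three behaviour classes: desired / tolerated / undesired.
ideal = [0.7, 0.25, 0.05]

# Hypothetical closed-loop pds induced by two candidate decision rules.
closed_loop = {"rule_a": [0.5, 0.3, 0.2],
               "rule_b": [0.65, 0.3, 0.05]}

# FPD picks the rule whose closed-loop pd is KL-closest to the ideal pd.
best = min(closed_loop, key=lambda r: kl(closed_loop[r], ideal))
assert best == "rule_b"
```

Note the asymmetry of the divergence matters: FPD minimises D(closed-loop || ideal), so behaviours the ideal pd sets to zero are effectively forbidden (they would make the divergence infinite).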
{"title":"Evidence combination with multi-granularity belief structure for pattern classification","authors":"Kezhu Zuo , Xinde Li , Le Yu , Tao Shen , Yilin Dong , Jean Dezert","doi":"10.1016/j.ins.2024.121577","DOIUrl":"10.1016/j.ins.2024.121577","url":null,"abstract":"<div><div>Belief function (BF) theory provides a framework for effective modeling, quantifying uncertainty, and combining evidence, rendering it a potent tool for tackling uncertain decision-making problems. However, with the expansion of the frame of discernment, the increasing number of focal elements processed during the fusion procedure leads to a rapid increase in computational complexity, which limits the practical application of BF theory. To overcome this issue, a novel multi-granularity belief structure (MGBS) method was proposed in this study. The construction of MGBS reduced the number of focal elements and preserved crucial information in the basic belief assignment. This effectively reduced the computational complexity of fusion while ensuring the highest possible classification accuracy. We applied the proposed MGBS algorithm to a human activity recognition task and verified its effectiveness using the University of California, Irvine mHealth, PAMAP2, and Smartphone datasets.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"690 ","pages":"Article 121577"},"PeriodicalIF":8.1,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
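The combination step whose cost MGBS reduces can be illustrated with Dempster's rule, the classical belief-function combination; its cost grows with the number of focal elements, which is exactly what the multi-granularity structure trims. The frame, focal elements, and mass values below are invented for illustration, and the paper's MGBS construction itself is not reproduced.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two basic belief assignments (BBAs) whose focal
    elements are frozensets: multiply masses over all pairs, accumulate mass
    on non-empty intersections, and renormalise by the non-conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: BBAs cannot be combined")
    return {fe: w / (1.0 - conflict) for fe, w in combined.items()}

# Tiny activity-recognition-flavoured frame {walk, run}.
A, B = frozenset({"walk"}), frozenset({"run"})
m1 = {A: 0.6, A | B: 0.4}
m2 = {A: 0.5, B: 0.2, A | B: 0.3}
m = dempster_combine(m1, m2)
assert abs(sum(m.values()) - 1.0) < 1e-9
```

The pairwise product over focal elements is what makes naive combination blow up as the frame of discernment grows: with n focal elements per BBA the loop runs n² times, so reducing focal elements, as MGBS does, directly reduces fusion cost.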
{"title":"Decomposition of pseudo-uninorms with continuous underlying functions via ordinal sum","authors":"Juraj Kalafut , Andrea Mesiarová-Zemánková","doi":"10.1016/j.ins.2024.121573","DOIUrl":"10.1016/j.ins.2024.121573","url":null,"abstract":"<div><div>The decomposition of all pseudo-uninorms with continuous underlying functions, defined on the unit interval, via Clifford's ordinal sum is described. It is shown that each such pseudo-uninorm can be decomposed into representable and trivial semigroups, and special semigroups defined on two points, where the corresponding semigroup operation is the projection to one of the coordinates. Linear orders, for which the ordinal sum of such semigroups yields a pseudo-uninorm, are also characterized.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"690 ","pages":"Article 121573"},"PeriodicalIF":8.1,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving two-dimensional linear discriminant analysis with L1 norm for optimizing EEG signal","authors":"Bin Lu , Fuwang Wang , Junxiang Chen , Guilin Wen , Rongrong Fu","doi":"10.1016/j.ins.2024.121585","DOIUrl":"10.1016/j.ins.2024.121585","url":null,"abstract":"<div><div>Dimensionality reduction is a critical factor in processing high-dimensional datasets. The L1 norm-based Two-Dimensional Linear Discriminant Analysis (L1-2DLDA) is widely used for this purpose, but it remains sensitive to outliers and classes with large deviations, which deteriorates its performance. To address this limitation, the present study proposed Pairwise Sample Distance Two-Dimensional Linear Discriminant Analysis (PSD2DLDA), a novel method that modeled L1-2DLDA using pairwise sample distances. To improve computational effectiveness, this study also introduced a streamlined variant, Pairwise Class Mean Distance Two-Dimensional Linear Discriminant Analysis (PCD2DLDA), which was based on distances between class mean pairs. Different from previous studies, this study utilized the projected sub-gradient method to optimize these two improved methods. Meanwhile, this study explored the interrelationship, limitations, and applicability of these two improved methods. The comparative experimental results on three datasets validated the outstanding performance of PSD2DLDA and PCD2DLDA methods. In particular, PSD2DLDA exhibited superior robustness compared to PCD2DLDA. Furthermore, applying these two methods to optimize electroencephalogram (EEG) signals effectively enhanced the decoding accuracy of motor imagery neural patterns, which offered a promising strategy for optimizing EEG signal processing in brain-computer interface (BCI) applications.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"690 ","pages":"Article 121585"},"PeriodicalIF":8.1,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142538694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
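The projected sub-gradient optimization mentioned in the abstract can be sketched on a simplified, vector-input analogue of the L1 scatter-ratio criterion: ascend a sub-gradient of the between-class/within-class absolute-deviation ratio, then project the direction back onto the unit sphere. This is only a stand-in under stated assumptions, not PSD2DLDA or PCD2DLDA themselves (which operate on matrix inputs with pairwise distances), and the numerical finite-difference sub-gradient is an illustrative shortcut.

```python
import numpy as np

def l1_lda_criterion(w, X, y):
    """L1 (absolute-deviation) Fisher-style ratio: between-class over
    within-class scatter of the 1-D projections X @ w."""
    mu = X.mean(axis=0)
    between = within = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * abs((Xc.mean(axis=0) - mu) @ w)
        within += np.abs((Xc - Xc.mean(axis=0)) @ w).sum()
    return between / within

def subgradient_ascent(X, y, steps=100, lr=0.05, seed=0):
    """Projected sub-gradient ascent: take a (numerical) sub-gradient step on
    the criterion, then project w back onto the unit sphere."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    eye = np.eye(len(w))
    for _ in range(steps):
        g = np.array([(l1_lda_criterion(w + 1e-5 * e, X, y)
                       - l1_lda_criterion(w - 1e-5 * e, X, y)) / 2e-5
                      for e in eye])
        w = w + lr * g
        w /= np.linalg.norm(w)   # projection onto the unit sphere
    return w
```

On two classes separated along one feature, the criterion is far larger for the separating direction than for an orthogonal one, which is what the ascent exploits; the unit-norm projection is what keeps the ratio well-scaled between steps.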
{"title":"Efficiency analysis in bi-level on fuzzy input and output","authors":"Kh. Ghaziyani , F. Hosseinzadeh Lotfi , Sohrab Kordrostami , Alireza Amirteimoori","doi":"10.1016/j.ins.2024.121551","DOIUrl":"10.1016/j.ins.2024.121551","url":null,"abstract":"<div><div>To enhance the conventional framework of data envelopment analysis (DEA), a novel hybrid bi-level model is proposed, integrating fuzzy logic with triangular fuzzy numbers to effectively address data uncertainty. This model innovatively departs from the traditional DEA ‘black box’ approach by incorporating inter-organizational relationships and the internal dynamics of decision-making units (DMUs). Utilizing a modified Russell’s method, it provides a nuanced efficiency analysis in scenarios of ambiguous data. The study thereby aims to enhance the accuracy and applicability of DEA in uncertain data environments. Validated through a case study involving 15 branches of a private Iranian bank, the model demonstrates improved accuracy in efficiency assessments and paves the way for future research on uncertainty management in operational systems. The results indicated that, among the 15 branches analyzed for the year 2022, branches 1, 10, and 11 demonstrated leader-level efficiency, while branch 3 exhibited follower-level efficiency, and branch 1 achieved overall efficiency. These branches attained an efficiency rating of <span><math><mrow><msup><mi>E</mi><mrow><mo>+</mo><mo>+</mo></mrow></msup></mrow></math></span>, signifying a high level of efficiency within the model’s parameters.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"690 ","pages":"Article 121551"},"PeriodicalIF":8.1,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GKF-PUAL: A group kernel-free approach to positive-unlabeled learning with variable selection","authors":"Xiaoke Wang , Rui Zhu , Jing-Hao Xue","doi":"10.1016/j.ins.2024.121574","DOIUrl":"10.1016/j.ins.2024.121574","url":null,"abstract":"<div><div>Variable selection is important for classification of data with many irrelevant predicting variables, but it has not yet been well studied in positive-unlabeled (PU) learning, where classifiers have to be trained without labelled-negative instances. In this paper, we propose a group kernel-free PU classifier with asymmetric loss (GKF-PUAL) to achieve quadratic PU classification with group-lasso regularisation embedded for variable selection. We also propose a five-block algorithm to solve the optimization problem of GKF-PUAL. Our experimental results reveal the superiority of GKF-PUAL in both PU classification and variable selection, improving the baseline PUAL by more than 10% in F1-score across four benchmark datasets and removing over 70% of irrelevant variables on six benchmark datasets. The code for GKF-PUAL is at <span><span>https://github.com/tkks22123/GKF-PUAL</span></span>.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"690 ","pages":"Article 121574"},"PeriodicalIF":8.1,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An enhanced competitive swarm optimizer with strongly robust sparse operator for large-scale sparse multi-objective optimization problem","authors":"Qinghua Gu , Liyao Rong , Dan Wang , Di Liu","doi":"10.1016/j.ins.2024.121569","DOIUrl":"10.1016/j.ins.2024.121569","url":null,"abstract":"<div><div>In the real world, the decision variables of large-scale sparse multi-objective problems are high-dimensional, and most Pareto optimal solutions are sparse. Because the balance of such algorithms is difficult to control, these problems are challenging to handle in general. Therefore, an Enhanced Competitive Swarm Optimizer with Strongly Robust Sparse Operator (SR-ECSO) is proposed. First, strongly robust sparse functions, which help particles in the population achieve better sparsity in the decision space, are applied to the high-dimensional decision variables. Second, an adaptive random perturbation operator is introduced to maintain the diversity of sparse solutions and enhance the convergence balance of the algorithm. Finally, the state of the particles is updated using a swarm optimizer to improve population competitiveness. To verify the proposed algorithm, we tested it on eight large-scale sparse benchmark problems, with the decision variables set in three groups of 100, 500, and 1000 dimensions. Experimental results show that the algorithm is promising for solving large-scale sparse optimization problems.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"690 ","pages":"Article 121569"},"PeriodicalIF":8.1,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}