{"title":"Selective bin model for reversible data hiding in encrypted images","authors":"Ruchi Agarwal, Sara Ahmed, Manoj Kumar","doi":"10.1007/s10044-024-01220-z","DOIUrl":"https://doi.org/10.1007/s10044-024-01220-z","url":null,"abstract":"<p>In tandem with the fast-growing technology, the issue of secure data transmission over the Internet has achieved increasing importance. In digital media, enclosing data in images is one of the most common methods for communicating confidential information. A novel reversible data hiding in the encrypted images scheme based on selective bin models is proposed in this paper. The scheme focuses on enhancing the embedding capacity while ensuring the security of images with the help of encryption and the proposed data hiding process. For data embedding, lossless compression is utilized and the image is classified into three bins. Then, marker bits are assigned to these bins for distinguishing between embeddable and non-embeddable regions. The proposed method shows a satisfactory embedding rate for smooth images as well as complex ones due to its selective bin approach. Also, the method is separable in nature, i.e., data extraction and image recovery can be performed independently. Furthermore, the experimental results demonstrate the strategy’s effectiveness when compared with others.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"32 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140011526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subdomain adaptation via correlation alignment with entropy minimization for unsupervised domain adaptation","authors":"Obsa Gilo, Jimson Mathew, Samrat Mondal, Rakesh Kumar Sandoniya","doi":"10.1007/s10044-024-01232-9","DOIUrl":"https://doi.org/10.1007/s10044-024-01232-9","url":null,"abstract":"<p>Unsupervised domain adaptation (UDA) is a well-explored domain in transfer learning, finding applications across various real-world scenarios. The central challenge in UDA lies in addressing the domain shift between training (source) and testing (target) data distributions. This study focuses on image classification tasks within UDA, where label spaces are shared, but the target domain lacks labeled samples. Our primary objective revolves around mitigating the domain discrepancies between the source and target domains, ultimately facilitating robust generalization in the target domains. Domain adaptation techniques have traditionally concentrated on the global feature distribution to minimize disparities. However, these methods often need to pay more attention to crucial, domain-specific subdomain information within identical classification categories, challenging achieving the desired performance without fine-grained data. To tackle these challenges, we propose a unified framework, Subdomain Adaptation via Correlation Alignment with Entropy Minimization, for unsupervised domain adaptation. Our approach incorporates three advanced techniques: (1) Local Maximum Mean Discrepancy, which aligns the means of local feature subsets, capturing intrinsic subdomain alignments often missed by global alignment, (2) correlation alignment aimed at minimizing the correlation between domain distributions, and (3) entropy regularization applied to target domains to encourage low-density separation between categories. We validate our proposed methods through rigorous experimental evaluations and ablation studies on standard benchmark datasets. The results consistently demonstrate the superior performance of our approaches compared to existing state-of-the-art domain adaptation methods.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"253 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140009522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonlinear dimensionality reduction with q-Gaussian distribution","authors":"Motoshi Abe, Yuichiro Nomura, Takio Kurita","doi":"10.1007/s10044-024-01210-1","DOIUrl":"https://doi.org/10.1007/s10044-024-01210-1","url":null,"abstract":"<p>In recent years, the dimensionality reduction has become more important as the number of dimensions of data used in various tasks such as regression and classification has increased. As popular nonlinear dimensionality reduction methods, t-distributed stochastic neighbor embedding (t-SNE) and uniform manifold approximation and projection (UMAP) have been proposed. However, the former outputs only one low-dimensional space determined by the t-distribution and the latter is difficult to control the distribution of distance between each pair of samples in low-dimensional space. To tackle these issues, we propose novel t-SNE and UMAP extended by q-Gaussian distribution, called q-Gaussian-distributed stochastic neighbor embedding (q-SNE) and q-Gaussian-distributed uniform manifold approximation and projection (q-UMAP). The q-Gaussian distribution is a probability distribution derived by maximizing the tsallis entropy by escort distribution with mean and variance, and a generalized version of Gaussian distribution with a hyperparameter q. Since the shape of the q-Gaussian distribution can be tuned smoothly by the hyperparameter q, q-SNE and q-UMAP can in- tuitively derive different embedding spaces. To show the quality of the proposed method, we compared the visualization of the low-dimensional embedding space and the classification accuracy by k-NN in the low-dimensional space. Empirical results on MNIST, COIL-20, OliverttiFaces and FashionMNIST demonstrate that the q-SNE and q-UMAP can derive better embedding spaces than t-SNE and UMAP.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"29 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140009364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information theory divergences in principal component analysis","authors":"Eduardo K. Nakao, Alexandre L. M. Levada","doi":"10.1007/s10044-024-01215-w","DOIUrl":"https://doi.org/10.1007/s10044-024-01215-w","url":null,"abstract":"<p>The metric learning area studies methodologies to find the most appropriate distance function for a given dataset. It was shown that dimensionality reduction algorithms are closely related to metric learning because, in addition to obtaining a more compact representation of the data, such methods also implicitly derive a distance function that best represents similarity between a pair of objects in the collection. Principal Component Analysis is a traditional linear dimensionality reduction algorithm that is still widely used by researchers. However, its procedure faithfully represents outliers in the generated space, which can be an undesirable characteristic in pattern recognition applications. With this is mind, it was proposed the replacement of the traditional punctual approach by a contextual one based on the data samples neighborhoods. This approach implements a mapping from the usual feature space to a parametric feature space, where the difference between two samples is defined by the vector whose scalar coordinates are given by the statistical divergence between two probability distributions. It was demonstrated for some divergences that the new approach outperforms several existing dimensionality reduction algorithms in a wide range of datasets. Although, it is important to investigate the framework divergence sensitivity. Experiments using Total Variation, Renyi, Sharma-Mittal and Tsallis divergences are exhibited in this paper and the results evidence the method robustness.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"18 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140009358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A deep learning approach to censored regression","authors":"Vlad-Rareş Dănăilă, Cătălin Buiu","doi":"10.1007/s10044-024-01216-9","DOIUrl":"https://doi.org/10.1007/s10044-024-01216-9","url":null,"abstract":"<p>In censored regression, the outcomes are a mixture of known values (uncensored) and open intervals (censored), meaning that the outcome is either known with precision or is an unknown value above or below a known threshold. The use of censored data is widespread, and correctly modeling it is essential for many applications. Although the literature on censored regression is vast, deep learning approaches have been less frequently applied. This paper proposes three loss functions for training neural networks on censored data using gradient backpropagation: the tobit likelihood, the censored mean squared error, and the censored mean absolute error. We experimented with three variations in the tobit likelihood that arose from different ways of modeling the standard deviation variable: as a fixed value, a reparametrization, and an estimation using a separate neural network for heteroscedastic data. The tobit model yielded better results, but the other two losses are simpler to implement. Another central idea of our research was that data are often censored and truncated simultaneously. The proposed losses can handle simultaneous censoring and truncation at arbitrary values from above and below.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"52 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140009363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Local complex features learned by randomized neural networks for texture analysis","authors":"","doi":"10.1007/s10044-024-01230-x","DOIUrl":"https://doi.org/10.1007/s10044-024-01230-x","url":null,"abstract":"<h3>Abstract</h3> <p>Texture is a visual attribute largely used in many problems of image analysis. Many methods that use learning techniques have been proposed for texture discrimination, achieving improved performance over previous handcrafted methods. In this paper, we present a new approach that combines a learning technique and the complex network (CN) theory for texture analysis. This method takes advantage of the representation capacity of CN to model a texture image as a directed network and then uses the topological information of vertices to train a randomized neural network. This neural network has a single hidden layer and uses a fast learning algorithm to learn local CN patterns for texture characterization. Thus, we use the weights of the trained neural network to compose a feature vector. These feature vectors are evaluated in a classification experiment in four widely used image databases. Experimental results show a high classification performance of the proposed method compared to other methods, indicating that our approach can be used in many image analysis problems.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"148 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140009368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Big topic modeling based on a two-level hierarchical latent Beta-Liouville allocation for large-scale data and parameter streaming","authors":"Koffi Eddy Ihou, Nizar Bouguila","doi":"10.1007/s10044-024-01213-y","DOIUrl":"https://doi.org/10.1007/s10044-024-01213-y","url":null,"abstract":"<p>As an extension to the standard symmetric latent Dirichlet allocation topic model, we implement asymmetric Beta-Liouville as a conjugate prior to the multinomial and therefore propose the maximum a posteriori for latent Beta-Liouville allocation as an alternative to maximum likelihood estimator for models such as probabilistic latent semantic indexing, unigrams, and mixture of unigrams. Since most Bayesian posteriors, for complex models, are intractable in general, we propose a point estimate (the mode) that offers a much tractable solution. The maximum a posteriori hypotheses using point estimates are much easier than full Bayesian analysis that integrates over the entire parameter space. We show that the proposed maximum a posteriori reduces the three-level hierarchical latent Beta-Liouville allocation to two-level topic mixture as we marginalize out the latent variables. In each document, the maximum a posteriori provides a soft assignment and constructs dense expectation–maximization probabilities over each word (responsibilities) for accurate estimates. For simplicity, we present a stochastic at word-level online expectation–maximization algorithm as an optimization method for maximum a posteriori latent Beta-Liouville allocation estimation whose unnormalized reparameterization is equivalent to a stochastic collapsed variational Bayes. This implicit connection between the collapsed space and expectation–maximization-based maximum a posteriori latent Beta-Liouville allocation shows its flexibility and helps in providing alternative to model selection. We characterize efficiency in the proposed approach for its ability to simultaneously stream both large-scale data and parameters seamlessly. The performance of the model using predictive perplexities as evaluation method shows the robustness of the proposed technique with text document datasets.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"80 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140009371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of sparsity metrics and evolutionary algorithms applied for normalization of H&E histological images","authors":"Thaína A. Azevedo Tosta, Paulo Rogério de Faria, Leandro Alves Neves, Alessandro Santana Martins, Chetna Kaushal, Marcelo Zanchetta do Nascimento","doi":"10.1007/s10044-024-01218-7","DOIUrl":"https://doi.org/10.1007/s10044-024-01218-7","url":null,"abstract":"<p>Color variations in H&E histological images can impact the segmentation and classification stages of computational systems used for cancer diagnosis. To address these variations, normalization techniques can be applied to adjust the colors of histological images. Estimates of stain color appearance matrices and stain density maps can be employed to carry out these color adjustments. This study explores these estimates by leveraging a significant biological characteristic of stain mixtures, which is represented by a sparsity parameter. Computationally estimating this parameter can be accomplished through various sparsity measures and evolutionary algorithms. Therefore, this study aimed to evaluate the effectiveness of different sparsity measures and algorithms for color normalization of H&E-stained histological images. The results obtained demonstrated that the choice of different sparsity measures significantly impacts the outcomes of normalization. The sparsity metric <span>(l_{epsilon }^{0})</span> proved to be the most suitable for it. Conversely, the evolutionary algorithms showed little variations in the conducted quantitative analyses. Regarding the selection of the best evolutionary algorithm, the results indicated that particle swarm optimization with a population size of 250 individuals is the most appropriate choice.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"34 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140009576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hierarchical contrastive learning and color standardization for single image sand-dust removal","authors":"","doi":"10.1007/s10044-024-01231-w","DOIUrl":"https://doi.org/10.1007/s10044-024-01231-w","url":null,"abstract":"<h3>Abstract</h3> <p>Convolutional neural networks (CNN) have demonstrated impressive performance in reconstructing images in challenging environments. However, there is still a blank in the field of CNN-based sandstorm image processing. Existing sandstorm removal algorithms enhance degraded images by using prior knowledge, but often fail to address the issues of color cast, low contrast, and poor recognizability. To bridge the gap, we present a novel end-to-end sand-dust reconstruction network and incorporate hierarchical contrastive regularization and color constraint in the network. Based on contrastive learning, the hierarchical contrastive regularization reconstructs the sand-free image by pulling it closer to ’positive’ pairs while pushing it away from ’negative’ pairs in representation space. Furthermore, considering the specific characteristics of sandstorm images, we introduce the color constraint term as a sub-loss function to balance the hue, saturation, and value of the reconstructed image. Experimental results show that the proposed SdR-Net outperforms state-of-the-arts in both quantitative and qualitative.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"10 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140011571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Phase analysis simulating the Takeda method to obtain a 3D profile of SARS-CoV-2 cells","authors":"Jesús Arriaga-Hernández, Bolivia Cuevas-Otahola, José J. Oliveros-Oliveros, María M. Morín-Castillo","doi":"10.1007/s10044-024-01225-8","DOIUrl":"https://doi.org/10.1007/s10044-024-01225-8","url":null,"abstract":"<p>In this work, we propose a morphologic analysis by means of the construction of 3D models of the SARS-CoV-2 VP (viral particles) with algorithms in Python and Matlab based on the processing of frames. To this aim, we simulate the Takeda method to induce periodicity and apply the Fourier transform to obtain the phase of objects under analysis. To this aim, we analyze several research works focused on infected tissues by SARS-CoV-2 virus culture cells, highlighting the obtained medical images of the virus from microscopy and tomography. We optimize the results by performing image processing (segmentation and periodic noise removal) in order to obtain an accurate ROI (Region of Interest) segmentation containing only information on SARS-CoV-2 cells. We apply our algorithm to these images (3D tomographic medical images) to simulate the Takeda method (which also filters the image), considering the periodicity induced by us in the image to carry out a phase unwrapping process. Finally, we use the image phase to focus on the body, center (RNA, Protein M-N), and spikes (Protein S) of the SARS-CoV-2 cells to identify them as characteristic elements of the SARS-CoV-2 virion morphology to build a 3D model based only in the metadata of clinical studies on cell cultures. The latter results in the construction of a mathematical, physical, biological, and numerical model of the SARS-CoV-2 virion, a tool with volumes, or 3D non-speculative or animated models, based only on medical images (3D tomography) in clinical tests, faithful to the virus.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"24 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140009360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}