An automated Peak Group Analysis for vibrational spectra analysis
Mathias Sawall, Christoph Kubis, Benedict N. Leidecker, Lukas Prestin, Tomass Andersons, Martina Beese, Jan Hellwig, Robert Franke, Armin Börner, Klaus Neymeyr
Chemometrics and Intelligent Laboratory Systems 254 (2024), Article 105234. DOI: 10.1016/j.chemolab.2024.105234. Published 2024-09-23.

Abstract: Peak Group Analysis (PGA) is a multivariate curve resolution technique that attempts to extract single pure component spectra from time series of spectral mixture data. It requires that the mixture spectra consist of relatively sharp peaks, as is typical in IR and Raman spectroscopy. Starting from individual peaks, PGA constructs the associated pure component spectra as nonnegative linear combinations of the right singular vectors of the spectral data matrix.

This work presents an automated PGA (autoPGA) that starts with upstream peak detection applied to the time series of spectra, combining different window-based peak detection techniques with balanced peak acceptance criteria and peak grouping to deal with repeated detections. The next step is a single-spectrum-oriented PGA analysis, followed by a downstream correlation analysis that identifies pure component spectra occurring multiple times. AutoPGA provides a complete pure-component factorization of the matrix of measured data. The algorithm is applied to FT-IR data sets of various rhodium carbonyl complexes and of an equilibrium of iridium complexes.
{"title":"Data pre-processing for paper-based colorimetric sensor arrays","authors":"Bahram Hemmateenejad , Knut Baumann","doi":"10.1016/j.chemolab.2024.105237","DOIUrl":"10.1016/j.chemolab.2024.105237","url":null,"abstract":"<div><div>The responses of the paper-based colorimetric sensor arrays are typically recorded by an imaging device. The color values of the images are subjected to chemometrics data analysis, with a view to extract the relevant information. As is the case with data extracted from other analytical instruments, these data must undergo pre-processing prior to undergoing further analysis. This study represents the first comprehensive and systematic investigation into the impact of data pre-processing techniques on the quality of subsequent data analysis methods applied to imaging data collected from paper-based colorimetric sensor arrays. The use of color difference data (calculated by subtracting the images of the sensors before exposure from those after exposure) revealed that pre-treatment of the data was not a critical factor, although it could reduce the complexity of the model. For example, the number of principal components in the principal component-linear discriminant analysis model was reduced from eight (for data that had not been pre-processed) to three (for pre-processed data) to achieve the same level of accuracy (92 %). Nevertheless, the pivotal role of data pre-processing was elucidated through the examination of data sets collected immediately following exposure to the samples’ vapor. It was demonstrated that the use of an appropriate pre-processing method allows for the elimination or significant reduction of between-sensor variations, obviating the necessity for the inclusion of data from images taken prior to exposure. With regard to the objective of classification, the object pre-processing methods that demonstrated particular promise were mean (or median) centering, Pareto scaling and standard normal variate. To illustrate, in the analysis of volatile organic compounds by an array of metallic nanoparticles, the cross-validation classification accuracy of the unprocessed data, which was 70 %, increased to 95 % when unit variance scaling and range scaling were applied to objects and variables, respectively. In the calibration phase, the majority of pre-processing methods enhanced the quality of the regression models. Using suitable pre-processing methods for both objects and variables, eliminated the need for using the before exposing image of the CSAs.</div></div>","PeriodicalId":9774,"journal":{"name":"Chemometrics and Intelligent Laboratory Systems","volume":"254 ","pages":"Article 105237"},"PeriodicalIF":3.7,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142314962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multivariate SPC via sequential multiblock-PLS
Joan Borràs-Ferrís, Carl Duchesne, Alberto Ferrer
Chemometrics and Intelligent Laboratory Systems 254 (2024), Article 105236. DOI: 10.1016/j.chemolab.2024.105236. Published 2024-09-21.

Abstract: The sequential multi-block partial least squares (SMB-PLS) method is proposed for implementing a multivariate statistical process control scheme. This is of interest when the system is composed of several blocks that follow a sequential order and carry correlated information, for instance a block of raw material properties followed by a block of process variables that are manipulated according to those properties. SMB-PLS uses orthogonalization to separate the correlated information between blocks from the orthogonal variations. This allows the system to be monitored in different stages, considering only the remaining orthogonal part of each block. Thus, SMB-PLS increases interpretability and process understanding during model building (Phase I), since it provides deep insight into the nature of the system variations. Moreover, it prevents any special cause from propagating to subsequent blocks, enabling their use in model exploitation (Phase II). The methodology is applied to a real case study from a food manufacturing process.
{"title":"A scalable, data analytics workflow for image-based morphological profiles","authors":"Edvin Forsgren , Olivier Cloarec , Pär Jonsson , Gillian Lovell , Johan Trygg","doi":"10.1016/j.chemolab.2024.105232","DOIUrl":"10.1016/j.chemolab.2024.105232","url":null,"abstract":"<div><p>Cell Painting is an established community-based microscopy-assay platform that provides high-throughput, high-content data for biological readouts. In November 2022, the JUMP-Cell Painting Consortium released the largest publicly available Cell Painting dataset with CellProfiler features, comprising more than 2 billion cell images. This dataset is designed for predicting the activity and toxicity of 115k drug compounds, with the aim to make cell images as computable as genomes and transcriptomes. In this context, our paper introduces a scalable and computationally efficient data analytics workflow created to meet the needs of researchers. This data-driven workflow facilitates the comparison of drug treatment effects through significant and biologically relevant insights. The workflow consists of two parts: first, the Equivalence score (Eq. score), a straightforward yet sophisticated metric highlighting relevant deviations from negative controls based on cell image morphology; second, the scalability of the workflow, by utilizing the Eq. scores on a large scale to predict and classify the subtle morphological changes in cell image profiles. By doing so, we show classification improvements compared to using the raw CellProfiler features on the CPJUMP1-pilot dataset on three types of perturbations.</p><p>We hope that our workflow’s contributions will enhance drug screening efficiency and streamline the drug development process. As this process is resource-intensive, every incremental improvement is valuable. Through our collective efforts in advancing the understanding of high-throughput image-based data, we aim to reduce both the time and cost of developing new, life-saving treatments.</p></div>","PeriodicalId":9774,"journal":{"name":"Chemometrics and Intelligent Laboratory Systems","volume":"254 ","pages":"Article 105232"},"PeriodicalIF":3.7,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0169743924001722/pdfft?md5=8447cc5a34c516ef2e46efef43419f28&pid=1-s2.0-S0169743924001722-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142270415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward effective SVM sample reduction based on fuzzy membership functions","authors":"Tinghua Wang, Daili Zhang, Hanming Liu","doi":"10.1016/j.chemolab.2024.105233","DOIUrl":"10.1016/j.chemolab.2024.105233","url":null,"abstract":"<div><p>Support vector machine (SVM) is known for its good generalization performance and wide application in various fields. Despite its success, the learning efficiency of SVM decreases significantly originating from the assumption that the number of training samples increases rapidly. Consequently, the traditional SVM with standard optimization methods faces challenges such as excessive memory requirements and slow training speed, especially for large-scale training sets. To address this issue, this paper draws inspiration from the fuzzy support vector machine (FSVM). Considering that each sample has varying contributions to the decision plane, we propose an effective SVM sample reduction method based on the fuzzy membership function (FMF). This method uses FMF to calculate the fuzzy membership of each training sample. Training samples with low fuzzy memberships are then deleted. Specifically, we propose SVM sample reduction algorithms based on class center distance, kernel target alignment, centered kernel alignment, slack factor, entropy, and bilateral weighted FMF, respectively. Comprehensive experiments on UCI and KEEL datasets demonstrate that our proposed algorithms outperform other comparative methods in terms of accuracy, F-measure, and hinge-loss measures.</p></div>","PeriodicalId":9774,"journal":{"name":"Chemometrics and Intelligent Laboratory Systems","volume":"254 ","pages":"Article 105233"},"PeriodicalIF":3.7,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142239357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HEnsem_DTIs: A heterogeneous ensemble learning model for drug-target interactions prediction
Mohammad Reza Keyvanpour, Yasaman Asghari, Soheila Mehrmolaei
Chemometrics and Intelligent Laboratory Systems 253 (2024), Article 105224. DOI: 10.1016/j.chemolab.2024.105224. Published 2024-09-02.

Abstract: Drug-target interaction prediction is a major part of drug discovery. Unfortunately, producing new drugs is time-consuming and expensive, because it requires substantial human and laboratory resources. Computational methods have therefore been used to make predictions, avoiding the blind examination of all possible interactions. Experience with computational methods shows that no single algorithm is suitable for all applications; hence, ensemble learning is employed. Although various ensemble methods have been proposed, it is still not easy to find a suitable ensemble method for a particular dataset, and the base algorithms and their combination method are generally selected manually based on experience. Reinforcement learning is one way to meet this challenge. A high-dimensional feature space and class imbalance are further challenges of drug-target interaction prediction. This paper proposes HEnsem_DTIs, a heterogeneous ensemble model for predicting drug-target interactions that addresses these challenges with dimensionality reduction and concepts from recommender systems, and whose configuration is learned with reinforcement learning. Dimensionality reduction handles the high-dimensional feature space, while recommender systems improve under-sampling to mitigate class imbalance. Six datasets are used to evaluate the proposed model, and the results show that HEnsem_DTIs outperforms other models in this field. On the first dataset, 10-fold cross-validation yields a sensitivity of 0.896, specificity of 0.954, GM of 0.924, AUC of 0.930 and AUPR of 0.935.
{"title":"Random projection ensemble conformal prediction for high-dimensional classification","authors":"Xiaoyu Qian , Jinru Wu , Ligong Wei , Youwu Lin","doi":"10.1016/j.chemolab.2024.105225","DOIUrl":"10.1016/j.chemolab.2024.105225","url":null,"abstract":"<div><p>In classification problems, many models with superior performance fail to provide confidence estimates or intervals for each prediction. This lack of reliability poses risks in real-world applications, making these models difficult to trust. Conformal prediction, as distribution-free and model-free approaches with finite-sample coverage guarantee, have recently been widely used to construct prediction sets for classification models. However, traditional conformal prediction methods only produce set-valued results without specifying a definitive predicted class. Particularly in complex settings, these methods fail to assist models in effectively addressing challenges such as high dimensionality, resulting in ambiguous prediction sets with low statistical efficiency, i.e. the prediction sets contain many false classes. In this study, a novel Ensemble Conformal Prediction algorithm based on Random Projection and a designed voting strategy, RPECP, is developed to tackle these challenges. Initially, a procedure for selecting the approximately oracle random projections and classifiers is executed to best leverage the internal information and structure of the data. Subsequently, based on the approximately oracle random projections and underlying classifiers, conformal prediction is performed on new test samples in a lower-dimensional space, resulting in multiple independent prediction sets. Finally, an accurate predicted class and a precise prediction set with high coverage and statistical efficiency are produced through a designed voting strategy. Compared to several base classifiers, RPECP obtain higher classification accuracy; against other conformal prediction algorithms, it achieves less ambiguous prediction sets with fewer false classes while guaranteeing high coverage. For illustration, this paper demonstrates RPECP's superiority over other methods in four cases: two high-dimensional settings and two real-world datasets.</p></div>","PeriodicalId":9774,"journal":{"name":"Chemometrics and Intelligent Laboratory Systems","volume":"253 ","pages":"Article 105225"},"PeriodicalIF":3.7,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142147568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"G-CovSel: Covariance oriented variable clustering","authors":"Jean-Michel Roger , Alessandra Biancolillo , Bénédicte Favreau , Federico Marini","doi":"10.1016/j.chemolab.2024.105223","DOIUrl":"10.1016/j.chemolab.2024.105223","url":null,"abstract":"<div><p>Dimensionality reduction is an essential step in the processing of analytical chemistry data. When this reduction is carried out by variable selection, it can enable the identification of biochemical pathways. CovSel has been developed to meet this requirement, through a parsimonious selection of non-redundant variables. This article presents the g-CovSel method, which modifies the CovSel algorithm to produce highly complementary groups containing highly correlated variables. This modification requires the theoretical definition of the groups' construction and of the deflation of the data with respect to the selected groups. Two applications, on two extreme case studies, are presented. The first, based on near-infrared spectra related to four chemicals, demonstrates the relevance of the selected groups and the method's ability to handle highly correlated variables. The second, based on genomic data, demonstrates the method's ability to handle very highly multivariate data. Most of the groups formed can be interpreted from a functional point of view, making g-CovSel a tool of choice for biomarker identification in omics. Further work will be carried out to generalize g-CovSel to multi-block and multi-way data.</p></div>","PeriodicalId":9774,"journal":{"name":"Chemometrics and Intelligent Laboratory Systems","volume":"254 ","pages":"Article 105223"},"PeriodicalIF":3.7,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0169743924001631/pdfft?md5=52fb71b18968f61fe29df549f8fc05f7&pid=1-s2.0-S0169743924001631-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142151154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing quantitative 1H NMR model generalizability on honey from different years through partial least squares subspace and optimal transport based unsupervised domain adaptation
Peng Shan, Hongming Xiao, Xiang Li, Ruige Yang, Lin Zhang, Yuliang Zhao
Chemometrics and Intelligent Laboratory Systems 254 (2024), Article 105221. DOI: 10.1016/j.chemolab.2024.105221. Published 2024-08-28.

Abstract: Honey is a nourishing, natural food product that is widely favored by a diverse group of consumers. Proton nuclear magnetic resonance (¹H NMR) spectroscopy is a powerful tool for the quantitative analysis of honey and plays a crucial role in ensuring its quality. The technique relies on multivariate calibration models for the quantitative analysis of key compounds present in honey. However, maintaining consistent measurement conditions across different years is scarcely possible, which can significantly alter the distributions of training and test spectra and ultimately reduce the performance of predictive models. Unsupervised domain adaptation (UDA) methods have gained considerable attention for their ability to match distribution differences between labeled source spectra and unlabeled target spectra without costly annotation. To enhance the generalizability of quantitative models on honey from different years, we propose a UDA method called partial least squares subspace and optimal transport based UDA (PLSS-OT-UDA), which eliminates distribution differences between the source and target subspaces via partial least squares (PLS) dimensionality reduction and optimal transport (OT). First, the optimal latent-variable weight matrix of the source domain (labeled ¹H NMR data from 2017) is extracted with PLS. Next, the dimensions of both the source and the target domain (unlabeled ¹H NMR data from 2018) are reduced, and their corresponding subspaces are obtained with the weight matrix of the source domain. Finally, OT is employed to align the distributions of the source and target domains within the subspace. Experimental results on the honey dataset demonstrate that PLSS-OT-UDA outperforms traditional methods, including transfer component analysis (TCA), optimal transport for domain adaptation (OTDA), domain adaptation based on principal component analysis and optimal transport (PCA-OTDA), and subspace alignment (SA), with respect to generalization performance on three components: Baumé degree, sugar content, and water content.