arXiv - STAT - Machine Learning: Latest Articles

PieClam: A Universal Graph Autoencoder Based on Overlapping Inclusive and Exclusive Communities
arXiv - STAT - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.11618
Daniel Zilberg, Ron Levie
Abstract: We propose PieClam (Prior Inclusive Exclusive Cluster Affiliation Model): a probabilistic graph model for representing any graph as overlapping generalized communities. Our method can be interpreted as a graph autoencoder: nodes are embedded into a code space by an algorithm that maximizes the log-likelihood of the decoded graph, given the input graph. PieClam is a community affiliation model that extends well-known methods like BigClam in two main ways. First, in addition to defining the decoder via pairwise interactions between the nodes in the code space, we incorporate a learned prior on the distribution of nodes in the code space, turning our method into a graph generative model. Second, we generalize the notion of communities by allowing not only sets of nodes with strong connectivity, which we call inclusive communities, but also sets of nodes with strong disconnection, which we call exclusive communities. To model both types of communities, we propose a new type of decoder based on the Lorentz inner product, which we prove to be much more expressive than standard decoders based on standard inner products or norm distances. By introducing a new graph similarity measure, which we call the log cut distance, we show that PieClam is a universal autoencoder, able to uniformly approximately reconstruct any graph. Our method obtains competitive performance on graph anomaly detection benchmarks.
Citations: 0
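For intuition, a toy sketch of a Lorentz-style pair decoder (our own illustration, not the authors' implementation): the Lorentz inner product carries a negative sign on the first coordinate, so a pair score can be attractive or repulsive — the ingredient that allows exclusive as well as inclusive communities. The clipping and the BigClam-style link function below are hypothetical choices for the sketch.

```python
import numpy as np

def lorentz_inner(x, y):
    """Lorentz inner product: negative sign on the first coordinate."""
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def edge_prob(x, y):
    """BigClam-style link probability from a clipped pair score
    (hypothetical decoder, for illustration only)."""
    score = max(lorentz_inner(x, y), 0.0)
    return 1.0 - np.exp(-score)

x = np.array([1.0, 2.0, 0.5])
y = np.array([1.0, 1.5, 1.0])
p = edge_prob(x, y)  # a valid probability in [0, 1)
```

Because the Lorentz form is indefinite, two nodes can have a large negative score (strong disconnection) that a standard inner-product decoder cannot express.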
Cartan moving frames and the data manifolds
arXiv - STAT - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.12057
Eliot Tron, Rita Fioresi, Nicolas Couellan, Stéphane Puechmorel
Abstract: The purpose of this paper is to employ the language of Cartan moving frames to study the geometry of data manifolds and their Riemannian structure, via the data information metric and its curvature at data points. Using this framework, and through experiments, explanations of a neural network's response are given by pointing out the output classes that are easily reachable from a given input. This emphasizes how the proposed mathematical relationship between the output of the network and the geometry of its inputs can be exploited as an explainable artificial intelligence tool.
Citations: 0
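As a rough numerical sketch of a data information metric (our own toy construction under simplifying assumptions, not the paper's code): pull back the Fisher information of a softmax classifier to input space, G(x) = Jᵀ diag(1/p) J, where J is the Jacobian of the class probabilities with respect to the input, here estimated by finite differences.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def jac_probs(W, x, eps=1e-5):
    """Finite-difference Jacobian of class probabilities w.r.t. the input."""
    d = len(x)
    J = np.zeros((W.shape[0], d))
    for i in range(d):
        e = np.zeros(d); e[i] = eps
        J[:, i] = (softmax(W @ (x + e)) - softmax(W @ (x - e))) / (2 * eps)
    return J

W = np.array([[1.0, -0.5], [0.2, 0.7], [-0.3, 0.1]])  # toy linear classifier
x = np.array([0.5, -0.2])
p = softmax(W @ x)
J = jac_probs(W, x)
G = J.T @ np.diag(1.0 / p) @ J  # Fisher metric pulled back to input space
```

The resulting G is a symmetric positive semi-definite matrix at each data point; directions with large G-norm are those along which the output classes change fastest.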
Symmetry-Based Structured Matrices for Efficient Approximately Equivariant Networks
arXiv - STAT - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.11772
Ashwin Samudre, Mircea Petrache, Brian D. Nord, Shubhendu Trivedi
Abstract: There has been much recent interest in designing symmetry-aware neural networks (NNs) exhibiting relaxed equivariance. Such NNs aim to interpolate between being exactly equivariant and being fully flexible, affording consistent performance benefits. In a separate line of work, certain structured parameter matrices -- those with displacement structure, characterized by low displacement rank (LDR) -- have been used to design small-footprint NNs. Displacement structure enables fast function and gradient evaluation, but permits accurate approximations via compression primarily for classical convolutional neural networks (CNNs). In this work, we propose a general framework -- based on a novel construction of symmetry-based structured matrices -- to build approximately equivariant NNs with significantly reduced parameter counts. Our framework integrates the two aforementioned lines of work via so-called Group Matrices (GMs), a forgotten precursor to the modern notion of regular representations of finite groups. GMs allow the design of structured matrices -- resembling LDR matrices -- which generalize the linear operations of a classical CNN from cyclic groups to general finite groups and their homogeneous spaces. We show that GMs can be employed to extend all the elementary operations of CNNs to general discrete groups. Further, the theory of structured matrices based on GMs generalizes LDR theory, which focuses on matrices with cyclic structure, providing a tool for implementing approximate equivariance for discrete groups. We test GM-based architectures on a variety of tasks in the presence of relaxed symmetry. We report that our framework consistently performs competitively with approximately equivariant NNs and other structured matrix-based compression frameworks, sometimes with one to two orders of magnitude fewer parameters.
Citations: 0
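A group matrix for a finite group G has entries M[g, h] = f(g⁻¹h) for some function f on G. For the cyclic group Z_n this recovers exactly a circulant matrix — the linear operation of a classical CNN layer. A minimal sketch for Z_n (our own illustration of the general construction):

```python
import numpy as np

def cyclic_group_matrix(f):
    """Group matrix for the cyclic group Z_n: M[g, h] = f((h - g) mod n).
    For Z_n this is a circulant matrix, i.e. a circular-convolution operator."""
    n = len(f)
    return np.array([[f[(h - g) % n] for h in range(n)] for g in range(n)])

f = np.array([1.0, 2.0, 0.0, -1.0])
M = cyclic_group_matrix(f)

# Applying M to a one-hot vector e_0 reads out column 0, i.e. f evaluated
# at the group inverses: [f(0), f(-1 mod n), f(-2 mod n), ...].
y = M @ np.eye(4)[0]
```

Replacing `(h - g) % n` with composition in an arbitrary finite group (e.g. via its Cayley table) yields the general GM; each row is then a group-translated copy of f, which is the weight-sharing pattern the paper exploits.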
Learning Unstable Continuous-Time Stochastic Linear Control Systems
arXiv - STAT - Machine Learning Pub Date: 2024-09-17 DOI: arxiv-2409.11327
Reza Sadeghi Hafshejani, Mohamad Kazem Shirani Fradonbeh
Abstract: We study the problem of system identification for stochastic continuous-time dynamics, based on a single finite-length state trajectory. We present a method for estimating the possibly unstable open-loop matrix by employing properly randomized control inputs. Then, we establish theoretical performance guarantees showing that the estimation error decays with trajectory length, a measure of excitability, and the signal-to-noise ratio, while it grows with dimension. Numerical illustrations showcasing the rates of learning the dynamics are provided as well. To perform the theoretical analysis, we develop new technical tools that are of independent interest, including non-asymptotic stochastic bounds for highly non-stationary martingales and generalized laws of iterated logarithms.
Citations: 0
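The basic estimation idea can be sketched with a plain least-squares regression of finite differences on states (a toy version without the authors' randomized inputs or guarantees; the dynamics dx = A x dt + σ dW and all parameter values below are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.2, 1.0],
                   [0.0, 0.1]])     # unstable open-loop matrix (eigenvalues > 0)
dt, T, sigma = 1e-3, 20000, 0.1

# Simulate dx = A x dt + sigma dW with Euler-Maruyama.
X = np.zeros((T, 2))
X[0] = [1.0, 1.0]
for t in range(T - 1):
    noise = rng.normal(scale=np.sqrt(dt), size=2)
    X[t + 1] = X[t] + dt * (A_true @ X[t]) + sigma * noise

# Least squares: regress finite differences (X[t+1]-X[t])/dt on X[t].
dX = (X[1:] - X[:-1]) / dt
A_hat = np.linalg.lstsq(X[:-1], dX, rcond=None)[0].T
```

Because the trajectory of an unstable system grows exponentially, the signal energy accumulates quickly and the estimation error shrinks with trajectory length, matching the qualitative behavior the abstract describes.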
Latent mixed-effect models for high-dimensional longitudinal data
arXiv - STAT - Machine Learning Pub Date: 2024-09-17 DOI: arxiv-2409.11008
Priscilla Ong, Manuel Haußmann, Otto Lönnroth, Harri Lähdesmäki
Abstract: Modelling longitudinal data is an important yet challenging task. These datasets can be high-dimensional, contain non-linear effects, and include time-varying covariates. Gaussian process (GP) prior-based variational autoencoders (VAEs) have emerged as a promising approach due to their ability to model time-series data. However, they are costly to train and struggle to fully exploit the rich covariates characteristic of longitudinal data, making them difficult for practitioners to use effectively. In this work, we leverage linear mixed models (LMMs) and amortized variational inference to provide conditional priors for VAEs, and propose LMM-VAE, a scalable, interpretable and identifiable model. We highlight theoretical connections between it and GP-based techniques, providing a unified framework for this class of methods. Our proposal performs competitively compared to existing approaches across simulated and real-world datasets.
Citations: 0
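The LMM ingredient can be sketched in a few lines (a minimal illustration with made-up shapes, not the paper's model): a linear mixed model combines fixed-effect covariates X with a random-effect design Z, and the resulting mean X β + Z b can serve as the conditional prior mean for the VAE latent codes.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q, d = 8, 3, 2, 4             # observations, fixed effects, random effects, latent dim
X = rng.normal(size=(n, p))          # fixed-effect covariates (e.g. time, treatment)
Z = rng.normal(size=(n, q))          # random-effect design (e.g. per-subject terms)
beta = rng.normal(size=(p, d))       # fixed-effect weights, one column per latent dim
b = 0.1 * rng.normal(size=(q, d))    # random-effect weights

# LMM-style conditional prior mean for the latent codes: z_i ~ N(mean_i, s^2 I).
prior_mean = X @ beta + Z @ b        # shape (n, d)
```

Because the prior mean is a linear function of interpretable covariates, each latent dimension inherits an interpretable decomposition into fixed and random effects — the property the abstract highlights.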
On the generalization ability of coarse-grained molecular dynamics models for non-equilibrium processes
arXiv - STAT - Machine Learning Pub Date: 2024-09-17 DOI: arxiv-2409.11519
Liyao Lyu, Huan Lei
Abstract: One essential goal of constructing coarse-grained molecular dynamics (CGMD) models is to accurately predict non-equilibrium processes beyond the atomistic scale. While a CG model can be constructed by projecting the full dynamics onto a set of resolved variables, the dynamics of the CG variables can recover the full dynamics only when the conditional distribution of the unresolved variables is close to the one associated with the particular projection operator. In particular, the model's applicability to various non-equilibrium processes is generally unwarranted due to inconsistency in the conditional distribution. Here, we present a data-driven approach for constructing CGMD models that retain a certain generalization ability for non-equilibrium processes. Unlike conventional CG models based on pre-selected CG variables (e.g., the center of mass), the present CG model seeks a set of auxiliary CG variables, based on time-lagged independent component analysis, that minimize the entropy contribution of the unresolved variables. This ensures that the distribution of the unresolved variables under a broad range of non-equilibrium conditions approaches the one under equilibrium. Numerical results for a polymer melt system demonstrate the significance of this broadly overlooked metric for the model's generalization ability, and the effectiveness of the present CG model for predicting complex viscoelastic responses under various non-equilibrium flows.
Citations: 0
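Time-lagged independent component analysis (TICA), the variable-selection ingredient mentioned above, finds the slowest linear modes of a trajectory by solving the generalized eigenproblem C_lag v = λ C₀ v. A toy sketch (our own construction, not the paper's pipeline), here solved by whitening with C₀^(-1/2):

```python
import numpy as np

def tica(X, lag):
    """Toy TICA: slowest linear modes of trajectory X (frames x features)."""
    X = X - X.mean(axis=0)
    A, B = X[:-lag], X[lag:]
    C0 = (A.T @ A + B.T @ B) / (2 * len(A))   # instantaneous covariance
    Ct = (A.T @ B + B.T @ A) / (2 * len(A))   # symmetrized lagged covariance
    w, U = np.linalg.eigh(C0)
    W = U @ np.diag(w ** -0.5) @ U.T          # whitening transform C0^(-1/2)
    lam, V = np.linalg.eigh(W @ Ct @ W)       # symmetric problem after whitening
    order = np.argsort(lam)[::-1]
    return lam[order], (W @ V)[:, order]      # autocorrelations, components

# Trajectory with one slow AR(1) coordinate and two fast (white) coordinates.
rng = np.random.default_rng(2)
T = 5000
slow = np.zeros(T)
for t in range(1, T):
    slow[t] = 0.99 * slow[t - 1] + rng.normal()
X = np.column_stack([slow, rng.normal(size=T), rng.normal(size=T)])
lam, comps = tica(X, lag=10)
```

The leading TICA eigenvalue is the lagged autocorrelation of the slowest mode, so the leading component should pick out the slow coordinate while the white-noise coordinates score near zero.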
Outlier Detection with Cluster Catch Digraphs
arXiv - STAT - Machine Learning Pub Date: 2024-09-17 DOI: arxiv-2409.11596
Rui Shi, Nedret Billor, Elvan Ceyhan
Abstract: This paper introduces a novel family of outlier detection algorithms based on Cluster Catch Digraphs (CCDs), specifically tailored to address the challenges of high dimensionality and varying cluster shapes, which deteriorate the performance of most traditional outlier detection methods. We propose the Uniformity-Based CCD with Mutual Catch Graph (U-MCCD), the Uniformity- and Neighbor-Based CCD with Mutual Catch Graph (UN-MCCD), and their shape-adaptive variants (SU-MCCD and SUN-MCCD), which are designed to detect outliers in data sets with arbitrary cluster shapes and high dimensions. We present the advantages and shortcomings of these algorithms and provide the motivation for defining each particular algorithm. Through comprehensive Monte Carlo simulations, we assess their performance and demonstrate the robustness and effectiveness of our algorithms across various settings and contamination levels. We also illustrate the use of our algorithms on various real-life data sets. The U-MCCD algorithm efficiently identifies outliers while maintaining high true negative rates, and the SU-MCCD algorithm shows substantial improvement in handling non-uniform clusters. Additionally, the UN-MCCD and SUN-MCCD algorithms address the limitations of existing methods in high-dimensional spaces by utilizing Nearest Neighbor Distances (NND) for clustering and outlier detection. Our results indicate that these novel algorithms offer substantial advancements in the accuracy and adaptability of outlier detection, providing a valuable tool for various real-world applications.
Keywords: outlier detection, graph-based clustering, cluster catch digraphs, $k$-nearest neighborhood, mutual catch graphs, nearest neighbor distance.
Citations: 0
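The Nearest Neighbor Distance (NND) ingredient can be illustrated with the simplest possible score — the distance of each point to its k-th nearest neighbor (a toy stand-in for one component of the UN-MCCD/SUN-MCCD algorithms, not the algorithms themselves):

```python
import numpy as np

def knn_outlier_scores(X, k=3):
    """Score each point by its distance to its k-th nearest neighbor.
    Large scores indicate points far from any local neighborhood."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D_sorted = np.sort(D, axis=1)   # column 0 is the zero self-distance
    return D_sorted[:, k]           # k-th nearest neighbor distance

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(size=(50, 2)),   # one dense cluster
               [[8.0, 8.0]]])              # one planted outlier
scores = knn_outlier_scores(X, k=3)
```

The planted point at (8, 8) sits far from the N(0, 1) cluster, so its 3-NN distance dominates every in-cluster score; thresholding such scores is the basic NND outlier rule.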
The Sample Complexity of Smooth Boosting and the Tightness of the Hardcore Theorem
arXiv - STAT - Machine Learning Pub Date: 2024-09-17 DOI: arxiv-2409.11597
Guy Blanc, Alexandre Hayderi, Caleb Koch, Li-Yang Tan
Abstract: Smooth boosters generate distributions that do not place too much weight on any given example. Originally introduced for their noise-tolerant properties, such boosters have also found applications in differential privacy, reproducibility, and quantum learning theory. We study and settle the sample complexity of smooth boosting: we exhibit a class that can be weakly learned to $\gamma$-advantage over smooth distributions with $m$ samples, for which strong learning over the uniform distribution requires $\tilde{\Omega}(1/\gamma^2) \cdot m$ samples. This matches the overhead of existing smooth boosters and provides the first separation from the setting of distribution-independent boosting, for which the corresponding overhead is $O(1/\gamma)$. Our work also sheds new light on Impagliazzo's hardcore theorem from complexity theory, all known proofs of which can be cast in the framework of smooth boosting. For a function $f$ that is mildly hard against size-$s$ circuits, the hardcore theorem provides a set of inputs on which $f$ is extremely hard against size-$s'$ circuits. A downside of this important result is the loss in circuit size, i.e., that $s' \ll s$. Answering a question of Trevisan, we show that this size loss is necessary and, in fact, that the parameters achieved by known proofs are the best possible.
Citations: 0
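Concretely, a smooth distribution over $m$ examples places at most some cap (e.g. $\kappa/m$) of mass on any single example. One simple way to enforce such a cap — a toy sketch of our own, not a procedure from the paper — is to clip overweight examples and spread the excess over the rest:

```python
import numpy as np

def cap_distribution(w, cap):
    """Turn nonnegative weights into a distribution whose per-example mass
    is at most `cap`, by iteratively clipping excess and redistributing it
    proportionally over the not-yet-capped examples (assumes those stay positive)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    assert cap * len(w) >= 1.0, "cap too small to hold a distribution"
    capped = np.zeros(len(w), dtype=bool)
    for _ in range(len(w)):
        over = w > cap + 1e-12
        if not over.any():
            break
        capped |= over
        excess = (w[over] - cap).sum()
        w[over] = cap
        free = ~capped
        if not free.any():
            break
        w[free] += excess * w[free] / w[free].sum()
    return w

w = cap_distribution([0.9, 0.05, 0.03, 0.02], cap=0.4)
```

A booster restricted to such distributions can never concentrate on a few (possibly noisy) examples — the property behind the noise-tolerance and privacy applications mentioned above.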
Fractional Naive Bayes (FNB): non-convex optimization for a parsimonious weighted selective naive Bayes classifier
arXiv - STAT - Machine Learning Pub Date: 2024-09-17 DOI: arxiv-2409.11100
Carine Hue, Marc Boullé
Abstract: We study supervised classification for datasets with a very large number of input variables. The naïve Bayes classifier is attractive for its simplicity, scalability and effectiveness in many real data applications. When the strong naïve Bayes assumption of conditional independence of the input variables given the target variable is not valid, variable selection and model averaging are two common ways to improve the performance. In the case of the naïve Bayes classifier, the resulting weighting scheme on the models reduces to a weighting scheme on the variables. Here we focus on direct estimation of variable weights in such a weighted naïve Bayes classifier. We propose a sparse regularization of the model log-likelihood, which takes into account prior penalization costs related to each input variable. Compared to averaging-based classifiers used up until now, our main goal is to obtain parsimonious robust models with fewer variables and equivalent performance. The direct estimation of the variable weights amounts to a non-convex optimization problem, for which we propose and compare several two-stage algorithms. First, the criterion obtained by convex relaxation is minimized using several variants of standard gradient methods. Then, the initial non-convex optimization problem is solved using local optimization methods initialized with the result of the first stage. The various proposed algorithms result in optimization-based weighted naïve Bayes classifiers, which are evaluated on benchmark datasets and positioned with respect to a reference averaging-based classifier.
Citations: 0
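The weighted naïve Bayes score itself is simple (a minimal illustration with made-up numbers, not the paper's optimization): each variable's log-likelihood contribution is scaled by a weight $w_j \in [0, 1]$, where $w_j = 0$ drops the variable, $w_j = 1$ keeps it fully, and fractional values interpolate.

```python
import numpy as np

def weighted_nb_log_posterior(log_prior, log_cond, weights):
    """Weighted naive Bayes: log p(y) + sum_j w_j * log p(x_j | y),
    normalized into a log-posterior. log_cond: (n_classes, n_vars)."""
    scores = log_prior + log_cond @ weights
    return scores - np.logaddexp.reduce(scores)

log_prior = np.log([0.5, 0.5])
log_cond = np.log([[0.9, 0.2],      # p(x_j | y=0) for one observed x
                   [0.1, 0.8]])     # p(x_j | y=1)
post_full = weighted_nb_log_posterior(log_prior, log_cond, np.array([1.0, 1.0]))
post_drop = weighted_nb_log_posterior(log_prior, log_cond, np.array([1.0, 0.0]))
```

Here the two variables pull in opposite directions: dropping the second (weight 0) yields a more confident posterior for class 0 than using both at full weight — the kind of trade-off the sparse weight optimization navigates at scale.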
Partially Observable Contextual Bandits with Linear Payoffs
arXiv - STAT - Machine Learning Pub Date: 2024-09-17 DOI: arxiv-2409.11521
Sihan Zeng, Sujay Bhatt, Alec Koppel, Sumitra Ganesh
Abstract: The standard contextual bandit framework assumes fully observable and actionable contexts. In this work, we consider a new bandit setting with partially observable, correlated contexts and linear payoffs, motivated by applications in finance where decision making is based on market information that typically displays temporal correlation and is not fully observed. We make the following contributions, marrying ideas from statistical signal processing with bandits: (i) we propose an algorithmic pipeline named EMKF-Bandit, which integrates system identification, filtering, and classic contextual bandit algorithms into an iterative method alternating between latent parameter estimation and decision making; (ii) we analyze EMKF-Bandit when Thompson sampling is selected as the bandit algorithm and show that it incurs sub-linear regret under conditions on filtering; and (iii) we conduct numerical simulations that demonstrate the benefits and practical applicability of the proposed pipeline.
Citations: 0
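The filtering component can be illustrated with one Kalman predict/update step for a linear-Gaussian latent context (a generic textbook filter with toy matrices of our own choosing, not the authors' EMKF-Bandit pipeline): x_{t+1} = A x_t + w, y_t = C x_t + v, where only part of the context is observed through C.

```python
import numpy as np

def kalman_step(m, P, y, A, C, Q, R):
    """One Kalman filter predict/update step for the latent context estimate."""
    # Predict through the dynamics.
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update with the partial observation y.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    m_new = m_pred + K @ (y - C @ m_pred)
    P_new = (np.eye(len(m)) - K @ C) @ P_pred
    return m_new, P_new

A = np.array([[0.9, 0.1], [0.0, 0.8]])  # temporally correlated context dynamics
C = np.array([[1.0, 0.0]])              # only the first coordinate is observed
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
m, P = np.zeros(2), np.eye(2)
m, P = kalman_step(m, P, np.array([1.0]), A, C, Q, R)
```

The filtered mean m is what a downstream bandit algorithm (e.g. Thompson sampling) would use in place of the unobserved context, and the covariance P quantifies the residual uncertainty the regret analysis must account for.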