arXiv - CS - Machine Learning: Latest Articles

Reinforcement Learning as an Improvement Heuristic for Real-World Production Scheduling
arXiv - CS - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.11933
Arthur Müller, Lukas Vollenkemper
{"title":"Reinforcement Learning as an Improvement Heuristic for Real-World Production Scheduling","authors":"Arthur Müller, Lukas Vollenkemper","doi":"arxiv-2409.11933","DOIUrl":"https://doi.org/arxiv-2409.11933","url":null,"abstract":"The integration of Reinforcement Learning (RL) with heuristic methods is an\u0000emerging trend for solving optimization problems, which leverages RL's ability\u0000to learn from the data generated during the search process. One promising\u0000approach is to train an RL agent as an improvement heuristic, starting with a\u0000suboptimal solution that is iteratively improved by applying small changes. We\u0000apply this approach to a real-world multiobjective production scheduling\u0000problem. Our approach utilizes a network architecture that includes Transformer\u0000encoding to learn the relationships between jobs. Afterwards, a probability\u0000matrix is generated from which pairs of jobs are sampled and then swapped to\u0000improve the solution. We benchmarked our approach against other heuristics\u0000using real data from our industry partner, demonstrating its superior\u0000performance.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Explainable Machine Learning Approach to Traffic Accident Fatality Prediction
arXiv - CS - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.11929
Md. Asif Khan Rifat, Ahmedul Kabir, Armana Sabiha Huq
{"title":"An Explainable Machine Learning Approach to Traffic Accident Fatality Prediction","authors":"Md. Asif Khan Rifat, Ahmedul Kabir, Armana Sabiha Huq","doi":"arxiv-2409.11929","DOIUrl":"https://doi.org/arxiv-2409.11929","url":null,"abstract":"Road traffic accidents (RTA) pose a significant public health threat\u0000worldwide, leading to considerable loss of life and economic burdens. This is\u0000particularly acute in developing countries like Bangladesh. Building reliable\u0000models to forecast crash outcomes is crucial for implementing effective\u0000preventive measures. To aid in developing targeted safety interventions, this\u0000study presents a machine learning-based approach for classifying fatal and\u0000non-fatal road accident outcomes using data from the Dhaka metropolitan traffic\u0000crash database from 2017 to 2022. Our framework utilizes a range of machine\u0000learning classification algorithms, comprising Logistic Regression, Support\u0000Vector Machines, Naive Bayes, Random Forest, Decision Tree, Gradient Boosting,\u0000LightGBM, and Artificial Neural Network. We prioritize model interpretability\u0000by employing the SHAP (SHapley Additive exPlanations) method, which elucidates\u0000the key factors influencing accident fatality. Our results demonstrate that\u0000LightGBM outperforms other models, achieving a ROC-AUC score of 0.72. The\u0000global, local, and feature dependency analyses are conducted to acquire deeper\u0000insights into the behavior of the model. SHAP analysis reveals that casualty\u0000class, time of accident, location, vehicle type, and road type play pivotal\u0000roles in determining fatality risk. These findings offer valuable insights for\u0000policymakers and road safety practitioners in developing countries, enabling\u0000the implementation of evidence-based strategies to reduce traffic crash\u0000fatalities.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"88 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An efficient wavelet-based physics-informed neural networks for singularly perturbed problems
arXiv - CS - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.11847
Himanshu Pandey, Anshima Singh, Ratikanta Behera
{"title":"An efficient wavelet-based physics-informed neural networks for singularly perturbed problems","authors":"Himanshu Pandey, Anshima Singh, Ratikanta Behera","doi":"arxiv-2409.11847","DOIUrl":"https://doi.org/arxiv-2409.11847","url":null,"abstract":"Physics-informed neural networks (PINNs) are a class of deep learning models\u0000that utilize physics as differential equations to address complex problems,\u0000including ones that may involve limited data availability. However, tackling\u0000solutions of differential equations with oscillations or singular perturbations\u0000and shock-like structures becomes challenging for PINNs. Considering these\u0000challenges, we designed an efficient wavelet-based PINNs (W-PINNs) model to\u0000solve singularly perturbed differential equations. Here, we represent the\u0000solution in wavelet space using a family of smooth-compactly supported\u0000wavelets. This framework represents the solution of a differential equation\u0000with significantly fewer degrees of freedom while still retaining in capturing,\u0000identifying, and analyzing the local structure of complex physical phenomena.\u0000The architecture allows the training process to search for a solution within\u0000wavelet space, making the process faster and more accurate. The proposed model\u0000does not rely on automatic differentiations for derivatives involved in\u0000differential equations and does not require any prior information regarding the\u0000behavior of the solution, such as the location of abrupt features. Thus,\u0000through a strategic fusion of wavelets with PINNs, W-PINNs excel at capturing\u0000localized nonlinear information, making them well-suited for problems showing\u0000abrupt behavior in certain regions, such as singularly perturbed problems. The\u0000efficiency and accuracy of the proposed neural network model are demonstrated\u0000in various test problems, i.e., highly singularly perturbed nonlinear\u0000differential equations, the FitzHugh-Nagumo (FHN), and Predator-prey\u0000interaction models. The proposed design model exhibits impressive comparisons\u0000with traditional PINNs and the recently developed wavelet-based PINNs, which\u0000use wavelets as an activation function for solving nonlinear differential\u0000equations.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Almost Sure Convergence of Linear Temporal Difference Learning with Arbitrary Features
arXiv - CS - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.12135
Jiuqi Wang, Shangtong Zhang
{"title":"Almost Sure Convergence of Linear Temporal Difference Learning with Arbitrary Features","authors":"Jiuqi Wang, Shangtong Zhang","doi":"arxiv-2409.12135","DOIUrl":"https://doi.org/arxiv-2409.12135","url":null,"abstract":"Temporal difference (TD) learning with linear function approximation,\u0000abbreviated as linear TD, is a classic and powerful prediction algorithm in\u0000reinforcement learning. While it is well understood that linear TD converges\u0000almost surely to a unique point, this convergence traditionally requires the\u0000assumption that the features used by the approximator are linearly independent.\u0000However, this linear independence assumption does not hold in many practical\u0000scenarios. This work is the first to establish the almost sure convergence of\u0000linear TD without requiring linearly independent features. In fact, we do not\u0000make any assumptions on the features. We prove that the approximated value\u0000function converges to a unique point and the weight iterates converge to a set.\u0000We also establish a notion of local stability of the weight iterates.\u0000Importantly, we do not need to introduce any other additional assumptions and\u0000do not need to make any modification to the linear TD algorithm. Key to our\u0000analysis is a novel characterization of bounded invariant sets of the mean ODE\u0000of linear TD.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"205 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Art and Science of Quantizing Large-Scale Models: A Comprehensive Overview
arXiv - CS - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.11650
Yanshu Wang, Tong Yang, Xiyan Liang, Guoan Wang, Hanning Lu, Xu Zhe, Yaoming Li, Li Weitao
{"title":"Art and Science of Quantizing Large-Scale Models: A Comprehensive Overview","authors":"Yanshu Wang, Tong Yang, Xiyan Liang, Guoan Wang, Hanning Lu, Xu Zhe, Yaoming Li, Li Weitao","doi":"arxiv-2409.11650","DOIUrl":"https://doi.org/arxiv-2409.11650","url":null,"abstract":"This paper provides a comprehensive overview of the principles, challenges,\u0000and methodologies associated with quantizing large-scale neural network models.\u0000As neural networks have evolved towards larger and more complex architectures\u0000to address increasingly sophisticated tasks, the computational and energy costs\u0000have escalated significantly. We explore the necessity and impact of model size\u0000growth, highlighting the performance benefits as well as the computational\u0000challenges and environmental considerations. The core focus is on model\u0000quantization as a fundamental approach to mitigate these challenges by reducing\u0000model size and improving efficiency without substantially compromising\u0000accuracy. We delve into various quantization techniques, including both\u0000post-training quantization (PTQ) and quantization-aware training (QAT), and\u0000analyze several state-of-the-art algorithms such as LLM-QAT, PEQA(L4Q),\u0000ZeroQuant, SmoothQuant, and others. Through comparative analysis, we examine\u0000how these methods address issues like outliers, importance weighting, and\u0000activation quantization, ultimately contributing to more sustainable and\u0000accessible deployment of large-scale models.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"30 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Monomial Matrix Group Equivariant Neural Functional Networks
arXiv - CS - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.11697
Hoang V. Tran, Thieu N. Vo, Tho H. Tran, An T. Nguyen, Tan Minh Nguyen
{"title":"Monomial Matrix Group Equivariant Neural Functional Networks","authors":"Hoang V. Tran, Thieu N. Vo, Tho H. Tran, An T. Nguyen, Tan Minh Nguyen","doi":"arxiv-2409.11697","DOIUrl":"https://doi.org/arxiv-2409.11697","url":null,"abstract":"Neural functional networks (NFNs) have recently gained significant attention\u0000due to their diverse applications, ranging from predicting network\u0000generalization and network editing to classifying implicit neural\u0000representation. Previous NFN designs often depend on permutation symmetries in\u0000neural networks' weights, which traditionally arise from the unordered\u0000arrangement of neurons in hidden layers. However, these designs do not take\u0000into account the weight scaling symmetries of $operatorname{ReLU}$ networks,\u0000and the weight sign flipping symmetries of $operatorname{sin}$ or\u0000$operatorname{tanh}$ networks. In this paper, we extend the study of the group\u0000action on the network weights from the group of permutation matrices to the\u0000group of monomial matrices by incorporating scaling/sign-flipping symmetries.\u0000Particularly, we encode these scaling/sign-flipping symmetries by designing our\u0000corresponding equivariant and invariant layers. We name our new family of NFNs\u0000the Monomial Matrix Group Equivariant Neural Functional Networks\u0000(Monomial-NFN). Because of the expansion of the symmetries, Monomial-NFN has\u0000much fewer independent trainable parameters compared to the baseline NFNs in\u0000the literature, thus enhancing the model's efficiency. Moreover, for fully\u0000connected and convolutional neural networks, we theoretically prove that all\u0000groups that leave these networks invariant while acting on their weight spaces\u0000are some subgroups of the monomial matrix group. We provide empirical evidences\u0000to demonstrate the advantages of our model over existing baselines, achieving\u0000competitive performance and efficiency.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"88 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Tight and Efficient Upper Bound on Spectral Norm of Convolutional Layers
arXiv - CS - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.11859
Ekaterina Grishina, Mikhail Gorbunov, Maxim Rakhuba
{"title":"Tight and Efficient Upper Bound on Spectral Norm of Convolutional Layers","authors":"Ekaterina Grishina, Mikhail Gorbunov, Maxim Rakhuba","doi":"arxiv-2409.11859","DOIUrl":"https://doi.org/arxiv-2409.11859","url":null,"abstract":"Controlling the spectral norm of the Jacobian matrix, which is related to the\u0000convolution operation, has been shown to improve generalization, training\u0000stability and robustness in CNNs. Existing methods for computing the norm\u0000either tend to overestimate it or their performance may deteriorate quickly\u0000with increasing the input and kernel sizes. In this paper, we demonstrate that\u0000the tensor version of the spectral norm of a four-dimensional convolution\u0000kernel, up to a constant factor, serves as an upper bound for the spectral norm\u0000of the Jacobian matrix associated with the convolution operation. This new\u0000upper bound is independent of the input image resolution, differentiable and\u0000can be efficiently calculated during training. Through experiments, we\u0000demonstrate how this new bound can be used to improve the performance of\u0000convolutional architectures.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Topological Deep Learning with State-Space Models: A Mamba Approach for Simplicial Complexes
arXiv - CS - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.12033
Marco Montagna, Simone Scardapane, Lev Telyatnikov
{"title":"Topological Deep Learning with State-Space Models: A Mamba Approach for Simplicial Complexes","authors":"Marco Montagna, Simone Scardapane, Lev Telyatnikov","doi":"arxiv-2409.12033","DOIUrl":"https://doi.org/arxiv-2409.12033","url":null,"abstract":"Graph Neural Networks based on the message-passing (MP) mechanism are a\u0000dominant approach for handling graph-structured data. However, they are\u0000inherently limited to modeling only pairwise interactions, making it difficult\u0000to explicitly capture the complexity of systems with $n$-body relations. To\u0000address this, topological deep learning has emerged as a promising field for\u0000studying and modeling higher-order interactions using various topological\u0000domains, such as simplicial and cellular complexes. While these new domains\u0000provide powerful representations, they introduce new challenges, such as\u0000effectively modeling the interactions among higher-order structures through\u0000higher-order MP. Meanwhile, structured state-space sequence models have proven\u0000to be effective for sequence modeling and have recently been adapted for graph\u0000data by encoding the neighborhood of a node as a sequence, thereby avoiding the\u0000MP mechanism. In this work, we propose a novel architecture designed to operate\u0000with simplicial complexes, utilizing the Mamba state-space model as its\u0000backbone. Our approach generates sequences for the nodes based on the\u0000neighboring cells, enabling direct communication between all higher-order\u0000structures, regardless of their rank. We extensively validate our model,\u0000demonstrating that it achieves competitive performance compared to\u0000state-of-the-art models developed for simplicial complexes.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Efficient Model-Agnostic Approach for Uncertainty Estimation in Data-Restricted Pedometric Applications
arXiv - CS - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.11985
Viacheslav Barkov, Jonas Schmidinger, Robin Gebbers, Martin Atzmueller
{"title":"An Efficient Model-Agnostic Approach for Uncertainty Estimation in Data-Restricted Pedometric Applications","authors":"Viacheslav Barkov, Jonas Schmidinger, Robin Gebbers, Martin Atzmueller","doi":"arxiv-2409.11985","DOIUrl":"https://doi.org/arxiv-2409.11985","url":null,"abstract":"This paper introduces a model-agnostic approach designed to enhance\u0000uncertainty estimation in the predictive modeling of soil properties, a crucial\u0000factor for advancing pedometrics and the practice of digital soil mapping. For\u0000addressing the typical challenge of data scarcity in soil studies, we present\u0000an improved technique for uncertainty estimation. This method is based on the\u0000transformation of regression tasks into classification problems, which not only\u0000allows for the production of reliable uncertainty estimates but also enables\u0000the application of established machine learning algorithms with competitive\u0000performance that have not yet been utilized in pedometrics. Empirical results\u0000from datasets collected from two German agricultural fields showcase the\u0000practical application of the proposed methodology. Our results and findings\u0000suggest that the proposed approach has the potential to provide better\u0000uncertainty estimation than the models commonly used in pedometrics.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"77 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Extended Deep Submodular Functions
arXiv - CS - Machine Learning Pub Date: 2024-09-18 DOI: arxiv-2409.12053
Seyed Mohammad Hosseini, Arash Jamshid, Seyed Mahdi Noormousavi, Mahdi Jafari Siavoshani, Naeimeh Omidvar
{"title":"Extended Deep Submodular Functions","authors":"Seyed Mohammad Hosseini, Arash Jamshid, Seyed Mahdi Noormousavi, Mahdi Jafari Siavoshani, Naeimeh Omidvar","doi":"arxiv-2409.12053","DOIUrl":"https://doi.org/arxiv-2409.12053","url":null,"abstract":"We introduce a novel category of set functions called Extended Deep\u0000Submodular functions (EDSFs), which are neural network-representable. EDSFs\u0000serve as an extension of Deep Submodular Functions (DSFs), inheriting crucial\u0000properties from DSFs while addressing innate limitations. It is known that DSFs\u0000can represent a limiting subset of submodular functions. In contrast, through\u0000an analysis of polymatroid properties, we establish that EDSFs possess the\u0000capability to represent all monotone submodular functions, a notable\u0000enhancement compared to DSFs. Furthermore, our findings demonstrate that EDSFs\u0000can represent any monotone set function, indicating the family of EDSFs is\u0000equivalent to the family of all monotone set functions. Additionally, we prove\u0000that EDSFs maintain the concavity inherent in DSFs when the components of the\u0000input vector are non-negative real numbers-an essential feature in certain\u0000combinatorial optimization problems. Through extensive experiments, we\u0000illustrate that EDSFs exhibit significantly lower empirical generalization\u0000error than DSFs in the learning of coverage functions. This suggests that EDSFs\u0000present a promising advancement in the representation and learning of set\u0000functions with improved generalization capabilities.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0