J. Data Intell. Latest Publications

Hybrid Metadata Classification in Large-scale Structured Datasets
J. Data Intell. Pub Date : 2022-11-01 DOI: 10.26421/jdi3.4-4
Sophie Pavia, Nick Piraino, Kazi Islam, A. Pyayt, M. Gubanov
{"title":"Hybrid Metadata Classification in Large-scale Structured Datasets","authors":"Sophie Pavia, Nick Piraino, Kazi Islam, A. Pyayt, M. Gubanov","doi":"10.26421/jdi3.4-4","DOIUrl":"https://doi.org/10.26421/jdi3.4-4","url":null,"abstract":"Metadata location and classification is an important problem for large-scale structured datasets. For example, Web tables cite{wt_corpus} have hundreds of millions of tables, but often have missing or incorrect labels for rows (or columns) with attribute names. Such errors cite{wtitles} significantly complicate all data management tasks such as {em query processing, data integration, indexing}, etc. Different sources or authors position metadata rows/columns differently inside a table, which makes its reliable identification challenging.In this work we describe our scalable, hybrid two-layer Deep- and Machine-learning based ensemble, combining Long Short Term Memory (LSTM) and Naive Bayes Classifier to accurately identify Metadata-containing rows or columns in a table. We have performed an extensive evaluation on several datasets, including an ultra large-scale dataset containing more than 15 million tables coming from more than 26 thousands of sources to justify scalability and resistance to variety, stemming from a large number of sources. We observed superiority of this two-layer ensemble, compared to the recent previous approaches and report an impressive 95.73text{%} accuracy at scale with our ensemble model using regular LSTM.","PeriodicalId":232625,"journal":{"name":"J. Data Intell.","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124196280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
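The following is a minimal sketch, not the authors' implementation, of the two-layer idea in the abstract above: a Naive Bayes classifier over bag-of-words features and a small LSTM over integer-encoded tokens each score table rows, and their probabilities are averaged to flag metadata rows. The toy rows, feature choices, layer sizes, and the simple averaging rule are all assumptions for illustration.

```python
# Sketch of a two-layer Naive Bayes + LSTM ensemble for metadata-row detection.
import numpy as np
import tensorflow as tf
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

rows = ["name age city", "alice 34 berlin", "bob 29 paris"]   # toy table rows
labels = np.array([1, 0, 0])                                  # 1 = metadata (header) row

# Layer 1: Naive Bayes on bag-of-words features.
vec = CountVectorizer()
X_bow = vec.fit_transform(rows)
nb = MultinomialNB().fit(X_bow, labels)
p_nb = nb.predict_proba(X_bow)[:, 1]

# Layer 2: a small LSTM over integer-encoded tokens.
tok = tf.keras.preprocessing.text.Tokenizer(num_words=1000)
tok.fit_on_texts(rows)
X_seq = tf.keras.preprocessing.sequence.pad_sequences(tok.texts_to_sequences(rows), maxlen=8)
lstm = tf.keras.Sequential([
    tf.keras.layers.Embedding(1000, 16),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
lstm.compile(optimizer="adam", loss="binary_crossentropy")
lstm.fit(X_seq, labels, epochs=5, verbose=0)
p_lstm = lstm.predict(X_seq, verbose=0).ravel()

# Ensemble: average the two probabilities and threshold.
is_metadata = (p_nb + p_lstm) / 2 > 0.5
print(is_metadata)
```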
Matching Large Biomedical Ontologies Using Symbolic Regression
J. Data Intell. Pub Date : 2022-08-01 DOI: 10.26421/jdi3.3-2
J. Martinez-Gil, Shaoyi Yin, Josef Kung, F. Morvan
{"title":"Matching Large Biomedical Ontologies Using Symbolic Regression Using Symbolic Regression","authors":"J. Martinez-Gil, Shaoyi Yin, Josef Kung, F. Morvan","doi":"10.26421/jdi3.3-2","DOIUrl":"https://doi.org/10.26421/jdi3.3-2","url":null,"abstract":"The problem of ontology matching consists of finding the semantic correspondences between two ontologies that, although belonging to the same domain, have been developed separately. Ontology matching methods are of great importance today since they allow us to find the pivot points from which an automatic data integration process can be established. Unlike the most recent developments based on deep learning, this study presents our research efforts on the development of novel methods for ontology matching that are accurate and interpretable at the same time. For this purpose, we rely on a symbolic regression model (implemented via genetic programming) that has been specifically trained to find the mathematical expression that can solve the ground truth provided by experts accurately. Moreover, our approach offers the possibility of being understood by a human operator and helping the processor to consume as little energy as possible. The experimental evaluation results that we have achieved using several benchmark datasets seem to show that our approach could be promising.","PeriodicalId":232625,"journal":{"name":"J. Data Intell.","volume":"41 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132064442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
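As an illustration of the interpretable-matching idea above, here is a small sketch (not the paper's code) that uses genetic programming via the gplearn library to evolve a readable formula combining two base string similarities into a match score. The similarity features, the toy concept pairs, and the ground-truth scores are assumptions.

```python
# Symbolic regression over similarity features: the evolved program is a
# human-readable expression rather than an opaque model.
import numpy as np
from difflib import SequenceMatcher
from gplearn.genetic import SymbolicRegressor

def char_sim(a, b):
    # Character-level similarity in [0, 1].
    return SequenceMatcher(None, a, b).ratio()

def token_jaccard(a, b):
    # Token-overlap similarity in [0, 1].
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

pairs = [("blood pressure", "blood pressure measurement"),
         ("myocardial infarction", "heart attack"),
         ("aspirin", "acetylsalicylic acid"),
         ("fever", "fever"),
         ("fever", "fracture"),
         ("aspirin", "blood pressure")]
y = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0])   # toy expert ground truth

X = np.array([[char_sim(a, b), token_jaccard(a, b)] for a, b in pairs])

est = SymbolicRegressor(population_size=200, generations=10,
                        function_set=("add", "sub", "mul", "max", "min"),
                        random_state=0)
est.fit(X, y)
print(est._program)   # prints the evolved formula, e.g. max(X0, X1)
```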
Simplified Specification of Data Requirements for Demand-Actuated Big Data Refinement
J. Data Intell. Pub Date : 2022-08-01 DOI: 10.26421/jdi3.3-5
Christoph Stach, Julia Bräcker, Rebecca Eichler, Corinna Giebler, B. Mitschang
{"title":"Simplified Specification of Data Requirements for Demand-Actuated Big Data Refinement","authors":"Christoph Stach, Julia Bräcker, Rebecca Eichler, Corinna Giebler, B. Mitschang","doi":"10.26421/jdi3.3-5","DOIUrl":"https://doi.org/10.26421/jdi3.3-5","url":null,"abstract":"Data have become one of the most valuable resources in modern society. Due to increasing digitalization and the growing prevalence of the Internet of Things, it is possible to capture data on any aspect of today's life. Similar to physical resources, data have to be refined before they can become a profitable asset. However, such data preparation entails completely novel challenges: For instance, data are not consumed when being processed, whereby the volume of available data that needs to be managed increases steadily. Furthermore, the data preparation has to be tailored to the intended use case in order to achieve an optimal outcome. This, however, requires the knowledge of domain experts. Since such experts are typically not IT experts, they need tools that enable them to specify the data requirements of their use cases in a user-friendly manner. The goal of this data preparation is to provide any emerging use case with demand-actuated data.}{With this in mind, we designed a tailorable data preparation zone for Data Lakes called BARENTS@. It provides a simplified method for domain experts to specify how data must be pre-processed for their use cases, and these data preparation steps are then applied automatically. The data requirements are specified by means of an ontology-based method which is comprehensible to non-IT experts. Data preparation and provisioning are realized resource-efficient by implementing BARENTS as a dedicated zone for Data Lakes. This way, BARENTS is seamlessly embeddable into established Big Data infrastructures.}{This article is an extended and revised version of the conference paper ``Demand-Driven Data Provisioning in Data Lakes: BARENTS,---,A Tailorable Data Preparation Zone'' by Stach~et~al.~cite{Stach2021}. In comparison to our original conference paper, we take a more detailed look at related work in the paper at hand. The emphasis of this extended and revised version, however, is on strategies to improve the performance of BARENTS and enhance its functionality. To this end, we discuss in-depth implementation details of our prototype and introduce a novel recommender system in BARENTS that assists users in specifying data preparation steps.","PeriodicalId":232625,"journal":{"name":"J. Data Intell.","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133148961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
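A hypothetical sketch of the declarative idea behind a data preparation zone like the one described above: a domain expert states which attributes a use case needs and which preparation steps apply, and generic code executes those steps. The spec format, operation names, and records are illustrative assumptions and not the BARENTS ontology or API.

```python
# Declarative data-preparation sketch: spec in, prepared records out.
from typing import Callable

# Requirement specification a non-IT expert could fill in.
SPEC = {
    "use_case": "recipe_quality_check",
    "attributes": ["temperature_c", "ph"],
    "prepare": [
        {"op": "convert_f_to_c", "attribute": "temperature_f", "to": "temperature_c"},
        {"op": "drop_missing", "attribute": "ph"},
    ],
}

def convert_f_to_c(record, step):
    # Unit conversion: Fahrenheit source attribute -> Celsius target attribute.
    if step["attribute"] in record:
        record[step["to"]] = round((record.pop(step["attribute"]) - 32) * 5 / 9, 2)
    return record

def drop_missing(record, step):
    # Filter out records whose required attribute is missing.
    return record if record.get(step["attribute"]) is not None else None

OPS: dict[str, Callable] = {"convert_f_to_c": convert_f_to_c, "drop_missing": drop_missing}

def prepare(records, spec):
    for rec in records:
        rec = dict(rec)                      # work on a copy
        for step in spec["prepare"]:
            rec = OPS[step["op"]](rec, step)
            if rec is None:                  # record was filtered out
                break
        if rec is not None:
            yield {k: rec[k] for k in spec["attributes"] if k in rec}

raw = [{"temperature_f": 212.0, "ph": 6.5}, {"temperature_f": 98.6, "ph": None}]
print(list(prepare(raw, SPEC)))              # -> [{'temperature_c': 100.0, 'ph': 6.5}]
```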
Evolutionary DRL Environment: Transfer Learning-Based Genetic Algorithm
J. Data Intell. Pub Date : 2022-08-01 DOI: 10.26421/jdi3.3-3
Badr Hirchoua, Imadeddine Mountasser, B. Ouhbi, B. Frikh
{"title":"Evolutionary DRL Environment: Transfer Learning-Based Genetic Algorithm","authors":"Badr Hirchoua, Imadeddine Mountasser, B. Ouhbi, B. Frikh","doi":"10.26421/jdi3.3-3","DOIUrl":"https://doi.org/10.26421/jdi3.3-3","url":null,"abstract":"Stock markets trading has risen as a critical challenge for artificial intelligence research. The way stock markets are moving and changing pushes researchers to find more sophisticated algorithms and strategies to anticipate the market movement and changes. From the artificial intelligence perspective, such environments require artificial agents to coordinate and transfer their best experience through different generations of agents. However, the existing agents are trained using hand-crafted expert features and expert capabilities. Notwithstanding these refinements, no previous single system has come near to dominating the trading environment. We address the algorithmic trading problem utilising an evolutive learning method. Precisely, we train a multi-agent reinforcement learning algorithm that uses only self trades generated by different generations of agents. The evolution-based genetic algorithm operates as an evolutive environment that continually adapts the agent's internal strategies and tactics. Also, it pushes the system forward to generate creative behaviours for the next generations. Additionally, a deep recurrent neural network drives the mutation mechanism through the attention that dynamically encodes the memory mutation size. The winner, which is the last agent, achieved promising performances and surpassed traditional and intelligent baselines.","PeriodicalId":232625,"journal":{"name":"J. Data Intell.","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114993922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
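The sketch below illustrates the evolutionary loop described above in a strongly simplified form: a population of trading policies (plain weight vectors) is scored on a toy price series, the best survive, and children are produced by Gaussian mutation. The attention-driven, learned mutation size from the paper is replaced by a fixed sigma, and the price series and linear policy are assumptions.

```python
# Simplified genetic algorithm over trading policies (not the paper's system).
import numpy as np

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 200)) + 100      # toy price series
WINDOW, POP, GENS, SIGMA = 5, 30, 20, 0.1

def fitness(w):
    """Cumulative profit of a linear long/short policy over sliding price windows."""
    pnl = 0.0
    for t in range(WINDOW, len(prices) - 1):
        signal = np.tanh(w @ prices[t - WINDOW:t])    # position in [-1, 1]
        pnl += signal * (prices[t + 1] - prices[t])
    return pnl

population = rng.normal(0, 0.1, (POP, WINDOW))
for gen in range(GENS):
    scores = np.array([fitness(w) for w in population])
    elite = population[np.argsort(scores)[-POP // 4:]]          # keep top 25%
    children = elite[rng.integers(0, len(elite), POP - len(elite))]
    children = children + rng.normal(0, SIGMA, children.shape)  # Gaussian mutation
    population = np.vstack([elite, children])

best = population[np.argmax([fitness(w) for w in population])]
print("best fitness:", fitness(best))
```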
Effect of Label Redundancy in Crowdsourcing for Training Machine Learning Models
J. Data Intell. Pub Date : 2022-08-01 DOI: 10.26421/jdi3.3-1
Ayame Shimizu, Kei Wakabayashi
{"title":"Effect of Label Redundancy in Crowdsourcing for Training Machine Learning Models","authors":"Ayame Shimizu, Kei Wakabayashi","doi":"10.26421/jdi3.3-1","DOIUrl":"https://doi.org/10.26421/jdi3.3-1","url":null,"abstract":"Crowdsourcing is widely utilized for collecting labeled examples to train supervised machine learning models, but the labels obtained from workers are considerably noisier than those from expert annotators. To address the noisy label issue, most researchers adopt the repeated labeling strategy, where multiple (redundant) labels are collected for each example and then aggregated. Although this improves the annotation quality, it decreases the amount of training data when the budget for crowdsourcing is limited, which is a negative factor in terms of the accuracy of the machine learning model to be trained. This paper empirically examines the extent to which repeated labeling contributes to the accuracy of machine learning models for image classification, named entity recognition and sentiment analysis under various conditions of budget and worker quality. We experimentally examined four hypotheses related to the effect of budget, worker quality, task difficulty, and redundancy on crowdsourcing. The results on image classification and named entity recognition supported all four hypotheses and suggested that repeated labeling almost always has a negative impact on machine learning when it comes to accuracy. Somewhat surprisingly, the results on sentiment analysis using pretrained models did not support the hypothesis which shows the possibility of remaining utilization of multiple-labeling.","PeriodicalId":232625,"journal":{"name":"J. Data Intell.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131238158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
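A small simulation of the budget-versus-redundancy trade-off studied above, under simplifying assumptions (synthetic data, uniform worker accuracy, majority-vote aggregation): with a fixed labeling budget, higher redundancy buys cleaner labels but fewer training examples.

```python
# Fixed-budget crowdsourcing simulation: redundancy r vs. number of examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

BUDGET, WORKER_ACC = 1200, 0.7   # total labels we can buy; per-label worker accuracy

def crowd_label(true_label, r):
    """Collect r noisy labels and aggregate them by majority vote."""
    votes = [true_label if rng.random() < WORKER_ACC else 1 - true_label
             for _ in range(r)]
    return int(np.bincount(votes).argmax())

for r in (1, 3, 5):
    n = BUDGET // r                               # examples we can afford
    y_noisy = np.array([crowd_label(t, r) for t in y_pool[:n]])
    clf = LogisticRegression(max_iter=1000).fit(X_pool[:n], y_noisy)
    print(f"redundancy={r}, examples={n}, test acc={clf.score(X_test, y_test):.3f}")
```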
Legal Party Extraction from Legal Opinion Texts Using Recurrent Deep Neural Networks
J. Data Intell. Pub Date : 2022-08-01 DOI: 10.26421/jdi3.3-4
Chamodi Samarawickrama, Melonie de Almeida, Nisansa de Silva, Gathika Ratnayaka, A. Perera
{"title":"Legal Party Extraction from Legal Opinion Texts Using Recurrent Deep Neural Networks","authors":"Chamodi Samarawickrama, Melonie de Almeida, Nisansa de Silva, Gathika Ratnayaka, A. Perera","doi":"10.26421/jdi3.3-4","DOIUrl":"https://doi.org/10.26421/jdi3.3-4","url":null,"abstract":"Since the advent of deep learning based Natural Language Processing (NLP), diverse domains of human society have benefited form automation and the resultant increment in efficiency. Law and order are, undoubtedly, crucial for the proper functioning of society; for without law there would be chaos, failing to offer equality to everyone. The legal domain being such a vital field, the incorporation of NLP into its workings has drawn attention in many research studies. This study attempts to leverage NLP into the task of extracting legal parties from legal opinion text documents. This task is of high importance given the significance of existing legal cases on contemporary cases under the legal practice, textit{case law}. This study proposes a novel deep learning methodology which can be effectively used to resolve the problem of identifying legal party members in legal documents. We present two models here, where the first is a BRNN model to detect whether an entity is a legal party or not, and a second, a modification of the same neural network, to classify the thus identified entities into petitioner and defendant classes. Furthermore, in this study, we introduce a novel data set which is annotated with legal party information by an expert in the legal domain. With the use of the said dataset, we have trained and evaluated our models where the experiments carried out support satisfactory performance of our solution. The deep learning model we hereby propose, provides a benchmark for the legal party identification task on this data set. Evaluations for the solution presented in the paper show that our system has 90.89% precision and 91.69% recall for legal party extraction from an unseen paragraph from a legal document. As for the classification of petitioners and defendants, we show that GRU-512 obtains the highest F1 score.","PeriodicalId":232625,"journal":{"name":"J. Data Intell.","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134428733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
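A minimal sketch, not the authors' model, of a bidirectional GRU tagger that labels each token of a legal sentence as petitioner, defendant, or other. The toy sentences, tag scheme, and layer sizes are illustrative assumptions.

```python
# Bidirectional GRU token tagger for legal party extraction (toy example).
import tensorflow as tf

sentences = [["smith", "filed", "a", "petition", "against", "jones"],
             ["the", "court", "heard", "brown", "versus", "green"]]
tags = [[1, 0, 0, 0, 0, 2],      # 0 = other, 1 = petitioner, 2 = defendant
        [0, 0, 0, 1, 0, 2]]

vocab = {w: i + 1 for i, w in enumerate(sorted({w for s in sentences for w in s}))}
MAXLEN = 8
X = tf.keras.preprocessing.sequence.pad_sequences(
    [[vocab[w] for w in s] for s in sentences], maxlen=MAXLEN)
y = tf.keras.preprocessing.sequence.pad_sequences(tags, maxlen=MAXLEN)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab) + 1, 32, mask_zero=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32, return_sequences=True)),
    tf.keras.layers.Dense(3, activation="softmax"),   # per-token class scores
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=10, verbose=0)
print(model.predict(X, verbose=0).argmax(-1))          # predicted tag per token
```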
The Impact of Data Completeness and Correctness on Explainable Machine Learning Models
J. Data Intell. Pub Date : 2022-05-01 DOI: 10.26421/jdi3.2-2
Shelernaz Azimi, C. Pahl
{"title":"The Impact of Data Completeness and Correctness on Explainable Machine Learning Models","authors":"Shelernaz Azimi, C. Pahl","doi":"10.26421/jdi3.2-2","DOIUrl":"https://doi.org/10.26421/jdi3.2-2","url":null,"abstract":"Many systems in the Edge Cloud, the Internet-of-Things or Cyber-Physical Systems are built for processing data, which is delivered from sensors and devices, transported, processed and consumed locally by actuators. This, given the regularly high volume of data, permits Artificial Intelligence (AI) strategies like Machine Learning (ML) to be used to generate the application and management functions needed. The quality of both source data and machine learning model is here unavoidably of high significance, yet has not been explored sufficiently as an explicit connection of the ML model quality that are created through ML procedures to the quality of data that the model functions consume in their construction. Here, we investigated the link between input data quality for ML function construction and the quality of these functions in data-driven software systems towards explainable model construction through an experimental approach with IoT data using decision trees.We have 3 objectives in this research: 1. Search for indicators that influence data quality such as correctness and completeness and model construction factors on accuracy, precision and recall. 2. Estimate the impact of variations in model construction and data quality. 3. Identify change patterns that can be attributed to specific input changes. This ultimately aims to support {em explainable AI}, i.e., the better understanding of how ML models work and what impacts on their quality.","PeriodicalId":232625,"journal":{"name":"J. Data Intell.","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115529251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
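The sketch below shows the kind of experiment the abstract describes, under simplifying assumptions (synthetic data, a simple corruption model): completeness defects are injected as missing values and correctness defects as flipped labels, and a decision tree's accuracy is measured for each combination.

```python
# Data-quality ablation for a decision tree: vary completeness and correctness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for missing_rate, flip_rate in [(0.0, 0.0), (0.2, 0.0), (0.0, 0.2), (0.2, 0.2)]:
    Xc, yc = X_tr.copy(), y_tr.copy()
    mask = rng.random(Xc.shape) < missing_rate          # completeness defect
    Xc[mask] = np.nan
    flips = rng.random(len(yc)) < flip_rate             # correctness defect
    yc[flips] = 1 - yc[flips]
    Xc = SimpleImputer(strategy="mean").fit_transform(Xc)
    acc = DecisionTreeClassifier(random_state=0).fit(Xc, yc).score(X_te, y_te)
    print(f"missing={missing_rate:.0%} flipped={flip_rate:.0%} -> accuracy={acc:.3f}")
```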
What Makes a Freemium Game Player Become a Paying Player
J. Data Intell. Pub Date : 2022-05-01 DOI: 10.26421/jdi3.2-1
Sandra Boric, Chris L. Strauss
{"title":"What Makes a Freemium Game Player Become a Paying Player","authors":"Sandra Boric, Chris L. Strauss","doi":"10.26421/jdi3.2-1","DOIUrl":"https://doi.org/10.26421/jdi3.2-1","url":null,"abstract":"This paper presents a derivation of freemium game players’ playing and paying motivations and demographic attributes by aggregating the results of 17 studies. For further characterization and a clear distinction from other gamer subgroups, this paper also contains an aggregation of playing motivations and demographic attributes of video game players in general, and of non-freemium game players. Our results suggest that socialization and competition are common motivations for playing a freemium game, and we derive enjoyment to be a particularly important playing motivation for freemium games. We further find that freemium game players who proceed to pay particularly name economic factors and applied, freemium game-specific mechanisms as motivations. Regarding demographics, while the studies which were analyzed to derive freemium gamers’ playing motivations have a dominance of female participants, the studies which were analyzed to derive freemium gamers’ paying motivations have mainly male participants. For analyses by both motivations and demographic attributes, we suggest a more differentiated picture including genre and platform considerations. For marketers and developers, we suggest a differentiation between markets, a mechanism transparency, and an emphasis on socialization in freemium games.","PeriodicalId":232625,"journal":{"name":"J. Data Intell.","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116093355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Extracting Experiment Statistics, Conditions, and Topics from Scientific Papers with STEREO
J. Data Intell. Pub Date : 2022-05-01 DOI: 10.26421/jdi3.2-4
S. Epp, Michael J. Hoffmann, N. Lell, M. Mohr, A. Scherp
{"title":"Extracting Experiment Statistics, Conditions, and Topics from Scientific Papers with STEREO","authors":"S. Epp, Michael J. Hoffmann, N. Lell, M. Mohr, A. Scherp","doi":"10.26421/jdi3.2-4","DOIUrl":"https://doi.org/10.26421/jdi3.2-4","url":null,"abstract":"We address the problem of extracting reports of statistics along with information about the experiment conditions and experiment topics from scientific publications. A common writing style for statistical results are the recommendations of the American Psychology Association (APA). In practice, writing styles vary as reports are not 100% following APA-style or parameters are not reported despite being mandatory. In addition, the statistics are not reported in isolation but in context of experiment conditions investigated and the general experiment topic. We address these challenges by proposing a flexible pipeline STEREO based on wrapper induction and unsupervised aspect detection to extract experiment statistics, conditions, and topics. Thus, in contrast to existing rule-based tools like statcheck with a pre-defined set of rules, we learn rules via induction. Hierarchical wrapper induction is applied to learn rules to extract the reported statistics. Challenge here is to apply wrapper induction on an information extraction task without having formatting landmarks as they can be exploited in HTML pages. Result of step 1 is a set of extracted statistic reports together with sentences in which the reports were found. This is used as input to step 2 of STEREO, which has two parts. We extract experiment conditions using a grammar-based wrapper. Furthermore, we identify the experiment topic using an unsupervised attention-based aspect extraction approach adapted to our problem domain. We applied our pipeline to the over 100,000 documents in the CORD-19 dataset. It required only 0.25% of the CORD-19 corpus (about 500 documents) to learn statistics extraction rules that cover 95% of the sentences in CORD-19. The statistic extraction has 100% precision on APA-conform statistics, which is identical with statcheck. In addition, STEREO can extract non-APA writing styles with 95% precision, which statcheck does not support. Extracting non-APA conform statistics is important as they make more than 99% of all $113$k extracted statistics. We could extract in 46% the correct conditions from APA-conform reports (30% for non-APA). The best model for topic extraction achieves a precision of 75% on statistics reported in APA style $73% for non-APA conform). We conclude that STEREO is a good foundation for automatic statistic extraction and future developments for scientific paper analysis. Particularly the extraction of non-APA conform reports is important and allows applications such as giving feedback to authors about what is missing and could be changed. Finally, STEREO complements existing metadata extraction tools and can be integrated in a general scientific paper analysis pipeline.","PeriodicalId":232625,"journal":{"name":"J. 
Data Intell.","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128863504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
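For illustration of the extraction target only: a hand-written regular expression that captures APA-style statistic reports such as "t(38) = 2.35, p < .05". STEREO itself learns its extraction rules via wrapper induction rather than relying on a fixed pattern like this, so the sketch is closer in spirit to what rule-based tools such as statcheck do.

```python
# Hand-written regex for APA-style statistic reports (illustration, not STEREO's induced rules).
import re

APA_STAT = re.compile(
    r"\b(?P<test>[tFrZ]|chi2)"        # test statistic symbol
    r"\s*(?:\((?P<df>[^)]*)\))?"      # optional degrees of freedom
    r"\s*=\s*(?P<value>-?\d+\.?\d*)"  # statistic value
    r"\s*,\s*p\s*(?P<rel>[<>=])\s*(?P<p>\.?\d+\.?\d*)",  # p-value clause
)

text = ("Group A outperformed Group B, t(38) = 2.35, p < .05, "
        "and the interaction was not significant, F(2, 114) = 1.07, p = .35.")

for m in APA_STAT.finditer(text):
    print(m.group("test"), m.group("df"), m.group("value"), m.group("rel"), m.group("p"))
```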
A Digital Platform for Sharing Collective Human Hearing
J. Data Intell. Pub Date : 2022-05-01 DOI: 10.26421/jdi3.2-3
Risa Kimura, Tatsuoki Nakajima
{"title":"A Digital Platform for Sharing Collective Human Hearing","authors":"Risa Kimura, Tatsuoki Nakajima","doi":"10.26421/jdi3.2-3","DOIUrl":"https://doi.org/10.26421/jdi3.2-3","url":null,"abstract":"People are hearing various natural and artificial sounds, but it is hard to imagine how hearing the sounds from people collectively can be effectively used in our everyday lives if those sounds become sharable. The sharing economy, which uses digital technologies to share a variety of physical resources in a peer-to-peer manner, has been attracting attention in recent years. Investigating the feasibility of sharing human hearing offers promising opportunities with which to expand the current scope of the sharing economy. In this study, we have developed a digital platform named CollectiveEars to share collective human hearing and explore the opportunities and pitfalls of sharing human physical senses on a digital platform. The first contribution of the study is to present an overview of CollectiveEars. The second contribution is that we reveal opportunities and pitfalls of CollectiveEars by extracting insights from two experiments. The third contribution is to show three examples to extend CollectiveEars.","PeriodicalId":232625,"journal":{"name":"J. Data Intell.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114472434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2