Latest Publications in Data & Knowledge Engineering

VarClaMM: A reference meta-model to understand DNA variant classification
IF 2.7 · CAS Tier 3 · Computer Science
Data & Knowledge Engineering Pub Date : 2024-11-01 DOI: 10.1016/j.datak.2024.102370
Mireia Costa , Alberto García S. , Ana León , Anna Bernasconi , Oscar Pastor
{"title":"VarClaMM: A reference meta-model to understand DNA variant classification","authors":"Mireia Costa ,&nbsp;Alberto García S. ,&nbsp;Ana León ,&nbsp;Anna Bernasconi ,&nbsp;Oscar Pastor","doi":"10.1016/j.datak.2024.102370","DOIUrl":"10.1016/j.datak.2024.102370","url":null,"abstract":"<div><div>Determining the significance of a DNA variant in patients’ health status – a complex process known as <em>variant classification</em> – is highly critical for precision medicine applications. However, there is still debate on how to combine and weigh diverse available evidence to achieve proper and consistent conclusions. Indeed, currently, there are more than 200 different variant classification guidelines available to the scientific community, aiming to establish a framework for standardizing the classification process. Yet, these guidelines are qualitative and vague by nature, hindering their practical application and potential automation. Consequently, more precise definitions are needed.</div><div>In this work, we discuss our efforts to create VarClaMM, a UML meta-model that aims to provide a clear specification of the key concepts involved in variant classification, serving as a common framework for the process. Through this accurate characterization of the domain, we were able to find contradictions or inconsistencies that might have an effect on the classification results. VarClaMM’s conceptualization efforts will lay the ground for the operationalization of variant classification, enabling any potential automation to be based on precise definitions.</div></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"154 ","pages":"Article 102370"},"PeriodicalIF":2.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142573531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
NoSQL document data migration strategy in the context of schema evolution
IF 2.7 · CAS Tier 3 · Computer Science
Data & Knowledge Engineering Pub Date : 2024-11-01 DOI: 10.1016/j.datak.2024.102369
Solomiia Fedushko , Roman Malyi , Yuriy Syerov , Pavlo Serdyuk
{"title":"NoSQL document data migration strategy in the context of schema evolution","authors":"Solomiia Fedushko ,&nbsp;Roman Malyi ,&nbsp;Yuriy Syerov ,&nbsp;Pavlo Serdyuk","doi":"10.1016/j.datak.2024.102369","DOIUrl":"10.1016/j.datak.2024.102369","url":null,"abstract":"<div><div>In Agile development, one approach cannot be chosen and used all the time. Constant updates and strategy changes are necessary. We want to show that combining several migration strategies is better than choosing only one. Also, we emphasize the need to consider the type of schema change. This paper introduces a novel approach designed to optimize the migration process for NoSQL databases. The approach represents a significant advancement in migration strategy planning, providing a quantitative framework to guide decision-making. By incorporating critical factors such as schema changes, database size, the necessity of data in search functionalities, and potential latency issues, the approach comprehensively evaluates the migration feasibility and identifies the optimal migration path. Unlike existing methodologies, this approach adapts to the dynamic nature of NoSQL databases, offering a scalable and flexible approach to migration planning.</div></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"154 ","pages":"Article 102369"},"PeriodicalIF":2.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142554233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
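A minimal, hypothetical Python sketch of the kind of quantitative decision rule the abstract above describes, combining schema-change type, collection size, search relevance, and latency sensitivity; the strategy names, thresholds, and rules are illustrative assumptions, not taken from the paper:

# Illustrative only: a toy decision rule over the factors named in the abstract
# (schema-change type, collection size, search relevance, latency sensitivity).
# Strategy names, thresholds, and rules are hypothetical, not the paper's.

EAGER, LAZY, HYBRID = "eager", "lazy", "hybrid"

def choose_migration_strategy(change_type: str, doc_count: int,
                              used_in_search: bool, latency_sensitive: bool) -> str:
    # Additive changes (e.g. a new optional field) can usually be applied lazily.
    if change_type == "add_field" and not used_in_search:
        return LAZY
    # Renames/removals that search queries depend on should be migrated eagerly
    # so that indexes and queries stay consistent.
    if used_in_search and change_type in {"rename_field", "remove_field"}:
        return EAGER
    # For very large collections on latency-sensitive workloads, migrate hot
    # documents on access and backfill the rest in the background.
    if doc_count > 10_000_000 and latency_sensitive:
        return HYBRID
    return EAGER

print(choose_migration_strategy("add_field", 5_000_000, False, True))     # lazy
print(choose_migration_strategy("rename_field", 50_000_000, True, True))  # eager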
Change pattern relationships in event logs
IF 2.7 · CAS Tier 3 · Computer Science
Data & Knowledge Engineering Pub Date : 2024-10-15 DOI: 10.1016/j.datak.2024.102368
Jonas Cremerius, Hendrik Patzlaff, Mathias Weske
{"title":"Change pattern relationships in event logs","authors":"Jonas Cremerius,&nbsp;Hendrik Patzlaff,&nbsp;Mathias Weske","doi":"10.1016/j.datak.2024.102368","DOIUrl":"10.1016/j.datak.2024.102368","url":null,"abstract":"<div><div>Process mining utilises process execution data to discover and analyse business processes. Event logs represent process executions, providing information about the activities executed. In addition to generic event attributes like activity name and timestamp, events might contain domain-specific attributes, such as a blood sugar measurement in a healthcare environment. Many of these values change during a typical process quite frequently. We refer to those as dynamic event attributes. Change patterns can be derived from dynamic event attributes, describing if the attribute values change from one activity to another. So far, change patterns can only be identified in an isolated manner, neglecting the chance of finding co-occuring change patterns. This paper provides an approach to identifying relationships between change patterns by utilising correlation methods from statistics. We applied the proposed technique on two event logs derived from the MIMIC-IV real-world dataset on hospitalisations in the US and evaluated the results with a medical expert. It turns out that relationships between change patterns can be detected within the same directly or eventually follows relation and even beyond that. Further, we identify unexpected relationships that are occurring only at certain parts of the process. Thus, the process perspective reveals novel insights on how dynamic event attributes change together during process execution. The approach is implemented in Python using the PM4Py framework.</div></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"154 ","pages":"Article 102368"},"PeriodicalIF":2.7,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142533491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
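To illustrate the idea of correlating change patterns of dynamic event attributes, here is a small Python/pandas sketch; the column names, example values, and the choice of Spearman correlation are assumptions for illustration, and the paper's actual implementation builds on PM4Py rather than this simplified pivot:

# Illustrative sketch: correlate per-case change patterns of two dynamic event
# attributes between a pair of activities. Example data and the choice of
# Spearman correlation are assumptions; the paper's implementation uses PM4Py.
import pandas as pd

events = pd.DataFrame({
    "case_id":    [1, 1, 2, 2, 3, 3],
    "activity":   ["Admission", "Ward", "Admission", "Ward", "Admission", "Ward"],
    "glucose":    [110, 95, 180, 150, 130, 128],
    "creatinine": [1.0, 1.1, 2.0, 1.6, 1.2, 1.2],
})

def change_per_case(df, attribute, src="Admission", dst="Ward"):
    # Value difference of `attribute` from activity `src` to activity `dst`.
    pivot = df.pivot_table(index="case_id", columns="activity", values=attribute)
    return pivot[dst] - pivot[src]

delta_glucose = change_per_case(events, "glucose")
delta_creatinine = change_per_case(events, "creatinine")

# A strong correlation suggests the two change patterns co-occur across cases.
print(delta_glucose.corr(delta_creatinine, method="spearman"))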
Strategic redesign of business processes in the digital age: A framework
IF 2.7 · CAS Tier 3 · Computer Science
Data & Knowledge Engineering Pub Date : 2024-10-05 DOI: 10.1016/j.datak.2024.102367
Fredrik Milani, Kateryna Kubrak, Juuli Nava
{"title":"Strategic redesign of business processes in the digital age: A framework","authors":"Fredrik Milani,&nbsp;Kateryna Kubrak,&nbsp;Juuli Nava","doi":"10.1016/j.datak.2024.102367","DOIUrl":"10.1016/j.datak.2024.102367","url":null,"abstract":"<div><div>Organizations constantly seek ways to improve their business processes by using digital technologies as enablers. However, simply substituting an existing technology with a new one has limited value compared to using the capabilities of digital technologies to redesign business processes. Therefore, process analysts try to understand how the capabilities of digital technologies can enable the redesign of business processes. In this paper, we conduct a systematic literature review and examine 40 case studies where digital technologies were used to redesign business processes. We identified that, within the context of business process improvement, capabilities of digitalization, communication, analytics, digital representation, and connectivity can enable business process redesign. Furthermore, we note that these capabilities enable applying nine redesign heuristics. Based on our review, we map how each capability can facilitate the implementation of specific redesign heuristics. Finally, we illustrate how such a capability-driven approach can be applied to Metaverse as an example of a digital technology. Our mapping and classification framework can aid analysts in identifying candidate redesigns that capitalize on the capabilities of digital technologies.</div></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"154 ","pages":"Article 102367"},"PeriodicalIF":2.7,"publicationDate":"2024-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142422061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Timed alignments with mixed moves
IF 2.7 · CAS Tier 3 · Computer Science
Data & Knowledge Engineering Pub Date : 2024-09-28 DOI: 10.1016/j.datak.2024.102366
Neha Rino , Thomas Chatain
{"title":"Timed alignments with mixed moves","authors":"Neha Rino ,&nbsp;Thomas Chatain","doi":"10.1016/j.datak.2024.102366","DOIUrl":"10.1016/j.datak.2024.102366","url":null,"abstract":"<div><div>We study conformance checking for timed models, that is, process models that consider both the sequence of events that occur, as well as the timestamps at which each event is recorded. Time-aware process mining is a growing subfield of research, and as tools that seek to discover timing-related properties in processes develop, so does the need for conformance-checking techniques that can tackle time constraints and provide insightful quality measures for time-aware process models. One of the most useful conformance artefacts is the alignment, that is, finding the minimal changes necessary to correct a new observation to conform to a process model. In this paper, we extend the notion of timed distance from a previous work where an edit on an event’s timestamp came in two types, depending on whether or not it would propagate to its successors. Here, these different types of edits have a weighted cost each, and the ratio of their costs is denoted by <span><math><mi>α</mi></math></span>. We then solve the purely timed alignment problem in this setting for a large class of these weighted distances (corresponding to <span><math><mrow><mi>α</mi><mo>∈</mo><mrow><mo>{</mo><mn>1</mn><mo>}</mo></mrow><mo>∪</mo><mrow><mo>[</mo><mn>2</mn><mo>,</mo><mi>∞</mi><mo>)</mo></mrow></mrow></math></span>). For these distances, we provide linear time algorithms for both distance computation and alignment on models with sequential causal processes.</div></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"154 ","pages":"Article 102366"},"PeriodicalIF":2.7,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142422058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
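One possible way to write such a mixed-move cost, offered purely as a reading aid and not as the paper's formal definition (in particular, which of the two move types carries the factor α is a choice made here only for concreteness), is the following LaTeX sketch, where a local edit shifts one timestamp and a propagating edit shifts a timestamp together with all its successors:

% Illustrative reconstruction only; the paper's formal definition may differ.
d_\alpha(\sigma, \sigma') = \min_{e_1, \dots, e_k \,:\, e_k \circ \dots \circ e_1(\sigma) = \sigma'} \sum_{i=1}^{k} c(e_i),
\qquad
c(e_i) =
\begin{cases}
|\delta_i| & \text{if } e_i \text{ shifts a single timestamp by } \delta_i,\\
\alpha\,|\delta_i| & \text{if } e_i \text{ shifts a timestamp and all its successors by } \delta_i,
\end{cases}
\qquad \alpha \in \{1\} \cup [2, \infty).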
State-transition-aware anomaly detection under concept drifts
IF 2.7 · CAS Tier 3 · Computer Science
Data & Knowledge Engineering Pub Date : 2024-09-28 DOI: 10.1016/j.datak.2024.102365
Bin Li, Shubham Gupta, Emmanuel Müller
{"title":"State-transition-aware anomaly detection under concept drifts","authors":"Bin Li,&nbsp;Shubham Gupta,&nbsp;Emmanuel Müller","doi":"10.1016/j.datak.2024.102365","DOIUrl":"10.1016/j.datak.2024.102365","url":null,"abstract":"<div><div>Detecting temporal abnormal patterns over streaming data is challenging due to volatile data properties and the lack of real-time labels. The abnormal patterns are usually hidden in the temporal context, which cannot be detected by evaluating single points. Furthermore, the normal state evolves over time due to concept drifts. A single model does not fit all data over time. Autoencoders have recently been applied for unsupervised anomaly detection. However, they are trained on a single normal state and usually become invalid after distributional drifts in the data stream. This paper uses an Autoencoder-based approach STAD for anomaly detection under concept drifts. In particular, we propose a state-transition-aware model to map different data distributions in each period of the data stream into states, thereby addressing the model adaptation problem in an interpretable way. In addition, we analyzed statistical tests to detect the drift by examining the sensitivity and powers. Furthermore, we present considerable ways to estimate the probability density function for comparing the distributional similarity for state transitions. Our experiments evaluate the proposed method on synthetic and real-world datasets. While delivering comparable anomaly detection performance as the state-of-the-art approaches, STAD works more efficiently and provides extra interpretability. We also provide insightful analysis of optimal hyperparameters for efficient model training and adaptation.</div></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"154 ","pages":"Article 102365"},"PeriodicalIF":2.7,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142422060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
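As a concrete illustration of the drift-detection step mentioned in the abstract, the following Python sketch compares a reference window with a recent window using a two-sample Kolmogorov-Smirnov test; the choice of test, the window sizes, and the significance level are assumptions here, not necessarily those analysed for STAD:

# Illustrative drift-detection sketch: compare a reference window against the
# most recent window with a two-sample KS test. Test choice, window size, and
# significance level are assumptions, not necessarily STAD's.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=500)  # data from the current state
recent    = rng.normal(loc=1.5, scale=1.0, size=500)  # stream after a drift

stat, p_value = ks_2samp(reference, recent)
if p_value < 0.01:
    # A new state is indicated; a state-transition-aware detector would switch
    # to (or train) the model associated with this distribution.
    print(f"Drift detected (KS statistic={stat:.3f}, p={p_value:.2e})")
else:
    print("No significant distributional change")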
Reasoning on responsibilities for optimal process alignment computation
IF 2.7 · CAS Tier 3 · Computer Science
Data & Knowledge Engineering Pub Date : 2024-09-19 DOI: 10.1016/j.datak.2024.102353
Matteo Baldoni, Cristina Baroglio, Elisa Marengo, Roberto Micalizio
{"title":"Reasoning on responsibilities for optimal process alignment computation","authors":"Matteo Baldoni,&nbsp;Cristina Baroglio,&nbsp;Elisa Marengo,&nbsp;Roberto Micalizio","doi":"10.1016/j.datak.2024.102353","DOIUrl":"10.1016/j.datak.2024.102353","url":null,"abstract":"<div><p>Process alignment aims at establishing a matching between a process model run and a log trace. To improve such a matching, process alignment techniques often exploit contextual conditions to enable computations that are more informed than the simple edit distance between model runs and log traces. The paper introduces a novel approach to process alignment which relies on contextual information expressed as <em>responsibilities</em>. The notion of responsibility is fundamental in business and organization models, but it is often overlooked. We show the computation of optimal alignments can take advantage of responsibilities. We leverage on them in two ways. First, responsibilities may sometimes justify deviations. In these cases, we consider them as correct behaviors rather than errors. Second, responsibilities can either be met or neglected in the execution of a trace. Thus, we prefer alignments where neglected responsibilities are minimized.</p><p>The paper proposes a formal framework for responsibilities in a process model, including the definition of cost functions for computing optimal alignments. We also propose a branch-and-bound algorithm for optimal alignment computation and exemplify its usage by way of two event logs from real executions.</p></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"154 ","pages":"Article 102353"},"PeriodicalIF":2.7,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0169023X24000776/pdfft?md5=df35ebc627d0abaf942b9666c2d2c159&pid=1-s2.0-S0169023X24000776-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142271815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
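The following toy Python sketch illustrates only the cost intuition from the abstract, namely that a deviation justified by a responsibility should not be penalized; it is a plain edit-distance recursion with invented names and costs, not the paper's formal framework or its branch-and-bound algorithm:

# Toy illustration: alignment cost between a model run and a log trace where a
# deviation is free if some responsibility justifies it. Names and costs are
# invented; this is not the paper's framework or algorithm.
from functools import lru_cache

def alignment_cost(model_run, trace, justified_deviations, move_cost=1.0):
    @lru_cache(maxsize=None)
    def cost(i, j):
        if i == len(model_run) and j == len(trace):
            return 0.0
        options = []
        if i < len(model_run) and j < len(trace) and model_run[i] == trace[j]:
            options.append(cost(i + 1, j + 1))            # synchronous move
        if i < len(model_run):
            skip = 0.0 if model_run[i] in justified_deviations else move_cost
            options.append(skip + cost(i + 1, j))         # model move
        if j < len(trace):
            skip = 0.0 if trace[j] in justified_deviations else move_cost
            options.append(skip + cost(i, j + 1))         # log move
        return min(options)
    return cost(0, 0)

print(alignment_cost(("a", "b", "c"), ("a", "c", "d"),
                     justified_deviations={"d"}))   # 1.0: only skipping "b" costs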
Big data classification using SpinalNet-Fuzzy-ResNeXt based on spark architecture with data mining approach
IF 2.7 · CAS Tier 3 · Computer Science
Data & Knowledge Engineering Pub Date : 2024-09-17 DOI: 10.1016/j.datak.2024.102364
M. Robinson Joel , K. Rajakumari , S. Anu Priya , M. Navaneethakrishnan
{"title":"Big data classification using SpinalNet-Fuzzy-ResNeXt based on spark architecture with data mining approach","authors":"M. Robinson Joel ,&nbsp;K. Rajakumari ,&nbsp;S. Anu Priya ,&nbsp;M. Navaneethakrishnan","doi":"10.1016/j.datak.2024.102364","DOIUrl":"10.1016/j.datak.2024.102364","url":null,"abstract":"<div><div>In the modern networking topology, big data is highly essential for several domains like e-commerce, healthcare, and finance. Big data classification has offered effectual performance in several applications. Still, big data classification is highly difficult and the recognized classification approaches require a longer duration and numerous resources for executing the accessible data. For resolving such issues, the spark-based classification approach is required. In this work, the hybrid SpinalNet-Fuzzy-ResNeXt model called SFResNeXt is implemented to classify the big data. Here, the SpinalNet and ResNeXt are merged, where the layers are fused with the fuzzy concept. The initial process is the outlier detection. The Holoentrophy method is used to detect the outlier data, and it is removed. Moreover, duplicate detection is performed by fingerprinting approach to detect the repeated data. The, Association Rule Mining (ARM) method is employed for feature selection. The big data is classified by the SFResNeXt. Furthermore, the SFResNeXt-based big data classification offered the accuracy, sensitivity, and specificity of 0.905, 0.914, and 0.922 using the heart disease dataset.</div></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"154 ","pages":"Article 102364"},"PeriodicalIF":2.7,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142422059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
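As a small illustration of the duplicate-detection step mentioned in the abstract, the Python sketch below fingerprints records by hashing their normalized field values and drops repeats; it is plain Python with invented field names, whereas the paper performs this step inside a Spark-based pipeline whose exact fingerprinting scheme is not reproduced here:

# Illustrative duplicate detection: fingerprint each record by hashing its
# normalized field values and keep only the first occurrence. Field names are
# invented; the paper runs this step within a Spark pipeline.
import hashlib
import json

def fingerprint(record: dict) -> str:
    # Normalize key order and separators so equal records hash identically.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        fp = fingerprint(rec)
        if fp not in seen:
            seen.add(fp)
            unique.append(rec)
    return unique

rows = [{"age": 63, "chol": 233}, {"chol": 233, "age": 63}, {"age": 41, "chol": 204}]
print(len(deduplicate(rows)))   # 2: the first two rows are the same record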
SRank: Guiding schema selection in NoSQL document stores
IF 2.7 · CAS Tier 3 · Computer Science
Data & Knowledge Engineering Pub Date : 2024-09-14 DOI: 10.1016/j.datak.2024.102360
Shelly Sachdeva , Neha Bansal , Hardik Bansal
{"title":"SRank: Guiding schema selection in NoSQL document stores","authors":"Shelly Sachdeva ,&nbsp;Neha Bansal ,&nbsp;Hardik Bansal","doi":"10.1016/j.datak.2024.102360","DOIUrl":"10.1016/j.datak.2024.102360","url":null,"abstract":"<div><div>The rise of big data has led to a greater need for applications to change their schema frequently. NoSQL databases provide flexibility in organizing data and offer multiple choices for structuring and storing similar information. While schema flexibility speeds up initial development, choosing schemas wisely is crucial, as they significantly impact performance, affecting data redundancy, navigation cost, data access cost, and maintainability. This paper emphasizes the importance of schema design in NoSQL document stores. It proposes a model to analyze and evaluate different schema alternatives and suggest the best schema out of various schema alternatives. The model is divided into four phases. The model inputs the Entity-Relationship (ER) model and workload queries. In the Transformation Phase, the schema alternatives are initially developed for each ER model, and subsequently, a schema graph is generated for each alternative. Concurrently, workload queries undergo conversion into query graphs. In the Schema Evaluation phase, the Schema Rank (SRank) is calculated for each schema alternative using query metrics derived from the query graphs and path coverage generated from the schema graphs. Finally, in the Output phase, the schema with the highest SRank is recommended as the most suitable choice for the application. The paper includes a case study of a Hotel Reservation System (HRS) to demonstrate the application of the proposed model. It comprehensively evaluates various schema alternatives based on query response time, storage efficiency, scalability, throughput, and latency. The paper validates the SRank computation for schema selection in NoSQL databases through an extensive experimental study. The alignment of SRank values with each schema's performance metrics underscores this ranking system's effectiveness. The SRank simplifies the schema selection process, assisting users in making informed decisions by reducing the time, cost, and effort of identifying the optimal schema for NoSQL document stores.</div></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"154 ","pages":"Article 102360"},"PeriodicalIF":2.7,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142311715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
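Since the abstract does not give the SRank formula, the following Python sketch only illustrates the general idea of ranking schema alternatives by combining workload query metrics with path coverage; the normalization, the equal weights, and the example schema variants are assumptions, not the paper's definition:

# Illustrative only: rank schema alternatives by combining per-query cost with
# path coverage. Normalization, weights, and example variants are assumptions.
def srank(query_costs, path_coverage, w_cost=0.5, w_coverage=0.5):
    # Lower aggregate query cost and higher path coverage are both better.
    avg_cost = sum(query_costs) / len(query_costs)
    normalized_cost = 1.0 / (1.0 + avg_cost)          # maps cost into (0, 1]
    return w_cost * normalized_cost + w_coverage * path_coverage

alternatives = {
    # hypothetical HRS schema variants: (estimated cost per workload query, coverage)
    "embedded_guest_in_reservation": ([2.0, 1.0, 3.0], 0.9),
    "referenced_collections":        ([4.0, 2.5, 2.0], 0.7),
}
best = max(alternatives, key=lambda name: srank(*alternatives[name]))
print(best)   # the schema alternative with the highest SRank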
Relating behaviour of data-aware process models
IF 2.7 · CAS Tier 3 · Computer Science
Data & Knowledge Engineering Pub Date : 2024-09-12 DOI: 10.1016/j.datak.2024.102363
Marco Montali, Sarah Winkler
{"title":"Relating behaviour of data-aware process models","authors":"Marco Montali,&nbsp;Sarah Winkler","doi":"10.1016/j.datak.2024.102363","DOIUrl":"10.1016/j.datak.2024.102363","url":null,"abstract":"<div><p>Data Petri nets (DPNs) have gained traction as a model for data-aware processes, thanks to their ability to balance simplicity with expressiveness, and because they can be automatically discovered from event logs. While model checking techniques for DPNs have been studied, more complex analysis tasks that are highly relevant for BPM are beyond methods known in the literature. We focus here on equivalence and inclusion of process behaviour with respect to language and configuration spaces, optionally taking data into account. Such comparisons are important in the context of key process mining tasks, namely process repair and discovery, and related to conformance checking. To solve these tasks, we propose approaches for bounded DPNs based on <em>constraint graphs</em>, which are faithful abstractions of the reachable state space. Though the considered verification tasks are undecidable in general, we show that our method is a decision procedure DPNs that admit a <em>finite history set</em>. This property guarantees that constraint graphs are finite and computable, and was shown to hold for large classes of DPNs that are mined automatically, and DPNs presented in the literature. The new techniques are implemented in the tool <span>ada</span>, and an evaluation proving feasibility is provided.</p></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"154 ","pages":"Article 102363"},"PeriodicalIF":2.7,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0169023X24000879/pdfft?md5=ee932b18bac18fd1e3c1e769269d7d67&pid=1-s2.0-S0169023X24000879-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142241277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0