BenchCouncil Transactions on Benchmarks, Standards and Evaluations: Latest Articles

Open-source computer systems initiative: The motivation, essence, challenges, and methodology
BenchCouncil Transactions on Benchmarks, Standards and Evaluations Pub Date: 2022-03-01 DOI: 10.1016/j.tbench.2022.100038
Jianfeng Zhan
The global community faces many pressing and uncertain challenges, such as pandemics and global climate change. Information technology (IT) infrastructure has become the enabler for addressing those challenges. Unfortunately, IT decoupling has distracted and weakened the international community's ability to handle them.

This article initiates an open-source computer system (OSCS) initiative to tackle the challenges of IT decoupling. The OSCS movement is where open-source software converges with open-source hardware. Its essence is to utilize the inherent characteristics of a class of representative workloads and to propose innovative abstractions and methodology to co-explore the software and hardware design spaces of high-end computer systems, attaining peak performance, security, and other fundamental dimensions. I discuss its four challenges: system complexity; the tradeoff between universal and ideal systems; guaranteeing the quality of computation results and performance under different conditions (e.g., best-case, worst-case, or average-case); and balancing legal, patent, and license issues.

Inspired by the philosophy of building large systems out of smaller functions, I propose the funclet abstraction and methodology to tackle the first challenge. A funclet is a well-defined, evolvable, reusable, independently deployable, and testable functionality with modest complexity. Each funclet interoperates with other funclets through standard bus interfaces or interconnections. Four funclet building blocks (chiplet, HWlet, envlet, and servlet, at the chip, hardware, environment-management, and service layers) form the four-layer funclet architecture. The advantages of the funclet abstraction and architecture are discussed. The project's website is publicly available at https://www.opensourcecomputer.org or https://www.computercouncil.org.

Citations: 0
Call for consistent benchmarking across multi-disciplines
BenchCouncil Transactions on Benchmarks, Standards and Evaluations Pub Date: 2021-12-01 DOI: 10.1016/j.tbench.2021.100012
Jianfeng Zhan
Citations: 2
Performance optimization opportunities in the Android software stack
BenchCouncil Transactions on Benchmarks, Standards and Evaluations Pub Date: 2021-10-01 DOI: 10.1016/j.tbench.2021.100003
Varun Gohil, Nisarg Ujjainkar, Joycee Mekie, Manu Awasthi
The smartphone hardware and software ecosystems have evolved very rapidly. Multiple innovations in system software, including the OS, languages, and runtimes, have been made in the last decade. Although performance characterization of the microarchitecture has been done, little analysis is available for the application performance bottlenecks of the system software stack, especially for contemporary applications on mobile operating systems.

In this work, we perform system utilization analysis from a software perspective, thereby supplementing the hardware perspective offered by prior work. We focus our analysis on Android-powered smartphones running newer versions of Android. Using 11 representative apps and regions of interest within them, we carry out performance analysis of the entire Android software stack to identify system performance bottlenecks.

We observe that for the majority of apps, the most time-consuming system-level thread is a frame-rendering thread. More surprisingly, our results indicate that all apps spend a significant amount of time doing Inter-Process Communication (IPC), hinting that the Android IPC stack is a ripe target for performance optimization via software development and a potential target for hardware acceleration.
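The kind of thread-level attribution described in this abstract reduces to aggregating busy time per thread from trace slices. A minimal sketch follows; the thread names and durations are illustrative placeholders (roughly what a post-processed systrace/Perfetto export might contain), not data from the paper:

```python
from collections import defaultdict

# Illustrative (thread_name, duration_ms) slices; made up for this sketch.
slices = [
    ("RenderThread", 6.2), ("Binder:1234_2", 1.1), ("RenderThread", 5.8),
    ("hwuiTask0", 0.9), ("Binder:1234_2", 1.4), ("main", 3.3),
]

def thread_time_share(slices):
    """Aggregate busy time per thread and its share of total busy time,
    sorted with the most time-consuming thread first."""
    totals = defaultdict(float)
    for name, dur in slices:
        totals[name] += dur
    grand = sum(totals.values())
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, t, t / grand) for name, t in ranked]

print(thread_time_share(slices)[0][0])  # dominant thread in this toy trace
```

Ranking threads by their share of total busy time is what surfaces observations like "the frame-rendering thread dominates" or "IPC time is significant across all apps".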
Citations: 0
MVDI25K: A large-scale dataset of microscopic vaginal discharge images
BenchCouncil Transactions on Benchmarks, Standards and Evaluations Pub Date: 2021-10-01 DOI: 10.1016/j.tbench.2021.100008
Lin Li, Jingyi Liu, Fei Yu, Xunkun Wang, Tian-Zhu Xiang
With the widespread application of artificial intelligence in biomedical imaging, deep learning-based detection of vaginal discharge, an important but challenging topic in medical image processing, has drawn increasing research interest. Although the past few decades have witnessed major advances in object detection for natural scenes, such successes have been slow to reach medical images, not only because of the complex backgrounds and diverse cell morphology in microscope images, but also because of the scarcity of well-annotated object datasets in medical imaging. Until now, in most hospitals in China, vaginal diseases have often been checked by manual observation of cell morphology under the microscope, or by inspectors observing color reaction experiments, which is time-consuming, inefficient, and easily influenced by subjective factors. To this end, we construct the first large-scale dataset of microscopic vaginal discharge images, named MVDI25K, which consists of 25,708 images covering 10 cell categories related to vaginal discharge detection. All images in the MVDI25K dataset are carefully annotated by experts with bounding-box and object-level labels. In addition, we conduct systematic benchmark experiments on the MVDI25K dataset with 10 representative state-of-the-art (SOTA) deep models, focusing on two key tasks: object detection and object segmentation. Our research offers the community an opportunity to explore this new field further.

Citations: 4
A parallel sparse approximate inverse preconditioning algorithm based on MPI and CUDA
BenchCouncil Transactions on Benchmarks, Standards and Evaluations Pub Date: 2021-10-01 DOI: 10.1016/j.tbench.2021.100007
Yizhou Wang, Wenhao Li, Jiaquan Gao
In this study, we present an efficient parallel sparse approximate inverse (SPAI) preconditioning algorithm based on MPI and CUDA, called HybridSPAI. HybridSPAI optimizes a recent static SPAI preconditioning algorithm and extends it from one GPU to multiple GPUs in order to process large-scale matrices. We make the following contributions: (1) we present a general parallel framework for optimizing the static SPAI preconditioner based on MPI and CUDA; and (2) for each component of the preconditioner, we establish a decision tree to choose the optimal kernel for computing it. Experimental results show that HybridSPAI is effective and outperforms the popular preconditioning algorithms in two public libraries, as well as a recent parallel SPAI preconditioning algorithm.
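For intuition: a static SPAI preconditioner, the family of algorithms this paper parallelizes, minimises ||AM - I||_F one column at a time over a fixed sparsity pattern, turning each column into a small independent least-squares problem (which is exactly what makes it attractive for MPI/CUDA parallelism). A dense NumPy sketch of the serial idea, not the paper's implementation:

```python
import numpy as np

def static_spai(A, pattern=None):
    """Static sparse approximate inverse: minimise ||A M - I||_F column by
    column over a fixed sparsity pattern (here: the pattern of A itself).
    Dense arrays are used for clarity; a real implementation such as
    HybridSPAI stores M sparsely and solves the columns in parallel."""
    n = A.shape[0]
    if pattern is None:
        pattern = A != 0                                # allow M where A is nonzero
    M = np.zeros((n, n))
    for j in range(n):
        J = np.nonzero(pattern[:, j])[0]                # allowed rows of M[:, j]
        I = np.nonzero((A[:, J] != 0).any(axis=1))[0]   # rows of A touched by J
        e = (I == j).astype(float)                      # unit vector e_j, restricted to I
        m, *_ = np.linalg.lstsq(A[np.ix_(I, J)], e, rcond=None)
        M[J, j] = m                                     # scatter solution into column j
    return M
```

For a diagonal A the per-column solves recover the exact inverse; for a general sparse A, M is only approximate, but it is cheap to apply and its construction is embarrassingly parallel across columns.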
Citations: 2
MLHarness: A scalable benchmarking system for MLCommons
BenchCouncil Transactions on Benchmarks, Standards and Evaluations Pub Date: 2021-10-01 DOI: 10.1016/j.tbench.2021.100002
Yen-Hsiang Chang, Jianhao Pu, Wen-mei Hwu, Jinjun Xiong
With society's growing adoption of machine learning (ML) and deep learning (DL) for various intelligent solutions, it becomes increasingly imperative to standardize a common set of measures for ML/DL models, with large-scale open datasets, under common development practices and resources, so that models' quality and performance can be benchmarked and compared on common ground. MLCommons has emerged recently as a driving force from both industry and academia to orchestrate such an effort. Despite its wide adoption as a standardized benchmark, MLCommons Inference includes only a limited number of ML/DL models (seven models in total). This significantly limits the generality of its benchmarking results, because the research community offers many more novel ML/DL models that solve a wide range of problems with different input and output modalities. To address this limitation, we propose MLHarness, a scalable benchmarking harness system for MLCommons Inference with three distinctive features: (1) it codifies the standard benchmark process as defined by MLCommons Inference, including the models, datasets, DL frameworks, and software and hardware systems; (2) it provides an easy and declarative approach for model developers to contribute their models and datasets to MLCommons Inference; and (3) it supports a wide range of models with varying input/output modalities, so that these models can be benchmarked scalably across different datasets, frameworks, and hardware systems. The harness system is developed on top of the MLModelScope system and will be open-sourced to the community. Our experimental results demonstrate the superior flexibility and scalability of this harness system for MLCommons Inference benchmarking.
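The declarative contribution flow described in the abstract can be pictured as a model manifest plus a registry: the contributor declares what the model is, and the harness supplies the shared benchmarking loop. All names below are hypothetical illustrations, not the actual MLHarness API:

```python
from dataclasses import dataclass

# Hypothetical sketch of a declarative model manifest in the spirit of
# MLHarness; every identifier here is invented for illustration.

@dataclass(frozen=True)
class ModelManifest:
    name: str          # e.g. "resnet50-v1.5"
    framework: str     # e.g. "onnxruntime"
    weights_url: str   # where the harness would fetch trained weights
    dataset: str       # e.g. "imagenet2012-val"
    modality: str      # input/output modality, e.g. "image->label"

# Benchmarks keyed by (model name, framework), so one model can be
# compared across frameworks on the same dataset.
REGISTRY: dict = {}

def register(manifest: ModelManifest) -> ModelManifest:
    """Record a contributed model so the harness can drive it."""
    REGISTRY[(manifest.name, manifest.framework)] = manifest
    return manifest

register(ModelManifest("resnet50-v1.5", "onnxruntime",
                       "https://example.org/resnet50.onnx",
                       "imagenet2012-val", "image->label"))
```

The design point this sketch tries to capture is separation of concerns: model developers only describe their model and data, while the harness owns dataset handling, framework dispatch, and the measurement loop.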
Citations: 3
Stars shine: The report of 2021 BenchCouncil awards
BenchCouncil Transactions on Benchmarks, Standards and Evaluations Pub Date: 2021-10-01 DOI: 10.1016/j.tbench.2021.100013
Taotao Zhan, Simin Chen
This report introduces the awards presented by the International Open Benchmark Council (BenchCouncil) in 2021 and highlights the award selection rules, the committee, the awardees, and their contributions.

Citations: 1
Fallout: Distributed systems testing as a service
BenchCouncil Transactions on Benchmarks, Standards and Evaluations Pub Date: 2021-10-01 DOI: 10.1016/j.tbench.2021.100010
Matt Fleming, Guy Bolton King, Sean McCarthy, Jake Luciani, Pushkala Pattabhiraman
All modern distributed systems list performance and scalability as their core strengths. Given that optimal performance requires carefully selecting configuration options, and that typical cluster sizes can range anywhere from 2 to 300 nodes, it is rare for any two clusters to be exactly the same. Validating the behavior and performance of distributed systems in this large configuration space is challenging without automation that stretches across the software stack. In this paper, we present Fallout, an open-source distributed systems testing service that automatically provisions and configures distributed systems and clients, supports running a variety of workloads and benchmarks, and generates performance reports based on collected metrics for visual analysis. We have been running the Fallout service internally at DataStax for over five years and have recently open-sourced it to support our work with Apache Cassandra, Pulsar, and other open-source projects. We describe the architecture of Fallout, the evolution of its design, and the lessons we learned operating the service in a dynamic environment where teams work on different products and favor different benchmarking tools.

Citations: 1
Comparative evaluation of deep learning workloads for leadership-class systems
BenchCouncil Transactions on Benchmarks, Standards and Evaluations Pub Date: 2021-10-01 DOI: 10.1016/j.tbench.2021.100005
Junqi Yin, Aristeidis Tsaris, Sajal Dash, Ross Miller, Feiyi Wang, Mallikarjun (Arjun) Shankar
Deep learning (DL) workloads and their performance at scale are becoming important factors to consider as we design, develop, and deploy next-generation high-performance computing systems. Since DL applications rely heavily on DL frameworks and the underlying compute (CPU/GPU) stacks, it is essential to gain a holistic understanding of the compute kernels, models, and frameworks of popular DL stacks, and to assess their impact on science-driven, mission-critical applications. At the Oak Ridge Leadership Computing Facility (OLCF), we employ a set of micro and macro DL benchmarks established through the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) to evaluate the AI readiness of our next-generation supercomputers. In this paper, we present our early observations and performance benchmark comparisons between the NVIDIA V100-based Summit system with its CUDA stack and an AMD MI100-based testbed system with its ROCm stack. We take a layered perspective on DL benchmarking and point to opportunities for future optimization in the technologies that we consider.

Citations: 6
Call for establishing benchmark science and engineering
BenchCouncil Transactions on Benchmarks, Standards and Evaluations Pub Date: 2021-10-01 DOI: 10.1016/j.tbench.2021.100012
Jianfeng Zhan
Currently, there is no consistent benchmarking across disciplines; no previous work has even tried to relate the different categories of benchmarks that exist across them. This article investigates the origin and evolution of the term "benchmark". Five categories of benchmarks that exist widely across disciplines are summarized: measurement standards, standardized data sets with defined properties, representative workloads, representative data sets, and best practices. I believe there are two pressing challenges in growing this discipline: establishing consistent benchmarking across disciplines and developing meta-benchmarks to measure the benchmarks themselves. I propose establishing benchmark science and engineering; one of its primary goals is to set up a standard benchmark hierarchy across disciplines. It is the right time to launch a multi-disciplinary benchmark, standard, and evaluation journal, TBench, to communicate the state of the art and state of the practice of benchmark science and engineering.

Citations: 2