Computer Vision and Image Understanding: Latest Articles

Bypass network for semantics driven image paragraph captioning
IF 4.3 · CAS Tier 3 · Computer Science
Computer Vision and Image Understanding Pub Date: 2024-09-06 DOI: 10.1016/j.cviu.2024.104154
Image paragraph captioning aims to describe a given image with a sequence of coherent sentences. Most existing methods model coherence through topic transition, dynamically inferring a topic vector from the preceding sentences. However, these methods still suffer from immediate or delayed repetition in the generated paragraphs because (i) the entanglement of syntax and semantics distracts the topic vector from attending to pertinent visual regions, and (ii) there are few constraints or rewards for learning long-range transitions. In this paper, we propose a bypass network that separately models the semantics and the linguistic syntax of preceding sentences. Specifically, the proposed model consists of two main modules, i.e., a topic transition module and a sentence generation module. The former takes previous semantic vectors as queries and applies an attention mechanism over regional features to acquire the next topic vector, which reduces immediate repetition by eliminating linguistic interference. The latter decodes the topic vector together with the preceding syntax state to produce the following sentence. To further reduce delayed repetition in generated paragraphs, we devise a replacement-based reward for REINFORCE training. Comprehensive experiments on the widely used benchmark demonstrate the superiority of the proposed model over the state of the art in coherence while maintaining high accuracy.
Citations: 0
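The topic-transition mechanism described in the abstract above is essentially cross-attention from a sentence-level semantic query onto regional visual features. The sketch below illustrates that general idea only; the dimensions, module name, and use of standard multi-head attention are assumptions, not the authors' implementation.

```python
# Minimal sketch of a topic-transition step: the previous sentence's semantic
# vector queries regional image features to produce the next topic vector.
import torch
import torch.nn as nn

class TopicTransition(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)

    def forward(self, prev_semantic, region_feats):
        # prev_semantic: (B, 1, dim) semantic vector of the preceding sentence
        # region_feats:  (B, R, dim) regional visual features
        topic, weights = self.attn(query=prev_semantic, key=region_feats, value=region_feats)
        return topic, weights  # topic: (B, 1, dim) next topic vector

B, R, D = 2, 36, 512
module = TopicTransition(dim=D)
topic, attn_w = module(torch.randn(B, 1, D), torch.randn(B, R, D))
print(topic.shape, attn_w.shape)  # torch.Size([2, 1, 512]) torch.Size([2, 1, 36])
```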
Class Probability Space Regularization for semi-supervised semantic segmentation
IF 4.3 · CAS Tier 3 · Computer Science
Computer Vision and Image Understanding Pub Date: 2024-09-05 DOI: 10.1016/j.cviu.2024.104146
Semantic segmentation provides fine-grained scene parsing across scenarios, making it a key research direction for modelling human visual attention mechanisms. Recent advances in semi-supervised semantic segmentation have attracted considerable attention because of their potential for leveraging unlabeled data. However, existing methods focus only on exploiting unlabeled pixels with high-certainty predictions. Their insufficient mining of the low-certainty regions of unlabeled data results in a significant loss of supervisory information. This paper therefore proposes the Class Probability Space Regularization (CPSR) approach to further exploit the potential of each unlabeled pixel. Specifically, we first design a class knowledge reshaping module that regularizes the probability space of low-certainty pixels, transforming them into high-certainty ones for supervised training. Furthermore, we propose a tail probability suppression module that suppresses the probabilities of tailed classes, which helps the network learn more discriminative information from the class probability space. Extensive experiments on the PASCAL VOC2012 and Cityscapes datasets show that our method achieves state-of-the-art performance without introducing much computational overhead. Code is available at https://github.com/MKSAQW/CPSR.
Citations: 0
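The two operations named in the CPSR abstract, reshaping low-certainty pixel distributions and suppressing tail-class probabilities, can be pictured with the rough sketch below. Temperature sharpening, the top-k cutoff, and all thresholds are illustrative assumptions and not the released CPSR code.

```python
# Illustrative reshaping of low-certainty pixel distributions plus tail suppression.
import torch
import torch.nn.functional as F

def reshape_and_suppress(logits, conf_thresh=0.8, temperature=0.5, keep_top_k=3):
    # logits: (B, C, H, W) raw predictions for unlabeled pixels
    probs = F.softmax(logits, dim=1)
    conf, _ = probs.max(dim=1, keepdim=True)          # per-pixel certainty
    low_cert = conf < conf_thresh                     # mask of low-certainty pixels

    # Reshape low-certainty distributions toward higher certainty (temperature sharpening)
    sharpened = F.softmax(logits / temperature, dim=1)
    probs = torch.where(low_cert, sharpened, probs)

    # Suppress tail classes: keep only the top-k probabilities per pixel, then renormalize
    topk_vals, _ = probs.topk(keep_top_k, dim=1)
    tail_floor = topk_vals[:, -1:, :, :]
    probs = torch.where(probs >= tail_floor, probs, torch.zeros_like(probs))
    return probs / probs.sum(dim=1, keepdim=True)

out = reshape_and_suppress(torch.randn(2, 21, 8, 8))
print(out.shape, out.sum(dim=1).mean())  # (2, 21, 8, 8), sums to ~1.0 per pixel
```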
Embedding AI ethics into the design and use of computer vision technology for consumer behaviour understanding
IF 4.3 · CAS Tier 3 · Computer Science
Computer Vision and Image Understanding Pub Date: 2024-09-04 DOI: 10.1016/j.cviu.2024.104142
Artificial Intelligence (AI) techniques are becoming increasingly sophisticated, showing the potential to deeply understand and predict consumer behaviour in ways that could boost the retail sector; however, the retail-specific considerations underpinning their deployment have been poorly explored to date. This paper explores the application of AI technologies in the retail sector, focusing on their potential to enhance decision-making processes while preventing the major ethical risks inherent to them, such as the propagation of bias and the systems' lack of explainability. Drawing on recent literature on AI ethics, this study proposes a methodological path for the design and development of trustworthy, unbiased, and more explainable AI systems in the retail sector. The framework is grounded in European Union (EU) AI ethics principles and addresses the specific nuances of retail applications. To do this, we first examine the VRAI framework, a deep learning model used to analyse shopper interactions, people counting, and re-identification, to highlight the critical need for transparency and fairness in AI operations. Second, the paper proposes actionable strategies for integrating high-level ethical guidelines into practical settings, in particular to mitigate biases that lead to unfair outcomes in AI systems and to improve their explainability. In doing so, the paper aims to show the key added value of embedding AI ethics requirements into AI practices and computer vision technology, so as to truly promote technically and ethically robust AI in the retail domain.
Citations: 0
Acoustic features analysis for explainable machine learning-based audio spoofing detection
IF 4.3 · CAS Tier 3 · Computer Science
Computer Vision and Image Understanding Pub Date: 2024-09-04 DOI: 10.1016/j.cviu.2024.104145
The rapid evolution of synthetic voice generation and audio manipulation technologies poses significant challenges, raising societal and security concerns due to the risks of impersonation and the proliferation of audio deepfakes. This study introduces a lightweight machine learning (ML)-based framework designed to effectively distinguish between genuine and spoofed audio recordings. Departing from conventional deep learning (DL) approaches, which rely mainly on image-based spectrogram features or learned audio features, the proposed method uses a diverse set of hand-crafted audio features, such as spectral, temporal, chroma, and frequency-domain features, to improve the accuracy of deepfake audio detection. Through extensive evaluation on three well-known datasets, ASVSpoof2019, FakeAVCelebV2, and an In-The-Wild database, the proposed solution demonstrates robust performance and a high degree of generalization compared with state-of-the-art methods. In particular, our method achieved 89% accuracy on ASVSpoof2019, 94.5% on FakeAVCelebV2, and 94.67% on the In-The-Wild database. Additionally, experiments with explainability techniques clarify the decision-making processes within the ML models, enhancing transparency and identifying the features most important for audio deepfake detection.
Citations: 0
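A pipeline of the kind this abstract describes, hand-crafted spectral, temporal, chroma, and frequency-domain features feeding a lightweight classifier, can be sketched as below. The specific features, the random forest classifier, and the synthetic stand-in waveforms are assumptions for illustration, not the paper's exact feature set or model.

```python
# Hand-crafted audio features (librosa) + a lightweight classifier (scikit-learn).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(y, sr=16000):
    # Frequency-domain (MFCC), chroma, spectral, and temporal descriptors, frame-averaged
    feats = [
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1),
        librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1),
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(axis=1),
        librosa.feature.zero_crossing_rate(y).mean(axis=1),
    ]
    return np.concatenate(feats)

# Stand-in waveforms; a real pipeline would load labeled genuine/spoofed recordings
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
genuine = [np.sin(2 * np.pi * 220 * t).astype(np.float32) for _ in range(4)]
spoofed = [0.5 * np.sign(np.sin(2 * np.pi * 220 * t)).astype(np.float32) for _ in range(4)]

X = np.stack([extract_features(y, sr) for y in genuine + spoofed])
labels = np.array([0] * len(genuine) + [1] * len(spoofed))
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.score(X, labels))
```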
CRML-Net: Cross-Modal Reasoning and Multi-Task Learning Network for tooth image segmentation
IF 4.3 · CAS Tier 3 · Computer Science
Computer Vision and Image Understanding Pub Date: 2024-09-02 DOI: 10.1016/j.cviu.2024.104138
Data from a single modality may suffer from noise, low contrast, or other imaging limitations that affect a model's accuracy. Furthermore, because of the limited amount of data, most models trained on single-modality data tend to overfit the training set and perform poorly on out-of-domain data. In this paper we therefore propose the Cross-Modal Reasoning and Multi-Task Learning Network (CRML-Net), which combines cross-modal reasoning with multi-task learning, aiming to leverage the complementary information between different modalities and tasks to enhance the model's generalization ability and accuracy. Specifically, CRML-Net consists of two stages. In the first stage, the network extracts a new morphological-information modality from the original image and performs cross-modal fusion with the original-modality image, using the morphological information to improve the model's robustness on out-of-domain datasets. In the second stage, based on the output of the first stage, we introduce a multi-task learning mechanism that improves performance on unseen data by sharing surface-detail information from auxiliary tasks. We validated our method on a publicly available tooth cone-beam computed tomography dataset. Our evaluation demonstrates that the method outperforms state-of-the-art approaches.
Citations: 0
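The second-stage idea, a shared backbone whose auxiliary surface-detail task supplies extra supervision to the main segmentation task, is sketched below in a deliberately minimal form. The layer sizes, the two-channel input (original plus morphological modality), and the edge-map auxiliary target are assumptions, not CRML-Net's architecture.

```python
# Minimal multi-task setup: shared encoder, segmentation head, auxiliary detail head.
import torch
import torch.nn as nn

class MultiTaskSeg(nn.Module):
    def __init__(self, in_ch=2, num_classes=2):  # 2 channels: original + morphological modality
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)  # main task: tooth segmentation
        self.edge_head = nn.Conv2d(64, 1, 1)           # auxiliary task: surface detail / edges

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.edge_head(feats)

model = MultiTaskSeg()
x = torch.randn(1, 2, 64, 64)
seg_logits, edge_logits = model(x)
seg_loss = nn.functional.cross_entropy(seg_logits, torch.zeros(1, 64, 64, dtype=torch.long))
edge_loss = nn.functional.binary_cross_entropy_with_logits(edge_logits, torch.zeros(1, 1, 64, 64))
total_loss = seg_loss + 0.5 * edge_loss  # weighted multi-task objective
print(total_loss.item())
```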
Dual cross-enhancement network for highly accurate dichotomous image segmentation
IF 4.3 · CAS Tier 3 · Computer Science
Computer Vision and Image Understanding Pub Date: 2024-09-02 DOI: 10.1016/j.cviu.2024.104122
Existing image segmentation tasks mainly focus on segmenting objects with specific characteristics, such as salient, camouflaged, or meticulous objects. However, research on highly accurate Dichotomous Image Segmentation (DIS), which unifies these tasks, has only just begun and still faces problems such as insufficient information interaction between layers and incomplete integration of high-level semantic information with low-level detailed features. In this paper, a new dual cross-enhancement network (DCENet) for highly accurate DIS is proposed, consisting of two new modules: a cross-scaling guidance (CSG) module and a semantic cross-transplantation (SCT) module. Specifically, the CSG module adopts an adjacent-layer cross-scaling guidance scheme that efficiently exchanges the multi-scale features extracted from adjacent layers, while the SCT module uses dual-branch features that complement each other. Moreover, in the transplantation step, the high-level semantic information of the low-resolution branch guides the low-level detail features of the high-resolution branch, and the features of the different-resolution branches are effectively fused. Finally, experimental results on the challenging DIS5K benchmark show that the proposed network outperforms 9 state-of-the-art (SOTA) networks on 5 widely used evaluation metrics. Ablation experiments further demonstrate the effectiveness of the cross-scaling guidance and semantic cross-transplantation modules.
Citations: 0
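The cross-resolution guidance the abstract describes, high-level semantics from a low-resolution branch steering low-level details from a high-resolution branch before fusion, is illustrated by the generic sketch below. The gating-then-concatenation design and all shapes are assumptions rather than DCENet's actual modules.

```python
# Generic semantics-guided fusion of a high-resolution detail branch and a
# low-resolution semantic branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticGuidedFusion(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, detail_hr, semantic_lr):
        # detail_hr:   (B, C, H, W)     low-level features, high resolution
        # semantic_lr: (B, C, H/4, W/4) high-level features, low resolution
        semantic_up = F.interpolate(semantic_lr, size=detail_hr.shape[-2:],
                                    mode="bilinear", align_corners=False)
        gated = detail_hr * self.gate(semantic_up)  # semantics guide the details
        return self.fuse(torch.cat([gated, semantic_up], dim=1))

m = SemanticGuidedFusion()
out = m(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 128, 128])
```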
VADS: Visuo-Adaptive DualStrike attack on visual question answering
IF 4.3 · CAS Tier 3 · Computer Science
Computer Vision and Image Understanding Pub Date: 2024-08-31 DOI: 10.1016/j.cviu.2024.104137
Visual Question Answering (VQA) is a fundamental task at the intersection of computer vision and natural language processing. The adversarial vulnerability of VQA models is crucial to their reliability in real-world applications. However, current VQA attacks focus mainly on white-box and transfer-based settings, which require the attacker to have full or partial prior knowledge of the victim VQA model. In addition, query-based VQA attacks require a massive number of queries, which the victim model may detect. In this paper, we propose the Visuo-Adaptive DualStrike (VADS) attack, a novel adversarial attack that combines transfer-based and query-based strategies to exploit vulnerabilities in VQA systems. Unlike current VQA attacks that adopt only one of these approaches, VADS leverages a momentum-like ensemble method to search for potential attack targets and compress the perturbation. Our method then employs a query-based strategy to dynamically adjust the weight of the perturbation contributed by each surrogate model. We evaluate the effectiveness of VADS across 8 VQA models and two datasets. The results demonstrate that VADS outperforms existing adversarial techniques in both efficiency and success rate. Our code is available at: https://github.com/stevenzhang9577/VADS.
Citations: 0
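For readers unfamiliar with the building blocks the abstract combines, the sketch below shows a generic momentum-based ensemble attack in the spirit of MI-FGSM over weighted surrogate models. It is not the VADS algorithm: the surrogates, their weights (which VADS adapts via queries), the loss, and all hyperparameters are stand-in assumptions.

```python
# Generic momentum-based ensemble attack over weighted surrogate models.
import torch

def momentum_ensemble_attack(image, surrogates, weights, loss_fn, steps=10,
                             eps=8 / 255, alpha=2 / 255, mu=1.0):
    adv = image.clone().detach()
    g = torch.zeros_like(image)
    for _ in range(steps):
        adv.requires_grad_(True)
        # Weighted ensemble loss over surrogate models (weights could be tuned by queries)
        loss = sum(w * loss_fn(m(adv)) for m, w in zip(surrogates, weights))
        grad = torch.autograd.grad(loss, adv)[0]
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)  # momentum accumulation
        adv = adv.detach() + alpha * g.sign()
        adv = image + (adv - image).clamp(-eps, eps)             # keep perturbation bounded
        adv = adv.clamp(0, 1)
    return adv.detach()

# Toy usage with stand-in "models" (linear maps) and a dummy loss
surrogates = [torch.nn.Linear(4, 2) for _ in range(3)]
x = torch.rand(1, 4)
adv = momentum_ensemble_attack(x, surrogates, [0.5, 0.3, 0.2],
                               loss_fn=lambda out: out.sum())
print((adv - x).abs().max().item() <= 8 / 255 + 1e-6)
```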
Symmetrical Siamese Network for pose-guided person synthesis
IF 4.3 · CAS Tier 3 · Computer Science
Computer Vision and Image Understanding Pub Date: 2024-08-28 DOI: 10.1016/j.cviu.2024.104134
Pose-Guided Person Image Synthesis (PGPIS) aims to generate a realistic person image that preserves the appearance of the source person while adopting a target pose. The variety of appearances and drastic pose changes make this task highly challenging. Because paired data are insufficiently exploited, existing models have difficulty accurately preserving the source appearance details and high-frequency textures in the generated images. Meanwhile, although the currently popular AdaIN-based methods handle drastic pose changes well, they struggle to capture diverse clothing shapes because of the limitations of global feature statistics. To address these issues, we propose a novel Symmetrical Siamese Network (SSNet) for PGPIS, which consists of two synergistic, symmetrical generative branches that leverage the prior knowledge of paired data to comprehensively exploit appearance details. For feature integration, we propose a Style Matching Module (SMM) that transfers multi-level regional appearance styles and gradient information to the desired pose, enriching the high-frequency textures. Furthermore, to overcome the limitation of global feature statistics, a Spatial Attention Module (SAM) is introduced to complement the SMM in capturing clothing shapes. Extensive experiments show the effectiveness of SSNet, which achieves state-of-the-art results on public datasets. Moreover, SSNet can also edit the source appearance attributes, making it versatile in a wider range of application scenarios.
Citations: 0
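The "global feature statistics" the abstract contrasts itself with come from Adaptive Instance Normalization (AdaIN), shown below in its standard form: content features are renormalized with the channel-wise mean and standard deviation of style features. This is the well-known baseline mechanism, not SSNet's Style Matching Module.

```python
# Standard AdaIN: transfer channel-wise style statistics onto content features.
import torch

def adain(content, style, eps=1e-5):
    # content, style: (B, C, H, W)
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

out = adain(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Because the statistics are pooled over the whole spatial extent, AdaIN cannot localize clothing shapes, which is the gap the spatial attention in SSNet is meant to close.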
An egocentric video and eye-tracking dataset for visual search in convenience stores
IF 4.3 · CAS Tier 3 · Computer Science
Computer Vision and Image Understanding Pub Date: 2024-08-28 DOI: 10.1016/j.cviu.2024.104129
We introduce an egocentric video and eye-tracking dataset comprising 108 first-person videos of 36 shoppers searching for three different products (orange juice, KitKat chocolate bars, and canned tuna) in a convenience store, along with frame-centered eye-fixation locations for each video frame. The dataset also includes demographic information about each participant in the form of an 11-question survey. The paper describes two applications of the dataset: an analysis of eye fixations during in-store search, and the training of a clustered saliency model for predicting the saliency of viewers engaged in product search in the store. The fixation analysis shows that fixation-duration statistics are very similar to those found in image and video viewing, suggesting that similar visual processing is employed during search in 3D environments and during viewing of imagery on computer screens. A clustering technique applied to the questionnaire data detected two clusters. Based on these clusters, personalized saliency prediction models were trained on the store fixation data, and they improved saliency prediction on the store video data compared with state-of-the-art universal saliency prediction methods.
Citations: 0
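To make the clustering step concrete, the sketch below encodes 11-question survey responses as vectors, groups the 36 participants into two clusters (matching the number reported), and routes each cluster to its own saliency model. The synthetic survey data and the choice of k-means are assumptions; the paper does not specify this exact procedure in the abstract.

```python
# Cluster questionnaire responses, then assign cluster-specific saliency models.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
survey = rng.integers(1, 6, size=(36, 11))  # 36 participants x 11 Likert-style answers

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(survey)
labels = kmeans.labels_                      # cluster id per participant

# Each cluster would get its own saliency model trained on that cluster's fixation data
for c in range(2):
    members = np.where(labels == c)[0]
    print(f"cluster {c}: {len(members)} participants -> train a personalized saliency model")
```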
URINet: Unsupervised point cloud rotation invariant representation learning via semantic and structural reasoning
IF 4.3 · CAS Tier 3 · Computer Science
Computer Vision and Image Understanding Pub Date: 2024-08-28 DOI: 10.1016/j.cviu.2024.104136
In recent years, many rotation-invariant networks have been proposed to alleviate the interference caused by arbitrary rotations of point clouds, and they have demonstrated powerful representation-learning capabilities. However, most of these methods rely on costly manually annotated supervision for model training. Moreover, they fail to reason about structural relations and lose global information. To address these issues, we present an unsupervised method for learning comprehensive rotation-invariant representations without human annotation. Specifically, we propose a novel encoder-decoder architecture named URINet, which learns a point cloud representation by combining local semantic and global structural information and then reconstructs the input without rotation perturbation. In detail, the encoder is a two-branch network in which a graph-convolution-based structural branch models the relationships among local regions to learn global structural knowledge, while a semantic branch learns rotation-invariant local semantic features. The two branches derive complementary information and explore the point cloud comprehensively. Furthermore, to avoid the self-reconstruction ambiguity caused by uncertain poses, a bidirectional alignment is proposed to measure reconstruction quality without orientation knowledge. Extensive experiments on downstream tasks show that the proposed method significantly surpasses existing state-of-the-art methods on both synthetic and real-world datasets.
Citations: 0
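As background for the reconstruction objective mentioned above, the sketch below computes a standard symmetric (bidirectional) Chamfer distance, a common way to score a reconstructed point cloud against a reference in both directions. The paper's "bidirectional alignment" is presumably more elaborate; this generic measure is offered only as an assumed illustration.

```python
# Symmetric (bidirectional) Chamfer distance between point clouds.
import torch

def chamfer_distance(p, q):
    # p: (B, N, 3) reconstructed points; q: (B, M, 3) reference points
    d = torch.cdist(p, q)                       # (B, N, M) pairwise Euclidean distances
    forward = d.min(dim=2).values.mean(dim=1)   # each p-point to its nearest q-point
    backward = d.min(dim=1).values.mean(dim=1)  # each q-point to its nearest p-point
    return (forward + backward).mean()

loss = chamfer_distance(torch.randn(2, 1024, 3), torch.randn(2, 1024, 3))
print(loss.item())
```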