Eurasip Journal on Image and Video Processing: Latest Articles

Advanced fine-tuning procedures to enhance DNN robustness in visual coding for machines
IF 2.4 · CAS Tier 4 · Computer Science
Eurasip Journal on Image and Video Processing · Pub Date: 2024-09-18 · DOI: 10.1186/s13640-024-00650-3
Alban Marie, Karol Desnos, Alexandre Mercat, Luce Morin, Jarno Vanne, Lu Zhang
{"title":"Advanced fine-tuning procedures to enhance DNN robustness in visual coding for machines","authors":"Alban Marie, Karol Desnos, Alexandre Mercat, Luce Morin, Jarno Vanne, Lu Zhang","doi":"10.1186/s13640-024-00650-3","DOIUrl":"https://doi.org/10.1186/s13640-024-00650-3","url":null,"abstract":"<p>Video Coding for Machines (VCM) is gaining momentum in applications like autonomous driving, industry manufacturing, and surveillance, where the robustness of machine learning algorithms against coding artifacts is one of the key success factors. This work complements the MPEG/JVET standardization efforts in improving the resilience of deep neural network (DNN)-based machine models against such coding artifacts by proposing the following three advanced fine-tuning procedures for their training: (1) the progressive increase of the distortion strength as the training proceeds; (2) the incorporation of a regularization term in the original loss function to minimize the distance between predictions on compressed and original content; and (3) a joint training procedure that combines the proposed two approaches. These proposals were evaluated against a conventional fine-tuning anchor on two different machine tasks and datasets: image classification on ImageNet and semantic segmentation on Cityscapes. Our joint training procedure is shown to reduce the training time in both cases and still obtain a 2.4% coding gain in image classification and 7.4% in semantic segmentation, whereas a slight increase in training time can bring up to 9.4% better coding efficiency for the segmentation. All these coding gains are obtained without any additional inference or encoding time. As these advanced fine-tuning procedures are standard-compliant, they offer the potential to have a significant impact on visual coding for machine applications.</p>","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A novel multiscale cGAN approach for enhanced salient object detection in single haze images
IF 2.4 · CAS Tier 4 · Computer Science
Eurasip Journal on Image and Video Processing · Pub Date: 2024-09-15 · DOI: 10.1186/s13640-024-00648-x
Gayathri Dhara, Ravi Kant Kumar
{"title":"A novel multiscale cGAN approach for enhanced salient object detection in single haze images","authors":"Gayathri Dhara, Ravi Kant Kumar","doi":"10.1186/s13640-024-00648-x","DOIUrl":"https://doi.org/10.1186/s13640-024-00648-x","url":null,"abstract":"<p>In computer vision, image dehazing is a low-level task that employs algorithms to analyze and remove haze from images, resulting in haze-free visuals. The aim of Salient Object Detection (SOD) is to locate the most visually prominent areas in images. However, most SOD techniques applied to visible images struggle in complex scenarios characterized by similarities between the foreground and background, cluttered backgrounds, adverse weather conditions, and low lighting. Identifying objects in hazy images is challenging due to the degradation of visibility caused by atmospheric conditions, leading to diminished visibility and reduced contrast. This paper introduces an innovative approach called Dehaze-SOD, a unique integrated model that addresses two vital tasks: dehazing and salient object detection. The key novelty of Dehaze-SOD lies in its dual functionality, seamlessly integrating dehazing and salient object identification into a unified framework. This is achieved using a conditional Generative Adversarial Network (cGAN) comprising two distinct subnetworks: one for image dehazing and another for salient object detection. The first module, designed with residual blocks, Dark Channel Prior (DCP), total variation, and the multiscale Retinex algorithm, processes the input hazy images. The second module employs an enhanced EfficientNet architecture with added attention mechanisms and pixel-wise refinement to further improve the dehazing process. The outputs from these subnetworks are combined to produce dehazed images, which are then fed into our proposed encoder–decoder framework for salient object detection. The cGAN is trained with two modules working together: the generator aims to produce haze-free images, whereas the discriminator distinguishes between the generated haze-free images and real haze-free images. Dehaze-SOD demonstrates superior performance compared to state-of-the-art dehazing methods in terms of color fidelity, visibility enhancement, and haze removal. The proposed method effectively produces high-quality, haze-free images from various hazy inputs and accurately detects salient objects within them. This makes Dehaze-SOD a promising tool for improving salient object detection in challenging hazy conditions. The effectiveness of our approach has been validated using benchmark evaluation metrics such as mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM).</p>","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimization of parameters for image denoising algorithm pertaining to generalized Caputo-Fabrizio fractional operator
IF 2.4 · CAS Tier 4 · Computer Science
Eurasip Journal on Image and Video Processing · Pub Date: 2024-09-13 · DOI: 10.1186/s13640-024-00632-5
S. Gaur, A. M. Khan, D. L. Suthar
{"title":"Optimization of parameters for image denoising algorithm pertaining to generalized Caputo-Fabrizio fractional operator","authors":"S. Gaur, A. M. Khan, D. L. Suthar","doi":"10.1186/s13640-024-00632-5","DOIUrl":"https://doi.org/10.1186/s13640-024-00632-5","url":null,"abstract":"<p>The aim of the present paper is to optimize the values of different parameters related to the image denoising algorithm involving Caputo Fabrizio fractional integral operator of non-singular type with the Mittag-Leffler function in generalized form. The algorithm aims to find the coefficients of a kernel to remove the noise from images. The optimization of kernel coefficients are done on the basis of different numerical parameters like Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structure Similarity Index measure (SSIM) and Image Enhancement Factor (IEF). The performance of the proposed algorithm is investigated through above-mentioned numeric parameters and visual perception with the other prevailed algorithms. Experimental results demonstrate that the proposed optimized kernel based on generalized fractional operator performs favorably compared to state of the art methods. The uniqueness of the paper is to highlight the optimized values of performance parameters for different values of fractional order. The novelty of the presented work lies in the development of a kernel utilizing coefficients from a fractional integral operator, specifically involving the Mittag-Leffler function in a more generalized form.</p>","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142196969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Utility-based performance evaluation of biometric sample quality measures
IF 2.4 · CAS Tier 4 · Computer Science
Eurasip Journal on Image and Video Processing · Pub Date: 2024-09-09 · DOI: 10.1186/s13640-024-00644-1
Olaf Henniger, Biying Fu, Alexander Kurz
{"title":"Utility-based performance evaluation of biometric sample quality measures","authors":"Olaf Henniger, Biying Fu, Alexander Kurz","doi":"10.1186/s13640-024-00644-1","DOIUrl":"https://doi.org/10.1186/s13640-024-00644-1","url":null,"abstract":"<p>The quality score of a biometric sample is intended to predict the sample’s degree of utility for biometric recognition. Different authors proposed different definitions for utility. A harmonized definition of utility would be useful to facilitate the comparison of biometric sample quality assessment algorithms. In this article, we compare different definitions of utility and apply them to both face image and fingerprint image data sets containing multiple samples per biometric instance and covering a wide range of potential quality issues. The results differ only slightly. We show that discarding samples with low utility scores results in rapidly declining false non-match rates. The obtained utility scores can be used as target labels for training biometric sample quality assessment algorithms and as baseline when summarizing utility-prediction performance in a single plot or even in a single figure of merit.</p>","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142196982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Beyond the visible: thermal data for facial soft biometric estimation
IF 2.4 · CAS Tier 4 · Computer Science
Eurasip Journal on Image and Video Processing · Pub Date: 2024-09-06 · DOI: 10.1186/s13640-024-00640-5
Nelida Mirabet-Herranz, Jean-Luc Dugelay
{"title":"Beyond the visible: thermal data for facial soft biometric estimation","authors":"Nelida Mirabet-Herranz, Jean-Luc Dugelay","doi":"10.1186/s13640-024-00640-5","DOIUrl":"https://doi.org/10.1186/s13640-024-00640-5","url":null,"abstract":"<p>In recent years, the estimation of biometric parameters from facial visuals, including images and videos, has emerged as a prominent area of research. However, the robustness of deep learning-based models is challenged, particularly in the presence of changing illumination conditions. To overcome these limitations and unlock new opportunities, thermal imagery has arisen as a viable alternative. Nevertheless, the limited availability of datasets containing thermal data and the small amount of annotations on them limits the exploration of this spectrum. Motivated by this gap, this paper introduces the Label-EURECOM Visible and Thermal (LVT) Face Dataset for face biometrics. This pioneering dataset includes paired visible and thermal images and videos from 52 subjects along with metadata of 22 soft biometrics and health parameters. Due to the reduced number of existing datasets in this domain, the LVT Face Dataset aims to facilitate further research and advancements in the utilization of thermal imagery for diverse eHealth applications and soft biometric estimation. Moreover, we present the first comparative study between visible and thermal spectra as input images for soft biometric estimation, namely gender age and weight, from face images on our collected dataset.</p>","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142196983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Contactless hand biometrics for forensics: review and performance benchmark
IF 2.4 · CAS Tier 4 · Computer Science
Eurasip Journal on Image and Video Processing · Pub Date: 2024-09-05 · DOI: 10.1186/s13640-024-00642-3
Lazaro Janier Gonzalez-Soler, Kacper Marek Zyla, Christian Rathgeb, Daniel Fischer
{"title":"Contactless hand biometrics for forensics: review and performance benchmark","authors":"Lazaro Janier Gonzalez-Soler, Kacper Marek Zyla, Christian Rathgeb, Daniel Fischer","doi":"10.1186/s13640-024-00642-3","DOIUrl":"https://doi.org/10.1186/s13640-024-00642-3","url":null,"abstract":"<p>Contactless hand biometrics has emerged as an alternative to traditional biometric characteristics, e.g., fingerprint or face, as it possesses distinctive properties that are of interest in forensic investigations. As a result, several hand-based recognition techniques have been proposed with the aim of identifying both wanted criminals and missing victims. The great success of deep neural networks and their application in a variety of computer vision and pattern recognition tasks has led to hand-based algorithms achieving high identification performance on controlled images with few variations in, e.g., background context and hand gestures. This article provides a comprehensive review of the scientific literature focused on contactless hand biometrics together with an in-depth analysis of the identification performance of freely available deep learning-based hand recognition systems under various scenarios. Based on the performance benchmark, the relevant technical considerations and trade-offs of state-of-the-art methods are discussed, as well as further topics related to this research field.</p>","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142196987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Face image de-identification based on feature embedding
IF 2.4 · CAS Tier 4 · Computer Science
Eurasip Journal on Image and Video Processing · Pub Date: 2024-09-02 · DOI: 10.1186/s13640-024-00646-z
Goki Hanawa, Koichi Ito, Takafumi Aoki
{"title":"Face image de-identification based on feature embedding","authors":"Goki Hanawa, Koichi Ito, Takafumi Aoki","doi":"10.1186/s13640-024-00646-z","DOIUrl":"https://doi.org/10.1186/s13640-024-00646-z","url":null,"abstract":"<p>A large number of images are available on the Internet with the growth of social networking services, and many of them are face photos or contain faces. It is necessary to protect the privacy of face images to prevent their malicious use by face image de-identification techniques that make face recognition difficult, which prevent the collection of specific face images using face recognition. In this paper, we propose a face image de-identification method that generates a de-identified image from an input face image by embedding facial features extracted from that of another person into the input face image. We develop the novel framework for embedding facial features into a face image and loss functions based on images and features to de-identify a face image preserving its appearance. Through a set of experiments using public face image datasets, we demonstrate that the proposed method exhibits higher de-identification performance against unknown face recognition models than conventional methods while preserving the appearance of the input face images.</p>","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142196984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comprehensive multiparametric analysis of human deepfake speech recognition
IF 2.4 · CAS Tier 4 · Computer Science
Eurasip Journal on Image and Video Processing · Pub Date: 2024-08-30 · DOI: 10.1186/s13640-024-00641-4
Kamil Malinka, Anton Firc, Milan Šalko, Daniel Prudký, Karolína Radačovská, Petr Hanáček
{"title":"Comprehensive multiparametric analysis of human deepfake speech recognition","authors":"Kamil Malinka, Anton Firc, Milan Šalko, Daniel Prudký, Karolína Radačovská, Petr Hanáček","doi":"10.1186/s13640-024-00641-4","DOIUrl":"https://doi.org/10.1186/s13640-024-00641-4","url":null,"abstract":"<p>In this paper, we undertake a novel two-pronged investigation into the human recognition of deepfake speech, addressing critical gaps in existing research. First, we pioneer an evaluation of the impact of prior information on deepfake recognition, setting our work apart by simulating real-world attack scenarios where individuals are not informed in advance of deepfake exposure. This approach simulates the unpredictability of real-world deepfake attacks, providing unprecedented insights into human vulnerability under realistic conditions. Second, we introduce a novel metric to evaluate the quality of deepfake audio. This metric facilitates a deeper exploration into how the quality of deepfake speech influences human detection accuracy. By examining both the effect of prior knowledge about deepfakes and the role of deepfake speech quality, our research reveals the importance of these factors, contributes to understanding human vulnerability to deepfakes, and suggests measures to enhance human detection skills.</p>","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142196985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A method for image–text matching based on semantic filtering and adaptive adjustment
IF 2.4 · CAS Tier 4 · Computer Science
Eurasip Journal on Image and Video Processing · Pub Date: 2024-08-29 · DOI: 10.1186/s13640-024-00639-y
Ran Jin, Tengda Hou, Tao Jin, Jie Yuan, Chenjie Du
{"title":"A method for image–text matching based on semantic filtering and adaptive adjustment","authors":"Ran Jin, Tengda Hou, Tao Jin, Jie Yuan, Chenjie Du","doi":"10.1186/s13640-024-00639-y","DOIUrl":"https://doi.org/10.1186/s13640-024-00639-y","url":null,"abstract":"<p>As image–text matching (a critical task in the field of computer vision) links cross-modal data, it has captured extensive attention. Most of the existing methods intended for matching images and texts explore the local similarity levels between images and sentences to align images with texts. Even though this fine-grained approach has remarkable gains, how to further mine the deep semantics between data pairs and focus on the essential semantics in data remains to be quested. In this work, a new semantic filtering and adaptive approach (FAAR) was proposed to ease the above problem. To be specific, the filtered attention (FA) module selectively focuses on typical alignments with the interference of meaningless comparisons eliminated. Next, the adaptive regulator (AR) further adjusts the attention weights of key segments for filtered regions and words. The superiority of our proposed method was validated by a number of qualitative experiments and analyses on the Flickr30K and MSCOCO data sets.</p>","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142196986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Research on facial expression recognition algorithm based on improved MobileNetV3
IF 2.4 · CAS Tier 4 · Computer Science
Eurasip Journal on Image and Video Processing · Pub Date: 2024-08-22 · DOI: 10.1186/s13640-024-00638-z
Bin Jiang, Nanxing Li, Xiaomei Cui, Qiuwen Zhang, Huanlong Zhang, Zuhe Li, Weihua Liu
{"title":"Research on facial expression recognition algorithm based on improved MobileNetV3","authors":"Bin Jiang, Nanxing Li, Xiaomei Cui, Qiuwen Zhang, Huanlong Zhang, Zuhe Li, Weihua Liu","doi":"10.1186/s13640-024-00638-z","DOIUrl":"https://doi.org/10.1186/s13640-024-00638-z","url":null,"abstract":"<p>Aiming at the problem that face images are easily interfered by occlusion factors in uncontrollable environments, and the complex structure of traditional convolutional neural networks leads to low expression recognition rates, slow network convergence speed, and long network training time, an improved lightweight convolutional neural network is proposed for facial expression recognition algorithm. First, the dilation convolution is introduced into the shortcut connection of the inverted residual structure in the MobileNetV3 network to expand the receptive field of the convolution kernel and reduce the loss of expression features. Then, the channel attention mechanism SENet in the network is replaced by the two-dimensional (channel and spatial) attention mechanism SimAM introduced without parameters to reduce the network parameters. Finally, in the normalization operation, the Batch Normalization of the backbone network is replaced with Group Normalization, which is stable at various batch sizes, to reduce errors caused by processing small batches of data. Experimental results on RaFD, FER2013, and FER2013Plus face expression data sets show that the network reduces the training times while maintaining network accuracy, improves network convergence speed, and has good convergence effects.</p>","PeriodicalId":49322,"journal":{"name":"Eurasip Journal on Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142196988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0