中国图象图形学报 (Journal of Image and Graphics): Latest Articles

Poisson Noise Reduction with Nonlocal-PCA Hybrid Model in Medical X-ray Images
中国图象图形学报 Pub Date: 2023-06-01 DOI: 10.18178/joig.11.2.178-184
Daniel Kipele, K. Greyson
Abstract: The presence of Poisson noise in medical X-ray images degrades image quality and obscures information required for accurate diagnosis. During X-ray image acquisition, weak light yields a limited number of available photons, which leads to Poisson noise, commonly known as X-ray noise. Currently available X-ray denoising methods have not yet achieved satisfactory results on medical X-ray images: they tend to perform well when the image matches the algorithm's assumptions, but in general they fail to remove the noise completely. X-ray image quality could be improved by increasing the X-ray dose beyond the maximum medically permissible value, but this would endanger patients' health, since higher X-ray energy can kill cells. In this study, a hybrid model that combines Poisson Principal Component Analysis (Poisson PCA) with the nonlocal (NL) means denoising algorithm is developed to reduce noise in images. This hybrid model for X-ray noise removal and contrast enhancement improves the quality of X-ray images and can thus be used for medical diagnosis. The performance of the proposed hybrid model was evaluated on standard data and compared with standard Poisson denoising algorithms.
Citations: 0
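As background to the entry above: Poisson (shot) noise is signal-dependent, which is why Gaussian-oriented denoisers such as NL-means are often preceded by a variance-stabilizing transform. A minimal numpy sketch of the Anscombe transform and its simple algebraic inverse (an illustration only; the paper's Poisson PCA/NL-means hybrid is more involved):

```python
import numpy as np

def anscombe(x):
    # Variance-stabilizing transform: Poisson counts -> approx. unit-variance Gaussian
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse (an unbiased inverse would add correction terms)
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
clean = np.full((64, 64), 20.0)           # constant "image" with mean photon count 20
noisy = rng.poisson(clean).astype(float)  # Poisson (shot) noise
stab = anscombe(noisy)
# After stabilization, the noise standard deviation is close to 1
# regardless of the underlying intensity.
print(round(float(stab.std()), 2))
```

After this transform, any Gaussian denoiser (NL-means included) can be applied, followed by the inverse transform.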
Fuzzy Image Enhancement Based on an Adjustable Intensifier Operator
中国图象图形学报 Pub Date: 2023-06-01 DOI: 10.18178/joig.11.2.146-152
Libao Yang, S. Zenian, R. Zakaria
Abstract: Fuzzy image enhancement is an important method in image processing. It comprises three steps: gray-level fuzzification, modification of the memberships using an intensifier (INT) operator, and defuzzification to obtain the new gray levels. This paper proposes an adjustable INT operator with a parameter k. First, the image's pixels are divided into two regions (low and high) by the OTSU method, and pixel memberships are computed by fuzzification in each region. The INT operator then reduces pixel memberships in the low region and enlarges them in the high region. The parameter k is determined from each pixel's neighborhood information and plays an adjusting role while the INT operator is applied. Finally, the result image is obtained by defuzzification. In the experimental results, fuzzy image enhancement with the adjustable intensifier operator achieves better performance.
Citations: 0
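For orientation, the classic contrast-intensification (INT) operator that the entry above generalizes can be written with an exponent k; k = 2 recovers the standard Pal-King form, and varying k plays the adjusting role described (a sketch, not the paper's neighborhood-driven choice of k):

```python
import numpy as np

def int_operator(mu, k=2.0):
    """Contrast intensification (INT) on memberships in [0, 1].

    Memberships below 0.5 are suppressed, those above 0.5 are amplified;
    k = 2 gives the classic Pal-King operator.
    """
    mu = np.asarray(mu, dtype=float)
    low = 2.0 ** (k - 1.0) * mu ** k                  # shrink low-region memberships
    high = 1.0 - 2.0 ** (k - 1.0) * (1.0 - mu) ** k  # enlarge high-region memberships
    return np.where(mu <= 0.5, low, high)

mu = np.array([0.2, 0.5, 0.8])
print(int_operator(mu))  # [0.08 0.5  0.92]: contrast around 0.5 is stretched
```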
Multiclass Classification of Paddy Leaf Diseases Using Random Forest Classifier
中国图象图形学报 Pub Date: 2023-06-01 DOI: 10.18178/joig.11.2.195-203
K. Saminathan, B. Sowmiya, Devi M Chithra
Abstract: As the population increases, improving the quality and quantity of food is essential. Paddy is a vital food crop serving numerous people on various continents, and its yield is affected by numerous factors. Early diagnosis of disease is needed to keep plants from progressing to later stages of disease. Manual diagnosis by the naked eye is the traditional method widely adopted by farmers to identify leaf diseases, but manual diagnosis raises problems such as the cost of hiring domain experts, time consumption, and inaccurate results; inconsistent results may lead to improper treatment of plants. To overcome this, researchers have proposed automatic disease diagnosis, which helps farmers diagnose disease swiftly and accurately without needing an expert. This manuscript develops a model to classify four types of paddy leaf disease: bacterial blight, blast, tungro, and brown spot. To begin with, each image is preprocessed by resizing and conversion to the Red, Green and Blue (RGB) and Hue, Saturation and Value (HSV) color spaces, and segmentation is performed. Global features, namely Hu moments, Haralick features, and color histograms, are extracted and concatenated. The data is split into training and testing parts in a 70:30 ratio. Images are trained using multiple classifiers: Logistic Regression, Random Forest, Decision Tree, K-Nearest Neighbor (KNN), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), and Gaussian Naive Bayes. This study reports the Random Forest classifier as the best classifier; the proposed model achieved 92.84% accuracy after validation and 97.62% after testing on paddy disorder samples. Ten-fold cross-validation is performed, and the performance of the classification algorithms is measured using a confusion matrix with precision, recall, F1-score, and support as parameters.
Citations: 0
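The confusion-matrix metrics named in the entry above (precision, recall, F1) follow directly from the matrix itself; a minimal numpy sketch with a hypothetical 2-class matrix for illustration:

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class precision, recall and F1 from a square confusion matrix.

    cm[i, j] = number of samples with true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                     # correct predictions per class
    precision = tp / cm.sum(axis=0)      # column sums = all predicted as class j
    recall = tp / cm.sum(axis=1)         # row sums = all truly class i
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical 2-class confusion matrix, not from the paper
cm = np.array([[8, 2],
               [1, 9]])
p, r, f1 = per_class_metrics(cm)
print(np.round(p, 3), np.round(r, 3), np.round(f1, 3))
```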
Stereo Vision Based Localization of Handheld Controller in Virtual Reality for 3D Painting Using Inertial System
中国图象图形学报 Pub Date: 2023-06-01 DOI: 10.18178/joig.11.2.127-131
A. Saif, Z. R. Mahayuddin
Abstract: Google Tilt Brush is an expensive option for virtual drawing, and what needs improvement is the functionality of its mechanisms rather than the implementation aspects addressed in this research. Several issues are addressed by this research in that context, i.e., noise removal from sensor data, drift caused by double integration, and cost. Currently available smartphones cannot perform drawing in virtual settings with Google Cardboard or Daydream without purchasing an Oculus Rift or HTC Vive (virtual reality headsets), which are too expensive for a large number of users. In addition, extrinsic hardware such as satellite localization devices and ultrasonic localization systems is not used for drawing in virtual reality. The proposed methodology implements an extended Kalman filter and a Butterworth filter to perform six-degree-of-freedom positioning from Microelectromechanical Systems (MEMS) sensor data. A stereo visual method based on Simultaneous Localization and Mapping (SLAM) estimates the positioning measurements, with a mobile phone (Android platform) as the hardware system, to estimate drift. This research uses Google's virtual reality application kit with the Unity3D engine. Experimental validation shows that the proposed method can perform painting with virtual reality hardware integrated with smartphone controller software, without an extrinsic controller device such as the Oculus Rift or HTC Vive, and with satisfactory accuracy.
Citations: 0
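To illustrate the filtering step in the entry above: the paper uses an extended Kalman filter over 6-DoF IMU data; the scalar linear case below shows the same predict/update structure on noisy measurements of a constant state (a sketch under assumed noise parameters q and r, not the paper's filter):

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant-state model.

    q = process noise variance, r = measurement noise variance.
    """
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q                 # predict: state assumed constant, uncertainty grows
        k = p / (p + r)           # Kalman gain balances prediction vs. measurement
        x = x + k * (z - x)       # update toward measurement z
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
noisy = 5.0 + rng.normal(0.0, 0.5, size=200)  # true value 5.0 plus sensor noise
est = kalman_1d(noisy)
print(round(float(est[-1]), 2))  # converges toward 5.0
```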
Rapid Analysis of Thorax Images for the Detection of Viral Infections
中国图象图形学报 Pub Date: 2023-06-01 DOI: 10.18178/joig.11.2.115-120
R. Radtke, Alexander Jesser
Abstract: At the end of December 2019, a person in the Chinese city of Wuhan was probably infected with the novel SARS-CoV-2 virus for the first time. To react as quickly as possible after infection, rapid diagnostic measures are of the utmost importance so that medical treatment can begin at an early stage. An image-processing algorithm is presented that uses chest X-rays to determine whether a lung infection has a viral or a bacterial cause. In contrast to more complicated evaluation methods, the focus was on a simple algorithm, using the Canny algorithm for edge detection of infected areas of the lung tissue instead of complex deep learning processes. The main advantage is that the method is portable to many different computer systems with little effort and does not need large computing power. This should contribute to a faster diagnosis of SARS-CoV-2 infection, especially in medically underdeveloped areas.
Citations: 0
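The Canny detector mentioned above starts from image gradients; the Sobel gradient-magnitude stage can be sketched in a few lines of numpy (an illustration of the first Canny stage only, not the full algorithm with non-maximum suppression and hysteresis):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via Sobel kernels, the first stage of a
    Canny-style edge detector (naive loop version for clarity)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()  # horizontal gradient
            gy[i, j] = (patch * ky).sum()  # vertical gradient
    return np.hypot(gx, gy)

# Vertical step edge: the magnitude peaks along the boundary columns
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_magnitude(img)
print(mag.max())  # 4.0
```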
Deep Learning-Based Emotion Recognition through Facial Expressions
中国图象图形学报 Pub Date: 2023-06-01 DOI: 10.18178/joig.11.2.140-145
Sarunya Kanjanawattana, Piyapong Kittichaiwatthana, Komsan Srivisut, Panchalee Praneetpholkrang
Abstract: Nowadays, humans communicate easily with one another by recognizing speech and text, and particularly facial expressions. In human communication it is critical to comprehend emotion or implicit expression. Indeed, facial expression recognition is vital for analyzing the emotions of conversation partners, which can contribute to a range of applications, including mental health consulting: it enables psychiatrists to select appropriate questions based on their patients' current emotional state. The purpose of this study was to develop a deep learning-based model for detecting and recognizing emotions on human faces. We divided the experiment into two parts, a Faster R-CNN and a mini-Xception architecture, and concentrated on four distinct emotional states: angry, sad, happy, and neutral. The two models were compared during the evaluation process, and the findings indicate that the mini-Xception architecture produced better results than the Faster R-CNN. This study will be expanded in the future to include the detection of complex emotions such as sadness.
Citations: 0
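The mini-Xception architecture named above is compact because it builds on depthwise-separable convolutions; a parameter-count comparison makes the saving concrete (a back-of-the-envelope sketch with assumed layer sizes, ignoring biases):

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k filter per (input, output) channel pair
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    # Depthwise separable: one k x k depthwise filter per input channel,
    # then a 1x1 pointwise convolution to mix channels
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3  # assumed layer sizes for illustration
print(conv_params(c_in, c_out, k), separable_params(c_in, c_out, k))
# 73728 vs. 8768: roughly 8x fewer parameters for the separable layer
```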
Simulation of Facial Palsy Using Cycle GAN with Skip-Layer Excitation Module and Self-Supervised Discriminator
中国图象图形学报 Pub Date: 2023-06-01 DOI: 10.18178/joig.11.2.132-139
Takato Sakai, M. Seo, N. Matsushiro, Yen-Wei Chen
Abstract: The Yanagihara method is used to evaluate facial nerve palsy based on visual examination by physicians. Examples of scored images are important for education and reference; however, due to patient privacy concerns, facial images of real patients cannot be used for educational purposes. In this paper, we propose a solution to this problem: generating facial images of a virtual patient with facial nerve palsy that can be shared and utilized by physicians. To reproduce the patient's facial expression on a public face image, we propose a method that generates a swapped face image using an improved Cycle Generative Adversarial Network (Cycle GAN) with a skip-layer excitation module and a self-supervised discriminator. Experimental results demonstrate that the proposed model can generate more coherent swapped faces that match the public face identity and the patient's facial expressions. The proposed method also improves the quality of the generated swapped face images while keeping them identical to the source (genuine) face image.
Citations: 0
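The Cycle GAN in the entry above is trained with a cycle-consistency loss that pushes the two generators to be inverses of each other; the loss itself is simple to state (a toy numpy sketch with lambda "generators" standing in for the real networks):

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss ||F(G(x)) - x||_1 used by Cycle GAN so that translating
    to the other domain and back reconstructs the input."""
    return float(np.abs(F(G(x)) - x).mean())

# Toy invertible "generators" for illustration only
G = lambda x: x + 1.0   # domain A -> B
F = lambda x: x - 1.0   # domain B -> A

x = np.array([0.0, 2.0, 4.0])
print(cycle_consistency_loss(x, G, F))  # 0.0 for a perfect cycle
```

In the real model this term is added to the adversarial losses of both discriminators, weighted by a hyperparameter.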
Cascaded Graph Convolution Approach for Nuclei Detection in Histopathology Images
中国图象图形学报 Pub Date: 2023-03-01 DOI: 10.18178/joig.11.1.15-20
Sachin Bahade, Michael Edwards, Xianghua Xie
Abstract: Nuclei detection in histopathology images of cancerous tissue stained with conventional hematoxylin and eosin is a challenging task due to the complexity and diversity of cell data. Deep learning techniques have produced encouraging results in nuclei detection, with the main emphasis on classification- and regression-based methods, and recent research has demonstrated that regression-based techniques outperform classification. In this paper, we propose a classification model based on graph convolutions to classify nuclei, and similar models to detect nuclei using a cascaded architecture. With nearly 29,000 annotated nuclei in a large dataset of cancer histology images, we evaluated Convolutional Neural Network (CNN) and Graph Convolutional Network (GCN) based approaches. Our findings demonstrate that graph convolutions perform better with a cascaded GCN architecture and are more stable than a centre-of-pixel approach. We compared our two-fold quantitative evaluation results with CNN-based models such as the Spatially Constrained Convolutional Neural Network (SC-CNN) and the Centre-of-Pixel Convolutional Neural Network (CP-CNN). We used two loss functions, binary cross-entropy and focal loss, and also investigated the behaviour of the CP-CNN and GCN models to observe the effectiveness of the CNN and GCN operators. The compared quantitative F1 score of the cascaded GCN shows an improvement of 6% over state-of-the-art methods.
Citations: 1
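A single graph-convolution layer of the kind cascaded in the entry above can be written directly from the standard normalized propagation rule (a sketch of the common Kipf-Welling formulation, which may differ in detail from the paper's operator):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: ReLU(D^-1/2 (A + I) D^-1/2 H W).

    A: adjacency matrix, H: node features, W: trainable weights.
    Self-loops and symmetric normalization keep feature scales stable
    as layers are stacked (or cascaded).
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric degree normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)       # path graph on 3 nodes
H = np.eye(3)                                 # one-hot node features
W = np.ones((3, 2))                           # toy weights for illustration
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2): one 2-dim feature vector per node
```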
Breast Cancer Detection Using Image Processing and Machine Learning
中国图象图形学报 Pub Date: 2023-03-01 DOI: 10.18178/joig.11.1.1-8
Z. Q. Habeeb, B. Vuksanovic, Imad Al-Zaydi
Abstract: Various breast cancer detection systems have been developed to help clinicians analyze screening mammograms. Breast cancer has been increasing gradually, so scientists work to develop new methods to reduce the risks of this life-threatening disease. Convolutional Neural Networks (CNNs) have shown much promise in medical imaging thanks to recent developments in deep learning, but CNN-based methods have been restricted by the small size of the few public breast cancer datasets. This research develops and introduces a new framework to detect breast cancer. The framework utilizes CNNs and image processing because CNNs have been an important success in image recognition, reaching human-level performance. An efficient and fast pre-trained CNN object detector called RetinaNet, an uncomplicated one-stage detector, is used together with two-stage transfer learning to improve performance. The RetinaNet model is initially trained on a general-purpose dataset, COCO. Transfer learning is then used to apply the model to a mammogram dataset, CBIS-DDSM. Finally, a second transfer-learning step tests the model on a small mammogram dataset, INbreast. The results of the proposed two-stage transfer learning (RetinaNet → CBIS-DDSM → INbreast) are better than other state-of-the-art methods on the public INbreast dataset. Furthermore, the True Positive Rate (TPR) is 0.99 ± 0.02 at 1.67 False Positives per Image (FPPI), which is better than one-stage transfer learning with a TPR of 0.94 ± 0.02 at 1.67 FPPI.
Citations: 3
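RetinaNet, used in the entry above, is trained with the focal loss, which down-weights easy examples so that the one-stage detector is not swamped by background; the binary form can be sketched in numpy (illustrative values, not the paper's data):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss of RetinaNet.

    p: predicted probability of the positive class, y: labels in {0, 1}.
    The (1 - p_t)^gamma factor shrinks the loss of well-classified examples.
    """
    p_t = np.where(y == 1, p, 1.0 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float((-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)).mean())

y = np.array([1, 0, 1])
easy = focal_loss(np.array([0.9, 0.1, 0.8]), y)    # confident, correct predictions
hard = focal_loss(np.array([0.6, 0.4, 0.55]), y)   # uncertain predictions
print(easy < hard)  # True: confident predictions incur far less loss
```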
Robust Dual Digital Watermark Applied to Antique Digitized Cinema Images: Resistant to Print-Scan Attack
中国图象图形学报 Pub Date: 2023-03-01 DOI: 10.18178/joig.11.1.61-71
L. Reyes-Ruiz, Eduardo Fragoso-Navarro, F. Garcia-Ugalde, O. Juarez-Sandoval, M. Cedillo-Hernández, M. Nakano-Miyatake
Abstract: Nowadays, advances in information and communication technologies, along with easy access to electronic devices such as smartphones, have enabled agile and efficient storage, editing, and distribution of digital multimedia files. However, lack of regulation has led to several problems with intellectual property authentication and copyright protection, and the problem becomes complex in a scenario of illegal printed exploitation, which involves printing and scanning processes. To solve these problems, several digital watermarking schemes combined with cryptographic algorithms have been proposed. In this paper, a robust watermarking strategy is defined for the administration and detection of unauthorized use of digitized cinematographic images from Mexican cultural heritage. The strategy combines two types of digital watermarking, a visible-camouflaged watermark based on the spatial domain and an invisible watermark based on the frequency domain, together with particle swarm optimization. The experimental results show the high performance of the proposed algorithm against printing-scanning (digital-analogue) attacks as well as common geometric and image-processing attacks such as JPEG compression. Additionally, the imperceptibility of the watermark is evaluated by PSNR and compared with previously proposed algorithms.
Citations: 1
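The PSNR used above to evaluate watermark imperceptibility is a standard fidelity metric; a minimal numpy sketch with a toy uniform distortion (illustrative values, not the paper's results):

```python
import numpy as np

def psnr(original, distorted, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape.

    Higher PSNR means the distorted (e.g. watermarked) image is closer
    to the original; infinite PSNR means the images are identical.
    """
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

img = np.full((32, 32), 128.0)
watermarked = img + 2.0               # uniform distortion, so MSE = 4
print(round(psnr(img, watermarked), 2))  # 10*log10(255^2 / 4) ≈ 42.11 dB
```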