2018 Digital Image Computing: Techniques and Applications (DICTA): Latest Publications

Mapping of Rice Varieties with Sentinel-2 Data via Deep CNN Learning in Spectral and Time Domains
Pub Date: 2018-12-01 | DOI: 10.1109/DICTA.2018.8615872
Yiqing Guo, X. Jia, D. Paull
Abstract: Generating rice variety distribution maps from remote sensing image time series provides meaningful information for the intelligent management of rice farms and precise budgeting of irrigation water. However, as different rice varieties share highly similar spectral/temporal patterns, distinguishing one variety from another is highly challenging. In this study, a deep convolutional neural network (deep CNN) is constructed in both the spectral and time domains. The purpose is to learn the fine features of each rice variety in terms of its spectral reflectance characteristics and growing phenology, a new attempt towards agricultural intelligence. An experiment was conducted at a major rice-planting area in southwest New South Wales, Australia, during the 2016–17 rice growing season. Based on a ground reference map of rice variety distribution, more than one million labelled samples were collected. Five rice varieties currently grown in the study area were investigated: Reiziq, Sherpa, Topaz, YRM 70, and Langi. A time series of multitemporal remote sensing images recorded by the Multispectral Instrument (MSI) on board the Sentinel-2A satellite was used as input, covering the entire rice growing season from November 2016 to May 2017. Experimental results showed that a good overall accuracy of 92.87% was achieved with the proposed approach, outperforming a standard support vector machine classifier that produced an accuracy of 57.49%. The Sherpa variety showed the highest producer's accuracy (98.46%), while the highest user's accuracy was observed for the Reiziq variety (97.93%). The results obtained with the proposed deep CNN provide the prospect of applying remote sensing image time series to rice variety mapping in an operational context in the future.
Citations: 5
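As a rough illustration of the idea of convolving jointly over the spectral and time axes, here is a minimal numpy sketch. This is not the authors' network; the shapes, kernel values and single ReLU feature map are hypothetical, standing in for the many learned filters of the actual deep CNN.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2D cross-correlation of x (time x band) with kernel k."""
    T, B = x.shape
    kt, kb = k.shape
    out = np.empty((T - kt + 1, B - kb + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kt, j:j + kb] * k)
    return out

# One pixel observed over 14 hypothetical acquisition dates and 10 bands.
rng = np.random.default_rng(0)
pixel_series = rng.random((14, 10))

# A 3x3 kernel mixes neighbouring dates and bands in one operation, so a
# feature can respond to phenology (time axis) and reflectance shape
# (spectral axis) at once.
kernel = rng.standard_normal((3, 3))
features = np.maximum(conv2d_valid(pixel_series, kernel), 0.0)  # ReLU
print(features.shape)  # (12, 8)
```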
Animal Call Recognition with Acoustic Indices: Little Spotted Kiwi as a Case Study
Pub Date: 2018-12-01 | DOI: 10.1109/DICTA.2018.8615857
Hongxiao Gan, M. Towsey, Yuefeng Li, Jinglan Zhang, P. Roe
Abstract: Long-duration recordings of the natural environment are very useful for monitoring animal diversity. After accumulating weeks or even months of recordings, ecologists need an efficient tool to recognize species in those recordings. Automated species recognizers have been developed to interpret field-collected recordings and quickly identify species. However, the repetitive work of designing and selecting features for different species is becoming a serious problem for ecologists. This situation creates a demand for generic recognizers that perform well on multiple animal calls. Meanwhile, acoustic indices have been proposed to summarize the structure and distribution of acoustic energy in natural environment recordings. They are designed to assess the acoustic activity of animal habitats and are not biased towards any particular species, which makes them natural generic features for recognizers. In this study, we explore the potential of acoustic indices as generic features and, as a case study, build a kiwi call recognizer from them using a Multilayer Perceptron (MLP) classifier. Experimental results on 13 hours of kiwi call recordings show that our recognizer performs well in terms of precision, recall and F1 measure. This study shows that acoustic indices have the potential to be generic features that can discriminate multiple animal calls.
Citations: 2
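The classifier described here is a standard MLP over per-segment acoustic index features. A minimal forward-pass sketch in numpy, with entirely hypothetical weights and a hypothetical count of six indices per segment (the paper does not specify these values):

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: tanh hidden layer, sigmoid output."""
    h = np.tanh(x @ W1 + b1)          # hidden representation
    z = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-z))   # probability of "kiwi call present"

rng = np.random.default_rng(1)
indices = rng.random(6)                         # 6 acoustic indices for one audio segment
W1, b1 = rng.standard_normal((6, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)
p = mlp_forward(indices, W1, b1, W2, b2)        # scalar call probability
```

In practice the weights would be trained on labelled segments; the point is only that the input is a short vector of habitat-level indices rather than species-specific spectrogram features.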
Using Edge Position Difference and Pixel Correlation for Aligning Stereo-Camera Generated 3D Scans
Pub Date: 2018-12-01 | DOI: 10.1109/DICTA.2018.8615836
Deepak Rajamohan, M. Pickering, M. Garratt
Abstract: The projection of a textured 3D scan with a fixed scale will spatially align with the 2D image of the scanned scene only at a unique pose of the scan. If misaligned, the true 3D alignment can be estimated using information from a 2D-2D registration process that minimizes an appropriate error criterion by penalizing mismatch between the overlapping images. Scan data from complicated real-world scenes pose a challenging registration problem due to the tendency of the optimization procedure to become trapped in local minima. In addition, the 3D scan from a stereo camera is of very high resolution and shows mild geometric distortion, adding to the difficulty. This work presents a new registration process using a similarity measure named Edge Position Difference (EPD) combined with a pixel-based correlation similarity measure. Together, the technique shows consistent and robust 3D-2D registration performance on stereo data, showcasing its potential for practical large-scale mapping applications.
Citations: 1
Bay Lobsters Moulting Stage Analysis Based on High-Order Texture Descriptor
Pub Date: 2018-12-01 | DOI: 10.1109/DICTA.2018.8615832
M. Asif, Yongsheng Gao, Jun Zhou
Abstract: In this paper, we introduce the world's first method to automatically classify the moulting stage of Bay lobsters, formally known as Thenus orientalis, in a controlled environment. Our classification approach only requires top-view images of the exoskeletons of bay lobsters. We analyze the texture of the exoskeleton to categorize it into normal, moulting, and freshly moulted classes. To meet the efficiency and robustness requirements of a production platform, we leverage traditional approaches such as the Local Binary Pattern and Local Derivative Pattern with an enhanced encoding scheme for underwater imagery. We also build a dataset of 315 bay lobster images captured in a controlled underwater environment. Experimental results on this dataset demonstrate that the proposed method can effectively classify bay lobsters with high accuracy.
Citations: 1
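For readers unfamiliar with the Local Binary Pattern descriptor this entry builds on, a minimal sketch of the basic 3x3 variant (the paper uses an enhanced encoding and the higher-order Local Derivative Pattern on top of this idea):

```python
import numpy as np

def lbp_8(image):
    """Basic 3x3 LBP: threshold each pixel's 8 neighbours against the
    centre and pack the resulting bits into a code in 0..255."""
    # neighbour offsets in clockwise order starting at the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = image.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = image[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offs):
        nb = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= centre).astype(np.uint8) << bit
    return codes

# A perfectly flat patch gives the all-ones code everywhere.
flat = lbp_8(np.ones((5, 5)))
```

A histogram of these codes over an image region is the texture feature; classification then operates on the histograms.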
Kernel Support Vector Machines and Convolutional Neural Networks
Pub Date: 2018-12-01 | DOI: 10.1109/DICTA.2018.8615840
Shihao Jiang, R. Hartley, Basura Fernando
Abstract: Convolutional Neural Networks (CNNs) have achieved great success in various computer vision tasks due to their strong ability in feature extraction. The trend in the development of CNN architectures is to increase their depth so as to increase their feature extraction ability. Kernel Support Vector Machines (SVMs), on the other hand, are known to give optimal separating surfaces through their ability to automatically select support vectors and perform classification in higher-dimensional spaces. We investigate the idea of combining the two so that the best of both worlds can be achieved and a more compact model can perform as well as deeper CNNs. In the past, attempts have been made to use CNNs to extract features from images and then classify them with a kernel SVM, but this process was performed in two separate steps. In this paper, we propose a single model in which a CNN and a kernel SVM are integrated and can be trained end-to-end. In particular, we propose a fully differentiable Radial Basis Function (RBF) layer, which can be seamlessly adapted to a CNN environment and forms a better classifier than the usual linear classifier. Due to end-to-end training, our approach allows the initial layers of the CNN to extract features more adapted to the kernel SVM classifier. Our experiments demonstrate that the hybrid CNN-kSVM model gives superior results to a plain CNN model, and also performs better than the method where feature extraction and classification are performed in separate stages by a CNN and a kernel SVM respectively.
Citations: 4
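The key property claimed for the RBF layer is differentiability, which any smooth Gaussian-kernel activation has. A minimal numpy sketch of such a layer (the exact parameterization in the paper may differ; the feature size, centre count and gamma here are hypothetical):

```python
import numpy as np

def rbf_layer(x, centres, gamma):
    """RBF activations exp(-gamma * ||x - c_k||^2) for every centre c_k.
    Smooth in x, centres and gamma, so gradients can flow end-to-end
    from a loss back into the CNN feature extractor."""
    d2 = np.sum((x[:, None, :] - centres[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
feats = rng.standard_normal((4, 16))     # CNN features for a batch of 4
centres = rng.standard_normal((3, 16))   # one learnable centre per class
scores = rbf_layer(feats, centres, gamma=0.1)  # (4, 3) class responses
```

A feature vector lying exactly on a centre yields the maximal response 1.0, so the layer behaves like a soft nearest-centre classifier in kernel space.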
Virtual View Quality Enhancement using Side View Temporal Modelling Information for Free Viewpoint Video
Pub Date: 2018-12-01 | DOI: 10.1109/DICTA.2018.8615827
D. M. Rahaman, M. Paul, N. J. Shoumy
Abstract: Virtual viewpoint video needs to be synthesised from adjacent reference viewpoints to provide an immersive perceptual 3D viewing experience of a scene. View synthesis techniques suffer from poor rendering quality due to holes created by occlusion in the warping process. Currently, the spatial and temporal correlation of texture images and depth maps is exploited to improve the quality of the final synthesised view. Due to the low spatial correlation at the edges between foreground and background pixels, spatial-correlation techniques such as inpainting and inverse mapping (IM) cannot fill holes effectively. Conversely, temporal correlation among already-synthesised frames, learned through Gaussian mixture modelling (GMM), fills missing pixels in occluded areas efficiently. However, there are no frames available for GMM learning when the user switches view instantly. To address these issues, in the proposed view synthesis technique we apply GMM to the adjacent reference viewpoint texture images and depth maps to generate a most common frame in a scene (McFIS). The texture McFIS is then warped into the target viewpoint using the depth McFIS, and both warped McFISes are merged. We then use the number of GMM models to refine the pixel intensities of the synthesised view, with a weighting factor between the pixel intensities of the merged McFIS and the warped images. This technique provides better pixel correspondence and improves PSNR by 0.58–0.70 dB compared to the IM technique.
Citations: 1
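To give a feel for the temporal modelling behind a McFIS, here is a deliberately simplified single-Gaussian-per-pixel sketch: a running mean over reference frames converges to the stable background intensity, which is what fills occlusion holes. The paper uses a full Gaussian mixture per pixel; the scene values and update rate below are hypothetical.

```python
import numpy as np

def update_model(mean, var, frame, alpha=0.1):
    """Exponential running mean/variance for each pixel (one Gaussian
    per pixel; a stand-in for the per-pixel mixture in the paper)."""
    mean_new = (1 - alpha) * mean + alpha * frame
    var_new = (1 - alpha) * var + alpha * (frame - mean_new) ** 2
    return mean_new, var_new

rng = np.random.default_rng(4)
# 20 frames of a static scene with intensity ~100 plus sensor noise
frames = 100 + rng.standard_normal((20, 4, 4))
mean, var = frames[0], np.ones((4, 4))
for f in frames[1:]:
    mean, var = update_model(mean, var, f)
# 'mean' now approximates the most common (background) intensity per pixel
```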
Strategies for Merging Hyperspectral Data of Different Spectral and Spatial Resolution
Pub Date: 2018-12-01 | DOI: 10.1109/DICTA.2018.8615875
R. Illmann, M. Rosenberger, G. Notni
Abstract: Increasing applications of hyperspectral measurement place increasing demands on the handling of big measurement data. Push-broom imaging is a promising measurement technique for many applications, and the combined registration of hyperspectral and spatial data reveals a great deal of information about the measured object. A well-known further processing technique, for example, is to extract feature vectors from such a dataset. To increase the quality and quantity of the recoverable information, it is advantageous to have a spectrally wide-range dataset. However, different spectral ranges generally require different imaging systems, and a major problem in using hyperspectral data from different hyperspectral imaging systems is combining them into one wide-range dataset, called a spectral cube. The aim of this work is to show, with a thorough analytical view, which methods are in principle conceivable and usable under different circumstances for merging such datasets. In addition, some work done on the theory and design of a calibration model prototype is included.
Citations: 3
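One conceivable merging step of the kind surveyed here is to resample the spectra of two sensors onto a common wavelength grid before stacking them into one cube. A hedged numpy sketch, with made-up wavelength ranges and Gaussian-shaped spectra standing in for real sensor data:

```python
import numpy as np

# Hypothetical sensor A (VNIR) and sensor B (SWIR) with overlapping coverage
wl_vnir = np.linspace(400, 1000, 31)   # nm
wl_swir = np.linspace(950, 2500, 32)   # nm, overlaps sensor A in 950-1000 nm
spec_a = np.exp(-((wl_vnir - 700) / 200.0) ** 2)
spec_b = np.exp(-((wl_swir - 1600) / 400.0) ** 2)

# Resample both onto a shared 10 nm grid; take sensor A below 1000 nm and
# sensor B above it (a simple hand-over rule; real strategies may blend
# the overlap region instead)
grid = np.arange(400, 2501, 10.0)
merged = np.where(
    grid <= 1000,
    np.interp(grid, wl_vnir, spec_a),
    np.interp(grid, wl_swir, spec_b),
)
```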
A Single Hierarchical Network for Face, Action Unit and Emotion Detection
Pub Date: 2018-12-01 | DOI: 10.1109/DICTA.2018.8615852
Shreyank Jyoti, Garima Sharma, Abhinav Dhall
Abstract: Deep neural networks show strong performance on specific tasks, and a single system designed for several correlated tasks together can be feasible for 'in the wild' applications. This paper proposes a method for face localization, Action Unit (AU) detection and emotion detection. The three tasks are performed by a single hierarchical network that exploits the way neural networks learn; such a network can represent more relevant features than individual per-task networks. Because of their complex structures and great depth, deploying neural networks in real-life applications is challenging, so the paper focuses on finding an efficient trade-off between performance and complexity for the given tasks. This is done by exploring the advantages of optimizing the network for the given tasks using separable convolutions, binarization and quantization. Four different databases (AffectNet, EmotioNet, RAF-DB and WiderFace) are used to evaluate the proposed approach, with a separate task-specific database for each task.
Citations: 3
Detecting Splicing and Copy-Move Attacks in Color Images
Pub Date: 2018-12-01 | DOI: 10.1109/DICTA.2018.8615874
Mohammad Manzurul Islam, G. Karmakar, J. Kamruzzaman, Manzur Murshed, G. Kahandawa, N. Parvin
Abstract: Image sensors generate limitless digital images every day. Image forgeries like splicing and copy-move are very common types of attack that are easy to execute using sophisticated photo editing tools. As a result, digital forensics has attracted much attention for identifying such tampering of digital images. In this paper, a passive (blind) image tampering identification method based on the Discrete Cosine Transform (DCT) and Local Binary Pattern (LBP) is proposed. First, the chroma components of an image are divided into fixed-sized non-overlapping blocks and 2D block DCT is applied to identify the changes due to forgery in the local frequency distribution of the image. Then a texture descriptor, LBP, is applied to the magnitude component of the 2D DCT array to enhance the artifacts introduced by the tampering operation. The resulting LBP image is again divided into non-overlapping blocks. Finally, the summations of corresponding inter-cell values of all the LBP blocks are computed and arranged as a feature vector. These features are fed into a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel to distinguish forged images from authentic ones. The proposed method has been tested extensively on three publicly available, well-known image splicing and copy-move detection benchmark datasets of color images. Results demonstrate the superiority of the proposed method over recently proposed state-of-the-art approaches in terms of well-accepted performance metrics such as accuracy, area under the ROC curve and others.
Citations: 7
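The first stage of the pipeline described above is a block-wise 2D DCT on a chroma channel, keeping the magnitudes. A minimal self-contained sketch of that stage (block size 8 and random input are illustrative; the later LBP and SVM stages are omitted):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix, so D @ block @ D.T is the 2D DCT."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = 1.0 / np.sqrt(n)
    return m

def block_dct_magnitudes(chroma, bs=8):
    """Split a chroma channel into bs x bs non-overlapping blocks and
    return the magnitude of each block's 2D DCT."""
    D = dct_matrix(bs)
    h, w = chroma.shape
    h, w = h - h % bs, w - w % bs   # drop ragged edges
    mags = []
    for y in range(0, h, bs):
        for x in range(0, w, bs):
            block = chroma[y:y + bs, x:x + bs]
            mags.append(np.abs(D @ block @ D.T))
    return np.array(mags)

rng = np.random.default_rng(3)
mags = block_dct_magnitudes(rng.random((32, 32)))  # 16 blocks of 8x8 magnitudes
```

In the full method, LBP codes computed over these magnitude arrays are aggregated into the feature vector handed to the RBF-kernel SVM.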
Long-Term Recurrent Predictive Model for Intent Prediction of Pedestrians via Inverse Reinforcement Learning
Pub Date: 2018-12-01 | DOI: 10.1109/DICTA.2018.8615854
Khaled Saleh, M. Hossny, S. Nahavandi
Abstract: Recently, the problem of intent and trajectory prediction for pedestrians in urban traffic environments has received attention from the intelligent transportation research community. One of the main challenges that makes this problem even harder is the uncertainty in the actions of pedestrians in urban traffic environments, as well as the difficulty of inferring their end goals. In this work, we propose a data-driven framework based on Inverse Reinforcement Learning (IRL) and a bidirectional recurrent neural network architecture (B-LSTM) for long-term prediction of pedestrians' trajectories. We evaluated our framework on real-life datasets for agent behavior modeling in traffic environments, where it achieved overall average displacement errors of only 2.93 and 4.12 pixels over 2.0 s and 3.0 s ahead prediction horizons respectively. Additionally, we compared our framework against baseline models based on sequence prediction alone, outperforming them by a margin of more than 5 pixels in average displacement error.
Citations: 20
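The average displacement error (ADE) metric quoted in this abstract is simply the mean Euclidean distance between predicted and ground-truth trajectory points. A minimal sketch with a toy trajectory (the coordinates are made up):

```python
import numpy as np

def average_displacement_error(pred, gt):
    """Mean Euclidean distance (here in pixels) between predicted and
    ground-truth trajectory points of equal length."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])  # offset by 1 px
ade = average_displacement_error(pred, gt)  # 1.0
```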