Real-Time Image and Video Processing: Latest Publications

Multi-resolution model-based traffic sign detection and tracking
Real-Time Image and Video Processing Pub Date : 2012-06-01 DOI: 10.1117/12.924884
Javier Marinas, L. Salgado, M. Camplani
In this paper we propose an innovative approach to the problem of traffic sign detection, using a computer vision algorithm designed under real-time operation constraints, with intelligent strategies to simplify the algorithm as much as possible and to speed up processing. First, a set of candidates is generated by a color segmentation stage, followed by a region analysis strategy in which the spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, performed with a Kalman filter for each potential candidate. Given the time constraints, efficiency is achieved in two ways. On one side, a multi-resolution strategy is adopted for segmentation: global operations are applied only to low-resolution images, and the resolution is increased to the maximum only when a potential road sign is being tracked. On the other side, we take advantage of the expected spacing between traffic signs: tracking the objects of interest allows us to generate inhibition areas, regions in which no new traffic signs are expected to appear due to the existence of a sign in the neighborhood. The proposed solution has been tested on real sequences from both urban areas and highways, and proved to achieve higher computational efficiency, especially as a result of the multi-resolution approach.
Keywords: multi-resolution, inhibition areas, Kalman filter, real-time processing.
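The per-candidate tracking stage can be illustrated with a minimal Kalman filter. The abstract does not give the state model, so the sketch below assumes a simple scalar random-walk model applied independently to each image coordinate; the noise variances `q` and `r` are placeholders, not values from the paper.

```python
def kalman_track(measurements, q=1e-3, r=0.5):
    """Scalar random-walk Kalman filter: smooth one coordinate of a track.

    measurements: observed positions for one axis, one per frame.
    q: process-noise variance (how fast the sign can move).
    r: measurement-noise variance (detector jitter).
    Returns the filtered position estimates.
    """
    x, p = measurements[0], 1.0   # initial state and variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                 # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # correct with the new detection
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

In the paper one filter is run per candidate sign; a full implementation would likely track both coordinates (and possibly velocity) jointly rather than axis by axis.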
Citations: 0
Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks
Real-Time Image and Video Processing Pub Date : 2012-06-01 DOI: 10.1117/12.923716
Khursheed Khursheed, Muhammad Imran, Naeem Ahmad, M. O’nils
A Wireless Visual Sensor Network (WVSN) is an emerging platform that combines an image sensor, an on-board computation unit, a communication component, and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. WVSNs are normally deployed in areas where wired installations are not feasible, and because of the wireless nature of the application their energy budget is limited to batteries. Due to this limited energy, both the processing at the Visual Sensor Nodes (VSNs) and the communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a great deal of energy and requires high communication bandwidth. Data compression methods reduce the data efficiently and are therefore effective in reducing the communication cost in a WVSN. In this paper, we compare the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is on determining which compression algorithms can efficiently compress bi-level images with a computational complexity suitable for the platforms used in WVSNs. These results can be used as a road map for the selection of compression methods under different sets of constraints in a WVSN.
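The abstract does not name the six methods, but the trade-off being measured can be illustrated with the simplest bi-level coder, run-length encoding, a classic baseline for binary images (this is an illustration, not one of the paper's benchmarked implementations):

```python
def rle_encode(row):
    """Run-length encode one row of a bi-level image.

    row: list of 0/1 pixels. Returns (first_pixel, run_lengths),
    which is enough to reconstruct the row exactly.
    """
    runs, count = [], 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return row[0], runs

def rle_decode(first, runs):
    """Invert rle_encode."""
    out, value = [], first
    for n in runs:
        out.extend([value] * n)
        value ^= 1                # bi-level image: toggle 0 <-> 1
    return out
```

The energy argument in the paper is exactly this kind of balance: run lengths cost far fewer bits to transmit than raw pixels, at the price of some on-node computation.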
Citations: 17
2000 Fps Multi-object Tracking Based on Color Histogram
Real-Time Image and Video Processing Pub Date : 2012-06-01 DOI: 10.1117/12.921860
Qingyi Gu, T. Takaki, I. Ishii
In this study, we develop a real-time, color-histogram-based tracking system for multiple color-patterned objects in a 512×512 image at 2000 fps. Our system can simultaneously extract the positions, areas, orientation angles, and color histograms of multiple objects in an image using a hardware implementation of a multi-object color histogram extraction circuit module on a high-speed vision platform. It can both label multiple objects consisting of connected components and calculate their moment features and 16-bin hue-based color histograms using cell-based labeling. We demonstrate the performance of our system through several experimental results: (1) tracking of multiple color-patterned objects on a plate rotating at 16 rps, and (2) tracking of human hand movement with two color-patterned drinking bottles.
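The 16-bin hue histogram the circuit extracts per object can be sketched in software as follows; this sketch assumes RGB pixel values normalized to [0, 1] and stands in for what the paper implements as a hardware circuit module:

```python
import colorsys

def hue_histogram(pixels, bins=16):
    """16-bin hue histogram of an object's (r, g, b) pixels in [0, 1].

    Software stand-in for the hardware histogram extraction circuit.
    """
    hist = [0] * bins
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r, g, b)   # hue in [0, 1)
        hist[min(int(h * bins), bins - 1)] += 1
    return hist
```

Matching such histograms between frames is what lets the tracker keep the identity of each color-patterned object even at 2000 fps.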
Citations: 3
Real-time FPGA implementation of recursive wavelet packet transform
Real-Time Image and Video Processing Pub Date : 2012-06-01 DOI: 10.1117/12.924156
Vanishree Gopalakrishna, N. Kehtarnavaz, Chandrasekhar Patlolla, M. F. Carlsohn
To address the computational complexity of the wavelet packet transform of a moving window with a large amount of overlap between consecutive windows, a recursive computation approach was introduced previously. In this work, that approach is extended to 2D, i.e. images. In addition, the FPGA implementation of the recursive approach for updating wavelet coefficients is carried out using the LabVIEW FPGA module. This programming approach is graphical and requires no knowledge of relatively involved hardware description languages. A number of optimization steps, including both filter and wavelet-stage pipelining, are taken in order to achieve real-time throughput. It is shown that the recursive approach reduces the computational complexity significantly compared to the non-recursive, or classical, computation of the wavelet packet transform. For example, the number of multiplications is reduced by a factor of 3 for a 3-stage 1D transform of moving windows containing 128 samples, and by a factor of 12 for a 3-stage 2D transform of moving window blocks of size 16×16 with 50% overlap.
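As a self-contained reference point for the non-recursive cost being reduced, one analysis stage of a wavelet packet transform with the 2-tap Haar pair looks like this; the paper's actual filter bank is not specified in the abstract, and a packet transform repeats this stage on both output subbands at every level:

```python
def haar_stage(x):
    """One wavelet-packet analysis stage (unnormalized Haar pair).

    Splits x (even length) into a low-pass (average) and a
    high-pass (difference) subband, each of half the length.
    A packet transform applies this again to BOTH subbands.
    """
    low = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    high = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return low, high
```

Recomputing every stage from scratch for each shifted window is what the recursive scheme avoids: with heavy overlap, most coefficients of the previous window can be reused.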
Citations: 0
Capturing reading patterns through a real-time smart camera iris tracking system
Real-Time Image and Video Processing Pub Date : 2012-05-04 DOI: 10.1117/12.922875
M. Mehrübeoglu, Evan Ortlieb, L. McLauchlan, L. Pham
A real-time iris detection and tracking algorithm has been implemented on a smart camera using LabVIEW graphical programming tools. The program detects the eye and finds the center of the iris, which is recorded and stored in Cartesian coordinates. In subsequent video frames, the location of the center of the iris corresponding to the previously detected eye is computed and recorded for a desired period of time, creating a list of coordinates representing the moving iris-center location across image frames. We present an application of the developed smart camera iris tracking system to the assessment of reading patterns. The purpose of the study is to identify differences in the reading patterns of readers at various levels, to eventually determine successful reading strategies for improvement. The readers are positioned in front of a computer screen with a fixed camera directed at the reader's eyes, and are asked to read preselected content on the screen, one item a traditional newspaper text and one a Web page. The iris path is captured and stored in real time, and the reading patterns are examined by analyzing the path of the iris movement. In this paper, the iris tracking system and algorithms, the application of the system to real-time capture of reading patterns, and the representation of 2D/3D iris tracks are presented with results and recommendations.
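The recorded list of iris-center coordinates can be turned into a crude reading-pattern descriptor by thresholding the frame-to-frame displacement, separating steady fixations from rapid saccades. The pixel threshold below is an arbitrary placeholder, not a value from the paper:

```python
import math

def classify_track(centers, saccade_px=15.0):
    """Label each frame-to-frame step of an iris track.

    centers: list of (x, y) iris-center coordinates, one per frame.
    Returns 'fixation'/'saccade' labels, one per consecutive pair.
    """
    labels = []
    for (x0, y0), (x1, y1) in zip(centers, centers[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        labels.append('saccade' if step > saccade_px else 'fixation')
    return labels
```

Counting regressions (leftward saccades) and fixation durations from such labels is one plausible way to compare readers at different levels, as the study sets out to do.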
Citations: 14
Fast repurposing of high-resolution stereo video content for mobile use
Real-Time Image and Video Processing Pub Date : 2012-05-04 DOI: 10.1117/12.924508
Ali Karaoglu, Bong-Ho Lee, A. Boev, W. Cheong, A. Gotchev
3D video content is captured and created mainly in high resolution, targeting big cinema or home TV screens. For 3D mobile devices equipped with small-size auto-stereoscopic displays, such content has to be properly repurposed, preferably in real time. The repurposing requires not only spatial resizing but also properly maintaining the output stereo disparity, as it should deliver realistic, pleasant, and harmless 3D perception. In this paper, we propose an approach to adapt the disparity range of the source video to the comfort disparity zone of the target display. To achieve this, we adapt the scale and the aspect ratio of the source video, aiming to maximize the disparity range of the retargeted content within the comfort zone while minimizing the letterboxing of the cropped content. The proposed algorithm consists of five stages. First, we analyse the display profile, which characterises what 3D content can be comfortably observed on the target display. Then, we perform fast disparity analysis of the input stereoscopic content; instead of returning a dense disparity map, it returns an estimate of the disparity statistics (min, max, mean, and variance) per frame. Additionally, we detect scene cuts, where sharp transitions in disparity occur. Based on the estimated input and desired output disparity ranges, we derive the optimal cropping parameters and the scale of the cropping window that yield the targeted disparity range and minimize the area of cropped and letterboxed content. Once the rescaling and cropping parameters are known, we perform the resampling procedure using spline-based, perceptually optimized resampling (anti-aliasing) kernels, which also have a very efficient computational structure. Perceptual optimization is achieved by adjusting the cut-off frequency of the anti-aliasing filter to the throughput of the target display.
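The core of the disparity adaptation can be expressed as a linear remapping of the measured source range onto the display's comfort zone. This sketch captures only the range mapping, not the paper's cropping and letterboxing optimization:

```python
def disparity_remap(src_min, src_max, zone_min, zone_max):
    """Scale and shift so [src_min, src_max] fills [zone_min, zone_max].

    Returns (scale, shift) such that d' = scale * d + shift maps the
    measured source disparity range onto the display's comfort zone.
    """
    scale = (zone_max - zone_min) / (src_max - src_min)
    shift = zone_min - scale * src_min
    return scale, shift
```

In the full algorithm the scale is realized through spatial rescaling of the views (disparity scales with the image), which is why the disparity range and the cropping window have to be optimized jointly.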
Citations: 2
Complexity analysis of vision functions for implementation of wireless smart cameras using system taxonomy
Real-Time Image and Video Processing Pub Date : 2012-05-01 DOI: 10.1117/12.923797
Muhammad Imran, Khursheed Khursheed, Naeem Ahmad, Abdul Waheed Malik, M. O’nils, N. Lawal
There are a number of challenges caused by the large amount of data and the limited resources when implementing vision systems on wireless smart cameras using embedded platforms. The common challenges include limited memory, processing capability, and bandwidth, as well as power consumption in the case of battery-operated systems. Research in this field usually focuses on developing a specific solution for a particular problem. To implement a vision system on an embedded platform, designers must first investigate the resource requirements of the design; failure to do so may result in additional design time and cost to meet the specifications. There is therefore a need for a tool that can predict the resource requirements for the development and comparison of vision solutions on wireless smart cameras. To accelerate the development of such a tool, we have used a system taxonomy, which shows that the majority of vision systems for wireless smart cameras are structurally similar and focus on object detection, analysis, and recognition. In this paper, we investigate the arithmetic complexity and memory requirements of vision functions using the system taxonomy and propose an abstract complexity model. To demonstrate the use of this model, we analyse a number of implemented systems and show that the complexity model, together with the system taxonomy, can be used for the comparison and generalization of vision solutions. The study will assist researchers and designers in predicting, with little time and effort, the resource requirements of different classes of vision systems implemented on wireless smart cameras, which in turn makes the comparison and generalization of solutions simple.
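The abstract does not spell out the complexity model, so the sketch below shows one plausible shape for such a model: a per-pixel cost times the frame area for arithmetic, plus a frame-buffer term for memory. The function and its parameters are hypothetical, for illustration only:

```python
def vision_cost(width, height, ops_per_pixel, bytes_per_pixel, fps):
    """Hypothetical resource estimate for one vision function.

    Assumes the function touches every pixel once per frame.
    Returns (arithmetic operations per second, frame-buffer bytes).
    """
    pixels = width * height
    return pixels * ops_per_pixel * fps, pixels * bytes_per_pixel
```

A taxonomy-driven tool would sum such estimates over the chain of vision functions (segmentation, labeling, feature extraction, ...) that a given class of system requires, before any hardware is committed.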
Citations: 4
GPU-based real-time structured light 3D scanner at 500 fps
Real-Time Image and Video Processing Pub Date : 2012-05-01 DOI: 10.1117/12.922568
Hao Gao, T. Takaki, I. Ishii
In this study, we develop a real-time structured light 3D scanner that can output 3D video of 512×512 pixels at 500 fps, using a GPU-based high-speed vision system synchronized with a high-speed DLP projector. Our 3D scanner projects eight pairs of positive and negative image patterns with an 8-bit gray code onto the measurement objects at 1000 fps. Synchronized with the high-speed vision platform, these images are simultaneously captured at 1000 fps and processed in real time for 3D image generation at 500 fps by introducing parallel pixel processing on an NVIDIA Tesla 1060 GPU board. Several experiments are performed on high-speed 3D objects that undergo sudden 3D shape deformation.
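With an 8-bit Gray code, the eight pattern pairs label 256 distinct stripe positions, and adjacent positions differ in exactly one pattern, which makes decoding robust at stripe boundaries. The conversion in both directions is a few XORs:

```python
def gray_encode(n):
    """Binary value -> reflected Gray code."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Reflected Gray code -> binary value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

In the scanner, each camera pixel reads one bit per positive/negative pattern pair; stacking the eight bits and Gray-decoding them recovers the projector column, from which depth is triangulated.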
Citations: 19
Real-time shrinkage studies in photopolymer films using holographic interferometry
Real-Time Image and Video Processing Pub Date : 2012-05-01 DOI: 10.1117/12.922413
M. Moothanchery, I. Naydenova, V. Bavigadda, S. Martin, V. Toal
Polymerisation-induced shrinkage is one of the main reasons why photopolymer materials are not more widely used for holographic applications. The aim of this study is to evaluate the shrinkage of an acrylamide photopolymer layer during holographic recording using holographic interferometry. Shrinkage in photopolymer layers can be measured by real-time capture of holographic interferograms during recording. Interferograms were captured with a CMOS camera at regular intervals, and the optical path length change, and hence the shrinkage, was determined from the captured fringe patterns. The photopolymer layer shrinkage was observed to be on the order of 3.5%.
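The shrinkage figure follows from counting fringes in the captured interferograms. The sketch below assumes an optical path change of half the recording wavelength per fringe (a reflection-type sensitivity); the actual sensitivity factor depends on the optical geometry, which the abstract does not give, so treat both the formula and the numbers as illustrative:

```python
def shrinkage_percent(fringes, wavelength_nm, thickness_um):
    """Estimate layer shrinkage from a holographic fringe count.

    Assumes lambda/2 optical-path change per fringe (geometry-dependent).
    wavelength in nanometres, layer thickness in micrometres.
    """
    delta_um = fringes * (wavelength_nm / 2) / 1000.0   # nm -> um
    return 100.0 * delta_um / thickness_um
```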
Citations: 7
A contourlet transform based algorithm for real-time video encoding
Real-Time Image and Video Processing Pub Date : 2012-05-01 DOI: 10.1117/12.924327
Stamos Katsigiannis, Georgios Papaioannou, D. Maroulis
In recent years, real-time video communication over the Internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network's end-to-end bandwidth and to transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the contourlet transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by simply dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable-bit-rate encoding schemes. Furthermore, due to the transform utilized, the algorithm does not suffer from the blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic is the suppression of the noise induced by the low-quality sensors usually encountered in web cameras, thanks to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance, and performance is further enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make the method suitable for applications like video conferencing that demand real-time performance along with the highest visual quality possible for each user. Through the presented performance and quality evaluation, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to the other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye-friendly images than algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
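Two of the claimed properties, scalability by dropping fine-scale information and sensor-noise suppression, both come down to manipulating transform coefficients. A generic sketch of these two operations (not the paper's actual contourlet codec, whose subband layout and quantization are not given in the abstract):

```python
def compress_coeffs(subbands, keep_levels, noise_floor):
    """Crude scalable coding of a list of subbands (coarse -> fine).

    keep_levels: number of coarse subbands to transmit (level of detail);
                 finer subbands are simply dropped, with no re-encoding.
    noise_floor: zero out coefficients below this magnitude, which
                 suppresses low-amplitude sensor noise.
    """
    kept = subbands[:keep_levels]
    return [[c if abs(c) >= noise_floor else 0.0 for c in band]
            for band in kept]
```

Dropping trailing subbands is what lets one encoded stream serve receivers at several quality levels, which is the scalability property the abstract emphasizes.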
Citations: 11