Journal of Electronic Imaging: Latest Publications

LGD-FCOS: driver distraction detection using improved FCOS based on local and global knowledge distillation
IF 1.1 · CAS Tier 4 · Computer Science
Journal of Electronic Imaging · Pub Date: 2024-08-01 · DOI: 10.1117/1.jei.33.4.043046
Kunbiao Li, Xiaohui Yang, Jing Wang, Feng Zhang, Tao Xu
Ensuring safety on the road is crucial, and detecting driving distractions plays a vital role in achieving this goal. Accurate identification of distracted driving behaviors enables prompt intervention, thereby contributing to a reduction in accidents. We introduce an advanced fully convolutional one-stage (FCOS) object detection algorithm tailored for driving distraction detection that leverages the knowledge distillation framework. Our method enhances the conventional FCOS algorithm by integrating the selective kernel split-attention module, which bolsters the performance of the ResNet backbone and substantially improves detection accuracy. In addition, we incorporate a knowledge distillation framework equipped with a novel local and global knowledge distillation loss function, which enables the student network to reach accuracy comparable to that of the teacher network while maintaining a reduced parameter count. The outcomes are promising: an accuracy of 92.25% with a compact model of 31.85 million parameters. This advancement paves the way for more efficient and accurate distracted driving detection systems, ultimately contributing to enhanced road safety.
Citations: 0
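A minimal sketch of how a combined local-and-global distillation loss can be structured, assuming PyTorch; the foreground masking scheme, the Gram-matrix relation term, and the weights `alpha`/`beta` are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def local_global_distill_loss(f_s, f_t, fg_mask, alpha=1.0, beta=0.5):
    """f_s, f_t: student/teacher feature maps (N, C, H, W);
    fg_mask: foreground mask from ground-truth boxes (N, 1, H, W)."""
    # Local term: match teacher features only inside object regions.
    local = F.mse_loss(f_s * fg_mask, f_t * fg_mask)
    # Global term: match channel-wise Gram matrices, which capture
    # scene-level feature correlations independent of spatial position.
    n, c, h, w = f_s.shape
    s, t = f_s.flatten(2), f_t.flatten(2)              # (N, C, H*W)
    gram_s = torch.bmm(s, s.transpose(1, 2)) / (h * w)
    gram_t = torch.bmm(t, t.transpose(1, 2)) / (h * w)
    return alpha * local + beta * F.mse_loss(gram_s, gram_t)

# Usage with dummy tensors:
f_s, f_t = torch.randn(2, 256, 32, 32), torch.randn(2, 256, 32, 32)
mask = (torch.rand(2, 1, 32, 32) > 0.7).float()
loss = local_global_distill_loss(f_s, f_t, mask)
```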
Fast and robust object region segmentation with self-organized lattice Boltzmann based active contour method
IF 1.1 · CAS Tier 4 · Computer Science
Journal of Electronic Imaging · Pub Date: 2024-08-01 · DOI: 10.1117/1.jei.33.4.043050
Fatema A. Albalooshi, Vijayan K. Asari
We propose an approach that leverages the power of self-organizing maps (SOMs) in conjunction with a multiscale local image fitting (LIF) level-set function to enhance the capabilities of the region-based active contour model (ACM). In addition, we employ the lattice Boltzmann method (LBM) to ensure efficient convergence during segmentation. The SOM learns the underlying patterns and structures of both the background region and the object-of-interest region in an image, allowing for more accurate and robust segmentation. Our multiscale LIF level-set approach incorporates image-specific fitting criteria into the energy functional, considering the features extracted by the SOM. Finally, the LBM is used to solve the level-set equation and evolve the contour, allowing for faster contour evolution. To evaluate the effectiveness of our approach, we performed experiments on the challenging Pascal Visual Object Classes Challenge 2012 dataset, whose images contain objects with diverse characteristics such as illumination variations, shadows, occlusions, scale changes, and cluttered backgrounds. Our experimental results highlight the efficiency and robustness of the proposed method in achieving accurate segmentation. In terms of accuracy, our approach outperforms state-of-the-art learning-based ACMs, reaching a precision of up to 93%. It also reduces computation time by 76% compared with state-of-the-art methods. By integrating SOMs and the LBM, we enhance the efficiency of the segmentation process, achieving accurate segmentation within reasonable time frames and making the method practical for real-world applications. Furthermore, experiments on medical and thermal imagery also yielded precise results.
Citations: 0
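A minimal sketch of the SOM side of this pipeline, assuming NumPy: two tiny 1-D SOMs are trained on seed pixels from the object and background regions, and a pixel is assigned to whichever map holds the nearer prototype. The unit count, learning-rate schedule, and seeding are illustrative assumptions:

```python
import numpy as np

def train_som(pixels, n_units=8, epochs=5, lr0=0.5, sigma0=2.0, seed=0):
    """Tiny 1-D SOM over RGB samples; pixels: (N, 3) array in [0, 1]."""
    rng = np.random.default_rng(seed)
    w = pixels[rng.choice(len(pixels), n_units, replace=False)].astype(float).copy()
    units = np.arange(n_units)
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)               # decaying learning rate
        sigma = sigma0 * (1.0 - e / epochs) + 1e-3  # shrinking neighborhood
        for p in pixels[rng.permutation(len(pixels))]:
            bmu = np.argmin(((w - p) ** 2).sum(axis=1))        # best-matching unit
            h = np.exp(-((units - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (p - w)                     # neighborhood update
    return w

def is_object(pixel, som_obj, som_bg):
    """Label a pixel by its nearest prototype across the two maps."""
    d_obj = ((som_obj - pixel) ** 2).sum(axis=1).min()
    d_bg = ((som_bg - pixel) ** 2).sum(axis=1).min()
    return d_obj < d_bg

# Usage: som_obj = train_som(object_seed_pixels); som_bg = train_som(bg_seed_pixels)
```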
Spatio-temporal enhancement method based on dense connection structure for compressed video
IF 1.1 · CAS Tier 4 · Computer Science
Journal of Electronic Imaging · Pub Date: 2024-08-01 · DOI: 10.1117/1.jei.33.4.043054
Hongyao Li, Xiaohai He, Xiaodong Bi, Shuhua Xiong, Honggang Chen
Under limited bandwidth, video transmission often employs lossy compression to reduce data volume, inevitably introducing compression noise. Quality enhancement of compressed video can effectively recover the information lost during compression. Multi-frame quality enhancement, which exploits the temporal correlation of video, has shown performance advantages over single-frame methods. Methods based on deformable convolution obtain spatio-temporal fusion features for reconstruction through multi-frame alignment; however, because they make limited use of deep information and are sensitive to alignment accuracy, they yield suboptimal results, especially under scene changes and intense motion. To overcome these limitations, we propose a dense-network-based quality enhancement method that obtains more accurate spatio-temporal fusion features. Specifically, deep spatial features are first extracted from the frame to be enhanced using dense connections, then combined with the aligned features from deformable convolution through convolution and attention mechanisms so that the network adaptively attends to useful branches; finally, the enhanced frames are produced by a quality enhancement module with a densely connected structure. Experimental results show that at a quantization parameter of 37, the proposed method improves the average peak signal-to-noise ratio by 0.99 dB in the lowdelay_P configuration.
Citations: 0
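A minimal sketch of a densely connected feature-extraction block of the kind this method builds on, assuming PyTorch; channel counts, depth, and the 1x1 fusion layer are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer sees the concatenation of all earlier feature maps."""
    def __init__(self, in_ch=64, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            ch += growth
        self.fuse = nn.Conv2d(ch, in_ch, 1)   # 1x1 conv back to in_ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1)) + x  # residual connection

# Usage:
block = DenseBlock()
y = block(torch.randn(1, 64, 48, 48))   # output shape matches input: (1, 64, 48, 48)
```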
Fusion 3D object tracking method based on region and point cloud registration
IF 1.1 · CAS Tier 4 · Computer Science
Journal of Electronic Imaging · Pub Date: 2024-08-01 · DOI: 10.1117/1.jei.33.4.043048
Yixin Jin, Jiawei Zhang, Yinhua Liu, Wei Mo, Hua Chen
Tracking rigid objects in three-dimensional (3D) space and estimating their 6DoF pose are essential tasks in computer vision. Region-based 3D tracking methods have emerged in recent years as the preferred solution for tracking weakly textured objects in intricate scenes; however, their robustness under partial occlusion and similarly colored backgrounds is relatively poor. To address this issue, an improved region-based tracking method is proposed that achieves accurate 3D object tracking in the presence of partial occlusion and similarly colored backgrounds. First, a regional cost function based on correspondence lines is adopted, and a step function is proposed to alleviate the misclassification of sampling points. Next, to reduce the influence of similarly colored backgrounds and partial occlusion on tracking performance, a weight function that fuses color and distance information along the object contour is proposed. Finally, the inter-frame transformation matrix obtained by the region-based tracker is used to initialize the model point cloud, and an improved point cloud registration method achieves accurate registration between the model point cloud and the object point cloud, further refining the tracking result. Experiments were conducted on the region-based object tracking (RBOT) dataset and in real scenes. The results demonstrate that the proposed method outperforms the state-of-the-art region-based 3D object tracking method: on the RBOT dataset, the average tracking success rate improves by 0.5% across five image sequences, and in real scenes with similarly colored backgrounds and partial occlusion, the average tracking accuracy improves by 0.28 and 0.26 mm, respectively.
Citations: 0
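A minimal sketch of a per-pixel weight that fuses a color cue with distance to the projected contour, assuming NumPy; the exact functional form (a ratio test times an exponential falloff) and the decay rate `lam` are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def fused_weights(p_fg, p_bg, dist_to_contour, lam=0.1):
    """p_fg, p_bg: per-pixel foreground/background color probabilities
    (e.g., from color histograms); dist_to_contour: distance of each
    pixel to the projected object contour."""
    # Color cue: how decisively the pixel's color separates fg from bg.
    color_w = np.abs(p_fg - p_bg) / (p_fg + p_bg + 1e-8)
    # Distance cue: pixels near the contour carry the most pose information,
    # while far-away pixels (likely occluders or background) are down-weighted.
    dist_w = np.exp(-lam * np.abs(dist_to_contour))
    return color_w * dist_w

# Usage with dummy per-pixel maps:
p_fg, p_bg = np.random.rand(2, 240, 320)
d = np.random.rand(240, 320) * 30.0
w = fused_weights(p_fg, p_bg, d)
```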
Coded target recognition algorithm for vision measurement
IF 1.1 · CAS Tier 4 · Computer Science
Journal of Electronic Imaging · Pub Date: 2024-08-01 · DOI: 10.1117/1.jei.33.4.043058
Peng Zhang, Qing Liu, Shengpeng Li, Fei Liu, Wenjing Liu
Circularly coded targets are widely used as feature points in 3D measurement, target tracking, augmented reality, and other fields. Traditional coded-target recognition algorithms are easily affected by illumination changes and steep shooting angles, which significantly reduce recognition accuracy; a new recognition algorithm is therefore needed to lessen the effects of illumination and angle. The influence of illumination on coded-target recognition is analyzed in depth, and the advantages and disadvantages of traditional algorithms are discussed. A new adaptive-threshold image segmentation method is designed that, in contrast to traditional algorithms, incorporates the feature information of coded targets when determining the segmentation threshold. Experimental results show that this method significantly reduces the influence of illumination variations and cluttered backgrounds on image segmentation. The influence of viewing angle on recognition is studied similarly: the coded target is decoded by radial sampling of a dense point network, which effectively reduces the influence of angle and improves recognition accuracy and algorithm robustness. Further experiments verify that the proposed detection and recognition algorithm extracts and identifies targets with high positioning accuracy and a high decoding success rate, achieving accurate positioning even in complex environments and meeting the needs of industrial measurement.
Citations: 0
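A minimal sketch of decoding a circular coded target by dense radial sampling, assuming NumPy and an already-binarized image centered on a detected target; the bit count, samples per sector, and minimal-rotation normalization are illustrative assumptions:

```python
import numpy as np

def radial_decode(binary_img, center, radius, n_bits=12, samples_per_bit=16):
    """Read the code ring of a circular target via dense radial sampling."""
    cx, cy = center
    angles = np.linspace(0.0, 2.0 * np.pi, n_bits * samples_per_bit, endpoint=False)
    xs = np.clip(np.round(cx + radius * np.cos(angles)).astype(int),
                 0, binary_img.shape[1] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(angles)).astype(int),
                 0, binary_img.shape[0] - 1)
    ring = binary_img[ys, xs] > 0
    # Majority vote inside each angular sector -> one bit per sector;
    # dense sampling makes isolated misclassified pixels harmless.
    bits = ring.reshape(n_bits, samples_per_bit).mean(axis=1) > 0.5
    # Rotation invariance: normalize to the smallest cyclic rotation.
    vals = [int("".join(str(int(b)) for b in np.roll(bits, k)), 2)
            for k in range(n_bits)]
    return min(vals)

# Usage with a synthetic blank image (decodes to 0):
img = np.zeros((200, 200), np.uint8)
code = radial_decode(img, center=(100, 100), radius=40)
```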
Settlement detection from satellite imagery using fully convolutional network
IF 1.1 · CAS Tier 4 · Computer Science
Journal of Electronic Imaging · Pub Date: 2024-08-01 · DOI: 10.1117/1.jei.33.4.043056
Tayaba Anjum, Ahsan Ali, Muhammad Tahir Naseem
Geospatial information is essential for development planning, such as land and resource management. Existing research mainly focuses on multi-spectral or panchromatic images from specific sensors; incorporating multi-sensor panchromatic images at different scales makes the segmentation problem challenging. In this work, we propose a pixel-based, globally trained deep learning model that improves segmentation results over existing patch-based networks. The model uses an encoder-decoder mechanism for semantic segmentation: convolution and pooling layers in the encoding phase, and transposed convolution and convolution layers in the decoding phase. Experiments on benchmark images show a correct detection rate of about 98.95% and a false detection rate of 0.07%. We demonstrate the effectiveness of the proposed methodology through comparisons with previous work.
Citations: 0
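A minimal sketch of such an encoder-decoder FCN for single-channel (panchromatic) input, assuming PyTorch; layer widths and depth are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MiniFCN(nn.Module):
    """Conv/pool encoder, transposed-conv decoder, per-pixel class logits."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(True),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(True),
            nn.Conv2d(16, n_classes, 1))   # 1x1 conv -> class logits

    def forward(self, x):                  # x: (N, 1, H, W), panchromatic
        return self.decoder(self.encoder(x))

# Usage: settlement vs. non-settlement logits at full resolution.
logits = MiniFCN()(torch.randn(1, 1, 128, 128))  # -> (1, 2, 128, 128)
```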
DeepLab-Rail: semantic segmentation network for railway scenes based on encoder-decoder structure
IF 1.1 · CAS Tier 4 · Computer Science
Journal of Electronic Imaging · Pub Date: 2024-08-01 · DOI: 10.1117/1.jei.33.4.043038
Qingsong Zeng, Linxuan Zhang, Yuan Wang, Xiaolong Luo, Yannan Chen
Understanding perimeter objects and environment changes in railway scenes is crucial for safe train operation, and semantic segmentation is the basis of intelligent perception and scene understanding. Railway scene categories are complex, and effective features are challenging to extract. This work proposes DeepLab-Rail, a semantic segmentation network based on the classic yet effective encoder-decoder structure. It contains a lightweight feature-extraction backbone embedded with a channel attention (CA) mechanism to keep computational complexity low. To enrich the receptive fields of the convolutional modules, we design a parallel-and-cascade convolution module called compound atrous spatial pyramid pooling, with the combination of dilation rates selected through experiments to obtain multi-scale features. To make full use of both shallow and high-level features, an efficient CA mechanism is introduced, and a mixed loss function is designed to address the unbalanced label categories of the dataset. Experimental results on the RailSem19 railway dataset show that the mean intersection over union (mIoU) reaches 65.52% and the pixel accuracy (PA) reaches 88.48%. Segmentation of easily confused railway facilities, such as signal lights and catenary pillars, is significantly improved and, to the best of our knowledge, surpasses other advanced methods.
Citations: 0
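A minimal sketch of a compound ASPP module combining parallel dilated branches with one cascaded dilated path, assuming PyTorch; the dilation rates and 1x1 fusion are illustrative assumptions, since the paper selects its rate combination experimentally:

```python
import torch
import torch.nn as nn

class CompoundASPP(nn.Module):
    """Parallel dilated branches plus a cascaded dilated path, fused by 1x1 conv."""
    def __init__(self, ch=256, rates=(1, 6, 12)):
        super().__init__()
        self.parallel = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates)
        self.cascade = nn.Sequential(  # stacked dilations widen the field further
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4))
        self.fuse = nn.Conv2d(ch * (len(rates) + 1), ch, 1)

    def forward(self, x):
        feats = [branch(x) for branch in self.parallel] + [self.cascade(x)]
        return self.fuse(torch.cat(feats, dim=1))

# Usage:
y = CompoundASPP()(torch.randn(1, 256, 32, 32))  # spatial shape preserved
```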
Deep inner-knuckle-print recognition using lightweight Siamese network
IF 1.1 · CAS Tier 4 · Computer Science
Journal of Electronic Imaging · Pub Date: 2024-08-01 · DOI: 10.1117/1.jei.33.4.043034
Hongxia Wang, Hongwu Yuan
Texture features and stability have attracted much attention in biometric recognition. The inner-knuckle print is unique and hard to forge, so it is widely used in personal identity authentication, criminal investigation, and other fields. In recent years, the rapid development of deep learning has brought new opportunities for inner-knuckle-print recognition. We propose a deep inner-knuckle-print recognition method named LSKNet, which builds a lightweight Siamese network model and combines it with a robust cost function to realize efficient and accurate recognition. Compared with traditional methods and other deep learning methods, the network has lower model complexity and computational resource requirements, enabling it to run on lower hardware configurations. In addition, we concatenate the knuckle prints of all four fingers for fusion recognition. Experimental results demonstrate that this method achieves satisfactory results in inner-knuckle-print recognition.
Citations: 0
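A minimal sketch of a lightweight Siamese embedding branch with a contrastive cost, assuming PyTorch; the architecture, embedding size, and margin are illustrative assumptions rather than LSKNet's actual configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightSiamese(nn.Module):
    """One shared embedding branch applied to both images of a pair."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb_dim))

    def forward(self, a, b):
        return self.net(a), self.net(b)

def contrastive_loss(za, zb, same, margin=1.0):
    """same: 1.0 for genuine pairs (same finger), 0.0 for impostor pairs."""
    d = F.pairwise_distance(za, zb)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Usage:
model = LightSiamese()
za, zb = model(torch.randn(4, 1, 64, 128), torch.randn(4, 1, 64, 128))
loss = contrastive_loss(za, zb, torch.tensor([1.0, 0.0, 1.0, 0.0]))
```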
Fine-tuned Siamese neural network–based multimodal vein biometric system with hybrid firefly–particle swarm optimization
IF 1.1 · CAS Tier 4 · Computer Science
Journal of Electronic Imaging · Pub Date: 2024-08-01 · DOI: 10.1117/1.jei.33.4.043035
Gurunathan Velliangiri, Sudhakar Radhakrishnan
Recent advances in biometric recognition focus on vein-pattern-based person authentication. We present a multimodal biometric system using dorsal and finger vein images. By combining Siamese neural networks (SNNs) with hybrid firefly–particle swarm optimization (FF-PSO), we optimize finger and dorsal vein identification and classification; using FF-PSO to tune SNN parameters is an innovative hybrid optimization approach designed to address the complexities of vein pattern recognition. The proposed system is tested on two public databases: the SDUMLA-HMT finger vein dataset and the Dr. Badawi hand vein dataset. Efficacy is assessed with recall, accuracy, precision, F1 score, false acceptance rate, false rejection rate, and equal error rate. The experimental findings demonstrate that the proposed system, with the fine-tuned SNN, FF-PSO, and a preprocessing module, achieves an accuracy of 99.5%. The system is also compared with various existing state-of-the-art techniques.
Citations: 0
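A minimal sketch of a hybrid firefly-particle swarm optimizer of the general kind used here, assuming NumPy and a generic objective (e.g., validation loss as a function of hyperparameters); the update rules, coefficients, and bounds are illustrative textbook choices, not the paper's exact scheme:

```python
import numpy as np

def ff_pso(obj, dim, n=20, iters=60, w=0.7, c1=1.5, c2=1.5,
           beta0=1.0, gamma=1.0, alpha=0.05, bounds=(-5.0, 5.0), seed=1):
    """Minimize obj over a box: PSO velocities plus firefly attraction moves."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.array([obj(p) for p in x])
    for _ in range(iters):
        cost = np.array([obj(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[pcost.argmin()]
        # PSO step: inertia + cognitive + social terms.
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Firefly step: each particle drifts toward any brighter (lower-cost) one.
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:
                    r_sq = np.sum((x[i] - x[j]) ** 2)
                    x[i] += beta0 * np.exp(-gamma * r_sq) * (x[j] - x[i]) \
                            + alpha * (rng.random(dim) - 0.5)
        x = np.clip(x + v, lo, hi)
    return pbest[pcost.argmin()], pcost.min()

# Usage: minimize a sphere function as a stand-in for validation loss.
best_x, best_f = ff_pso(lambda p: float(np.sum(p ** 2)), dim=4)
```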
Joint merging and pruning: adaptive selection of better token compression strategy
IF 1.1 · CAS Tier 4 · Computer Science
Journal of Electronic Imaging · Pub Date: 2024-08-01 · DOI: 10.1117/1.jei.33.4.043045
Wei Peng, Liancheng Zeng, Lizhuo Zhang, Yue Shen
The vision transformer (ViT) is widely used for artificial intelligence tasks and has driven significant advances across computer vision. However, the quadratic interaction between tokens makes ViT models inefficient, greatly limiting their application in real scenarios. In recent years it has been observed that not all tokens contribute equally to the final prediction, so token compression methods have been proposed, mainly divided into token pruning and token merging. We argue that neither pruning alone (removing non-critical tokens) nor merging alone (collapsing similar tokens) is the optimal compression strategy. To overcome this, we propose joint merging and pruning (JMP), a token compression framework that adaptively selects the better strategy for each sample based on the similarity between critical and non-critical tokens. JMP effectively reduces computational complexity while maintaining model performance, requires no additional trainable parameters, and achieves a good balance between efficiency and performance. Taking DeiT-S as an example, JMP reduces floating-point operations by 35% and increases throughput by more than 45% while decreasing ImageNet accuracy by only 0.2%.
Citations: 0
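A minimal sketch of an adaptive merge-versus-prune decision over one sample's tokens, assuming PyTorch, with importance scores supplied externally (e.g., from class-token attention); the keep ratio, similarity threshold, and mean-similarity decision rule are illustrative assumptions, not JMP's published criterion:

```python
import torch
import torch.nn.functional as F

def jmp_compress(tokens, importance, keep_ratio=0.7, sim_thresh=0.5):
    """tokens: (N, C) for one sample; importance: (N,) saliency scores."""
    n_keep = max(1, int(tokens.size(0) * keep_ratio))
    order = importance.argsort(descending=True)
    crit, noncrit = tokens[order[:n_keep]], tokens[order[n_keep:]]
    if noncrit.size(0) == 0:
        return crit
    # Cosine similarity of each non-critical token to every critical token.
    sim = F.normalize(noncrit, dim=1) @ F.normalize(crit, dim=1).T  # (M, K)
    best_sim, best_idx = sim.max(dim=1)
    if best_sim.mean() > sim_thresh:
        # Merge: average each non-critical token into its nearest critical one.
        merged = crit.clone()
        merged.index_add_(0, best_idx, noncrit)
        counts = torch.ones(crit.size(0)).index_add_(
            0, best_idx, torch.ones(noncrit.size(0)))
        return merged / counts.unsqueeze(1)
    return crit  # Prune: drop non-critical tokens entirely.

# Usage with dummy data (DeiT-S-like token count and width):
out = jmp_compress(torch.randn(197, 384), torch.rand(197))
```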