IET Computer Vision: Latest Publications

Guest Editorial: Learning from limited annotations for computer vision tasks
IF 1.7 | Zone 4 | Computer Science
IET Computer Vision Pub Date : 2023-08-16 DOI: 10.1049/cvi2.12229
Yazhou Yao, Wenguan Wang, Qiang Wu, Dongfang Liu, Jin Zheng
{"title":"Guest Editorial: Learning from limited annotations for computer vision tasks","authors":"Yazhou Yao,&nbsp;Wenguan Wang,&nbsp;Qiang Wu,&nbsp;Dongfang Liu,&nbsp;Jin Zheng","doi":"10.1049/cvi2.12229","DOIUrl":"https://doi.org/10.1049/cvi2.12229","url":null,"abstract":"<p>The past decade has witnessed remarkable achievements in computer vision, owing to the fast development of deep learning. With the advancement of computing power and deep learning algorithms, we can process and apply millions or even hundreds of millions of large-scale data to train robust and advanced deep learning models. In spite of the impressive success, current deep learning methods tend to rely on massive annotated training data and lack the capability of learning from limited exemplars.</p><p>However, constructing a million-scale annotated dataset like ImageNet is time-consuming, labour-intensive and even infeasible in many applications. In certain fields, very limited annotated examples can be gathered due to various reasons such as privacy or ethical issues. Consequently, one of the pressing challenges in computer vision is to develop approaches that are capable of learning from limited annotated data. The purpose of this Special Issue is to collect high-quality articles on learning from limited annotations for computer vision tasks (e.g. image classification, object detection, semantic segmentation, instance segmentation and many others), publish new ideas, theories, solutions and insights on this topic and showcase their applications.</p><p>In this Special Issue we received 29 papers, all of which underwent peer review. Of the 29 originally submitted papers, 9 have been accepted.</p><p>The nine accepted papers can be clustered into two main categories: theoretical and applications. The papers that fall into the first category are by Liu et al., Li et al. and He et al. The second category of papers offers a direct solution to various computer vision tasks. These papers are by Ma et al., Wu et al., Rao et al., Sun et al., Hou et al. and Gong et al. A brief presentation of each of the papers in this Special Issue follows.</p><p>All of the papers selected for this Special Issue show that the field of learning from limited annotations for computer vision tasks is steadily moving forward. The possibility of a weakly supervised learning paradigm will remain a source of inspiration for new techniques in the years to come.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"17 5","pages":"509-512"},"PeriodicalIF":1.7,"publicationDate":"2023-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12229","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50151226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Point completion by a Stack-Style Folding Network with multi-scaled graphical features
IF 1.7 | Zone 4 | Computer Science
IET Computer Vision Pub Date : 2023-08-11 DOI: 10.1049/cvi2.12196
Yunbo Rao, Ping Xu, Shaoning Zeng, Jianping Gou
{"title":"Point completion by a Stack-Style Folding Network with multi-scaled graphical features","authors":"Yunbo Rao,&nbsp;Ping Xu,&nbsp;Shaoning Zeng,&nbsp;Jianping Gou","doi":"10.1049/cvi2.12196","DOIUrl":"https://doi.org/10.1049/cvi2.12196","url":null,"abstract":"<p>Point cloud completion is prevalent due to the insufficient results from current point cloud acquisition equipments, where a large number of point data failed to represent a relatively complete shape. Existing point cloud completion algorithms, mostly encoder-decoder structures with grids transform (also presented as folding operation), can hardly obtain a persuasive representation of input clouds due to the issue that their bottleneck-shape result cannot tell a precise relationship between the global and local structures. For this reason, this article proposes a novel point cloud completion model based on a Stack-Style Folding Network (SSFN). Firstly, to enhance the deep latent feature extraction, SSFN enhances the exploitation of shape feature extractor by integrating both low-level point feature and high-level graphical feature. Next, a precise presentation is obtained from a high dimensional semantic space to improve the reconstruction ability. Finally, a refining module is designed to make a more evenly distributed result. Experimental results shows that our SSFN produces the most promising results of multiple representative metrics with a smaller scale parameters than current models.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"17 5","pages":"576-585"},"PeriodicalIF":1.7,"publicationDate":"2023-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12196","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50128438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
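The folding operation the abstract refers to is the standard grid-deformation decoder from FoldingNet-style models: a global codeword is concatenated with 2D grid coordinates and pushed through a shared point-wise MLP to produce 3D points, and stacking such blocks refines the output. The sketch below illustrates only that generic operation, not the authors' SSFN; the class name, layer sizes and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FoldingBlock(nn.Module):
    """One generic folding step: deform input coordinates into 3D points,
    conditioned on a global shape codeword (illustrative, not the SSFN layers)."""
    def __init__(self, code_dim=512, in_dim=2, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(code_dim + in_dim, hidden, 1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 1), nn.ReLU(),
            nn.Conv1d(hidden, 3, 1),
        )

    def forward(self, codeword, points):
        # codeword: (B, code_dim), points: (B, in_dim, N)
        cond = codeword.unsqueeze(-1).expand(-1, -1, points.size(-1))
        return self.mlp(torch.cat([cond, points], dim=1))   # (B, 3, N)

# Stacking two folding blocks: 2D grid -> coarse points -> refined points.
B, N = 4, 2048
grid = torch.rand(B, 2, N)        # 2D grid seeds
codeword = torch.randn(B, 512)    # global shape feature from an encoder (placeholder)
coarse = FoldingBlock(in_dim=2)(codeword, grid)
refined = FoldingBlock(in_dim=3)(codeword, coarse)
print(refined.shape)              # torch.Size([4, 3, 2048])
```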
Low-rank preserving embedding regression for robust image feature extraction
IF 1.7 | Zone 4 | Computer Science
IET Computer Vision Pub Date : 2023-08-08 DOI: 10.1049/cvi2.12228
Tao Zhang, Chen-Feng Long, Yang-Jun Deng, Wei-Ye Wang, Si-Qiao Tan, Heng-Chao Li
{"title":"Low-rank preserving embedding regression for robust image feature extraction","authors":"Tao Zhang,&nbsp;Chen-Feng Long,&nbsp;Yang-Jun Deng,&nbsp;Wei-Ye Wang,&nbsp;Si-Qiao Tan,&nbsp;Heng-Chao Li","doi":"10.1049/cvi2.12228","DOIUrl":"10.1049/cvi2.12228","url":null,"abstract":"<p>Although low-rank representation (LRR)-based subspace learning has been widely applied for feature extraction in computer vision, how to enhance the discriminability of the low-dimensional features extracted by LRR based subspace learning methods is still a problem that needs to be further investigated. Therefore, this paper proposes a novel low-rank preserving embedding regression (LRPER) method by integrating LRR, linear regression, and projection learning into a unified framework. In LRPER, LRR can reveal the underlying structure information to strengthen the robustness of projection learning. The robust metric <i>L</i><sub>2,1</sub>-norm is employed to measure the low-rank reconstruction error and regression loss for moulding the noise and occlusions. An embedding regression is proposed to make full use of the prior information for improving the discriminability of the learned projection. In addition, an alternative iteration algorithm is designed to optimise the proposed model, and the computational complexity of the optimisation algorithm is briefly analysed. The convergence of the optimisation algorithm is theoretically and numerically studied. At last, extensive experiments on four types of image datasets are carried out to demonstrate the effectiveness of LRPER, and the experimental results demonstrate that LRPER performs better than some state-of-the-art feature extraction methods.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 1","pages":"124-140"},"PeriodicalIF":1.7,"publicationDate":"2023-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12228","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46800153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
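The L2,1-norm used for both the reconstruction error and the regression loss is the sum of the l2 norms of a matrix's rows, which keeps any single corrupted sample from dominating the objective. The toy sketch below only illustrates that norm on placeholder residuals; the variable shapes and the residual forms X - ZX and Y - XP are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def l21_norm(M):
    """L2,1-norm: sum of the l2 norms of the rows of M.
    Row-wise aggregation limits the influence of any one corrupted sample."""
    return np.sum(np.linalg.norm(M, axis=1))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))          # 100 samples x 50 features (placeholder data)
Y = rng.standard_normal((100, 10))          # target / label matrix (placeholder)
Z = 0.01 * rng.standard_normal((100, 100))  # low-rank representation coefficients (placeholder)
P = rng.standard_normal((50, 10))           # projection / regression matrix (placeholder)

recon_error = l21_norm(X - Z @ X)   # robust low-rank reconstruction error
reg_loss = l21_norm(Y - X @ P)      # robust regression loss
print(recon_error, reg_loss)
```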
Visual privacy behaviour recognition for social robots based on an improved generative adversarial network
IF 1.7 | Zone 4 | Computer Science
IET Computer Vision Pub Date : 2023-08-04 DOI: 10.1049/cvi2.12231
Guanci Yang, Jiacheng Lin, Zhidong Su, Yang Li
{"title":"Visual privacy behaviour recognition for social robots based on an improved generative adversarial network","authors":"Guanci Yang,&nbsp;Jiacheng Lin,&nbsp;Zhidong Su,&nbsp;Yang Li","doi":"10.1049/cvi2.12231","DOIUrl":"10.1049/cvi2.12231","url":null,"abstract":"<p>Although social robots equipped with visual devices may leak user information, countermeasures for ensuring privacy are not readily available, making visual privacy protection problematic. In this article, a semi-supervised learning algorithm is proposed for visual privacy behaviour recognition based on an improved generative adversarial network for social robots; it is called PBR-GAN. A 9-layer residual generator network enhances the data quality, and a 10-layer discriminator network strengthens the feature extraction. A tailored objective function, loss function, and strategy are proposed to dynamically adjust the learning rate to guarantee high performance. A social robot platform and architecture for visual privacy recognition and protection are implemented. The recognition accuracy of the proposed PBR-GAN is compared with Inception_v3, SS-GAN, and SF-GAN. The average recognition accuracy of the proposed PBR-GAN is 85.91%, which is improved by 3.93%, 9.91%, and 1.73% compared with the performance of Inception_v3, SS-GAN, and SF-GAN respectively. Through a case study, seven situations are considered related to privacy at home, and develop training and test datasets with 8,720 and 1,280 images, respectively, are developed. The proposed PBR-GAN recognises the designed visual privacy information with an average accuracy of 89.91%.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 1","pages":"110-123"},"PeriodicalIF":1.7,"publicationDate":"2023-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12231","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47731526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
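In a typical semi-supervised GAN set-up of the kind the abstract describes, the discriminator doubles as the classifier: it outputs K behaviour classes plus one extra "fake" class, so unlabelled and generated images can both contribute to training. The sketch below shows that generic pattern with a residual block and a (K+1)-way head; it is not the authors' 9/10-layer PBR-GAN architecture, and every layer size here is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual block of the kind used in residual generators."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class SemiSupervisedDiscriminator(nn.Module):
    """Discriminator that also classifies: K privacy-behaviour classes + 1 'fake' class."""
    def __init__(self, num_classes=7, ch=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            ResidualBlock(2 * ch),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(2 * ch, num_classes + 1)

    def forward(self, x):
        return self.head(self.features(x))

imgs = torch.randn(2, 3, 64, 64)                         # placeholder camera frames
logits = SemiSupervisedDiscriminator(num_classes=7)(imgs)
print(logits.shape)                                      # torch.Size([2, 8])
```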
Determining the proper number of proposals for individual images
IF 1.7 | Zone 4 | Computer Science
IET Computer Vision Pub Date : 2023-08-03 DOI: 10.1049/cvi2.12230
Zihang He, Yong Li
{"title":"Determining the proper number of proposals for individual images","authors":"Zihang He,&nbsp;Yong Li","doi":"10.1049/cvi2.12230","DOIUrl":"10.1049/cvi2.12230","url":null,"abstract":"<p>The region proposal network is indispensable to two-stage object detection methods. It generates a fixed number of proposals that are to be classified and regressed by detection heads to produce detection boxes. However, the fixed number of proposals may be too large when an image contains only a few objects but too small when it contains much more objects. Considering this, the authors explored determining a proper number of proposals according to the number of objects in an image to reduce the computational cost while improving the detection accuracy. Since the number of ground truth objects is unknown at the inference stage, the authors designed a simple but effective module to predict the number of foreground regions, which will be substituted for the number of objects for determining the proposal number. Experimental results of various two-stage detection methods on different datasets, including MS-COCO, PASCAL VOC, and CrowdHuman showed that equipping the designed module increased the detection accuracy while decreasing the FLOPs of the detection head. For example, experimental results on the PASCAL VOC dataset showed that applying the designed module to Libra R-CNN and Grid R-CNN increased over 1.5 AP<sub>50</sub> while decreasing the FLOPs of detection heads from 28.6 G to nearly 9.0 G.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 1","pages":"141-149"},"PeriodicalIF":1.7,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12230","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43408162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
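One simple way to realise the idea in the abstract is a tiny head that regresses the expected number of foreground regions from a feature map and uses that estimate, clamped to a sensible range, as the per-image proposal budget. The sketch below is only an illustrative stand-in for the authors' module; the layer sizes, the clamping range and the name ForegroundCountPredictor are assumptions.

```python
import torch
import torch.nn as nn

class ForegroundCountPredictor(nn.Module):
    """Tiny head that regresses the number of foreground regions from a feature map
    (illustrative stand-in for the module described in the abstract)."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1), nn.Softplus(),     # keep the predicted count non-negative
        )

    def forward(self, feat):
        return self.net(feat).squeeze(-1)        # (B,) estimated object counts

feat = torch.randn(2, 256, 50, 50)               # one FPN-level feature map (placeholder)
count = ForegroundCountPredictor()(feat)
# Use the estimate, clamped to a sane range, as the per-image proposal budget.
num_proposals = count.round().clamp(min=100, max=1000).long()
print(num_proposals)
```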
Zero-shot temporal event localisation: Label-free, training-free, domain-free
IF 1.7 | Zone 4 | Computer Science
IET Computer Vision Pub Date : 2023-08-03 DOI: 10.1049/cvi2.12224
Li Sun, Ping Wang, Liuan Wang, Jun Sun, Takayuki Okatani
{"title":"Zero-shot temporal event localisation: Label-free, training-free, domain-free","authors":"Li Sun,&nbsp;Ping Wang,&nbsp;Liuan Wang,&nbsp;Jun Sun,&nbsp;Takayuki Okatani","doi":"10.1049/cvi2.12224","DOIUrl":"https://doi.org/10.1049/cvi2.12224","url":null,"abstract":"<p>Temporal event localisation (TEL) has recently attracted increasing attention due to the rapid development of video platforms. Existing methods are based on either fully/weakly supervised or unsupervised learning, and thus they rely on expensive data annotation and time-consuming training. Moreover, these models, which are trained on specific domain data, limit the model generalisation to data distribution shifts. To cope with these difficulties, the authors propose a zero-shot TEL method that can operate without training data or annotations. Leveraging large-scale vision and language pre-trained models, for example, CLIP, we solve the two key problems: (1) how to find the relevant region where the event is likely to occur; (2) how to determine event duration after we find the relevant region. Query guided optimisation for local frame relevance relying on the query-to-frame relationship is proposed to find the most relevant frame region where the event is most likely to occur. Proposal generation method relying on the frame-to-frame relationship is proposed to determine the event duration. The authors also propose a greedy event sampling strategy to predict multiple durations with high reliability for the given event. The authors’ methodology is unique, offering a label-free, training-free, and domain-free approach. It enables the application of TEL purely at the testing stage. The practical results show it achieves competitive performance on the standard Charades-STA and ActivityCaptions datasets.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"17 5","pages":"599-613"},"PeriodicalIF":1.7,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12224","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50123617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
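The core quantity behind query-to-frame relevance is simply the similarity between a text-query embedding and per-frame image embeddings from a joint vision-language space such as CLIP's. The sketch below replaces the paper's query-guided optimisation and proposal generation with the plainest possible baseline: cosine similarity scores followed by an exhaustive search for the highest-scoring contiguous window. Random tensors stand in for CLIP features, and the window search is an illustrative assumption, not the authors' method.

```python
import torch
import torch.nn.functional as F

def query_frame_relevance(frame_feats, query_feat):
    """Cosine similarity between each frame embedding and the query embedding.
    In the paper these come from a pre-trained CLIP model; random tensors stand in here."""
    return F.normalize(frame_feats, dim=-1) @ F.normalize(query_feat, dim=0)

def best_segment(scores, min_len=3):
    """Exhaustively pick the contiguous window with the highest mean relevance."""
    T = scores.numel()
    best, best_span = -float("inf"), (0, min_len)
    for s in range(T):
        for e in range(s + min_len, T + 1):
            m = scores[s:e].mean().item()
            if m > best:
                best, best_span = m, (s, e)
    return best_span

T, d = 64, 512
frame_feats = torch.randn(T, d)   # stand-in for per-frame CLIP image features
query_feat = torch.randn(d)       # stand-in for the CLIP text feature of the query
scores = query_frame_relevance(frame_feats, query_feat)
print("predicted event span (frame indices):", best_segment(scores))
```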
Improving object detection by enhancing the effect of localisation quality evaluation on detection confidence
IF 1.7 | Zone 4 | Computer Science
IET Computer Vision Pub Date : 2023-07-31 DOI: 10.1049/cvi2.12227
Zuyi Wang, Wei Zhao, Li Xu
{"title":"Improving object detection by enhancing the effect of localisation quality evaluation on detection confidence","authors":"Zuyi Wang,&nbsp;Wei Zhao,&nbsp;Li Xu","doi":"10.1049/cvi2.12227","DOIUrl":"10.1049/cvi2.12227","url":null,"abstract":"<p>The one-stage object detector has been widely applied in many computer vision applications due to its high detection efficiency and simple framework. However, one-stage detectors heavily rely on Non-maximum Suppression to remove the duplicated predictions for the same objects, and the detectors produce detection confidence to measure the quality of those predictions. The localisation quality is an important factor to evaluate the predicted bounding boxes, but its role has not been fully utilised in previous works. To alleviate the problem, the Quality Prediction Block (QPB), a lightweight sub-network, is designed by the authors, which strengthens the effect of localisation quality evaluation on detection confidence by leveraging the features of predicted bounding boxes. The QPB is simple in structure and applies to different forms of detection confidence. Extensive experiments are conducted on the public benchmarks, MS COCO, PASCAL VOC and Berkeley DeepDrive. The results demonstrate the effectiveness of our method in the detectors with various forms of detection confidence. The proposed approach also achieves better performance in the stronger one-stage detectors.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 1","pages":"97-109"},"PeriodicalIF":1.7,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12227","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44989905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
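The general pattern the abstract builds on is quality-aware confidence: a small head predicts a localisation-quality score (for example, an IoU estimate) from box features and folds it into the classification score that NMS ranks. The sketch below shows that generic pattern only; it is not the exact QPB design, and the feature dimension and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class QualityHead(nn.Module):
    """Lightweight head that predicts a localisation-quality score for each predicted box
    and folds it into the confidence used by NMS (generic pattern, not the exact QPB)."""
    def __init__(self, in_dim=256):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, box_feats, cls_scores):
        quality = self.fc(box_feats).squeeze(-1)   # (N,) predicted quality in [0, 1]
        return cls_scores * quality                # quality-aware detection confidence

box_feats = torch.randn(100, 256)   # features pooled from 100 predicted boxes (placeholder)
cls_scores = torch.rand(100)        # raw classification confidence (placeholder)
conf = QualityHead()(box_feats, cls_scores)   # rank boxes by this confidence in NMS
print(conf.shape)
```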
Lite-weight semantic segmentation with AG self-attention
IF 1.7 | Zone 4 | Computer Science
IET Computer Vision Pub Date : 2023-07-28 DOI: 10.1049/cvi2.12225
Bing Liu, Yansheng Gao, Hai Li, Zhaohao Zhong, Hongwei Zhao
{"title":"Lite-weight semantic segmentation with AG self-attention","authors":"Bing Liu,&nbsp;Yansheng Gao,&nbsp;Hai Li,&nbsp;Zhaohao Zhong,&nbsp;Hongwei Zhao","doi":"10.1049/cvi2.12225","DOIUrl":"10.1049/cvi2.12225","url":null,"abstract":"<p>Due to the large computational and GPUs memory cost of semantic segmentation, some works focus on designing a lite weight model to achieve a good trade-off between computational cost and accuracy. A common method is to combined CNN and vision transformer. However, these methods ignore the contextual information of multi receptive fields. And existing methods often fail to inject detailed information losses in the downsampling of multi-scale feature. To fix these issues, we propose AG Self-Attention, which is Enhanced Atrous Self-Attention (EASA), and Gate Attention. AG Self-Attention adds the contextual information of multi receptive fields into the global semantic feature. Specifically, the Enhanced Atrous Self-Attention uses weight shared atrous convolution with different atrous rates to get the contextual information under the specific different receptive fields. Gate Attention introduces gating mechanism to inject detailed information into the global semantic feature and filter detailed information by producing “fusion” gate and “update” gate. In order to prove our insight. We conduct numerous experiments in common semantic segmentation datasets, consisting of ADE20 K, COCO-stuff, PASCAL Context, Cityscapes, to show that our method achieves state-of-the-art performance and achieve a good trade-off between computational cost and accuracy.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 1","pages":"72-83"},"PeriodicalIF":1.7,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12225","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45119461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
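The two ingredients named in the abstract can be illustrated with a very small sketch: one 3x3 kernel reused at several dilation rates to collect multi-receptive-field context, and a pair of sigmoid gates that decide how much detail to mix into the global semantic feature. This is a rough analogue under assumed shapes and layer choices, not the published EASA or Gate Attention modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedAtrousContext(nn.Module):
    """One 3x3 kernel applied at several atrous rates (weights shared across rates)
    to gather multi-receptive-field context -- a rough analogue of the EASA idea."""
    def __init__(self, ch, rates=(1, 2, 4)):
        super().__init__()
        self.weight = nn.Parameter(0.02 * torch.randn(ch, ch, 3, 3))
        self.rates = rates

    def forward(self, x):
        outs = [F.conv2d(x, self.weight, padding=r, dilation=r) for r in self.rates]
        return torch.stack(outs, dim=0).mean(dim=0)

class GateFusion(nn.Module):
    """Sigmoid 'fusion' and 'update' gates that inject detail features into the
    global semantic feature -- a rough analogue of the Gate Attention module."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        self.update = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, semantic, detail):
        z = torch.cat([semantic, detail], dim=1)
        f = torch.sigmoid(self.fuse(z))       # which detail features to let through
        u = torch.sigmoid(self.update(z))     # how much of the semantic path to keep
        return u * semantic + (1.0 - u) * (f * detail)

semantic = torch.randn(2, 64, 32, 32)         # low-resolution global semantic feature
detail = torch.randn(2, 64, 128, 128)         # high-resolution detail feature
context = SharedAtrousContext(64)(semantic)
detail_ds = F.adaptive_avg_pool2d(detail, context.shape[-2:])   # align spatial sizes
out = GateFusion(64)(context, detail_ds)
print(out.shape)                              # torch.Size([2, 64, 32, 32])
```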