Electronic Letters on Computer Vision and Image Analysis — Latest Publications

Cricket Video Highlight Generation Methods: A Review
Electronic Letters on Computer Vision and Image Analysis · Pub Date: 2022-09-13 · DOI: 10.5565/rev/elcvia.1465
Hansa Shingrakhia, Hetal Patel
Abstract: The extraction of key events from a video for the best representation of its contents is known as video summarization. In this study, the game of cricket is specifically considered, with important events such as boundaries, sixes, and wickets extracted. Cricket video highlight generation frameworks require extensive key-event identification; these key events can be identified by extracting audio, visual, and textual features from any cricket video. The prediction accuracy of cricket video summarization depends mainly on the game rules, the players' form and skill, and varying natural conditions. This paper provides a complete survey of the latest research in cricket video summarization methods, including a quantitative evaluation of the outcomes of existing frameworks. The review strongly recommends developing deep-learning-assisted video summarization approaches for cricket video, since their feature extraction and classification capability is more representative than conventional edge and texture features and classifiers. The scope of the analysis also includes future visions and research opportunities in cricket highlight generation.
Citations: 1
Deep Learning-based Lung Cancer Classification of CT Images using Augmented Convolutional Neural Networks
Electronic Letters on Computer Vision and Image Analysis · Pub Date: 2022-09-06 · DOI: 10.5565/rev/elcvia.1490
Bushara Ar
Abstract: Lung cancer is the second most prevalent and second most lethal cancer worldwide, for both women and men. This paper proposes applying machine learning and pattern classification to lung cancer detection and classification. Pattern classification algorithms assign input data to different classes based on characteristic features of the input. Early identification of lung cancer using pattern recognition can save lives by analyzing large numbers of Computed Tomography images. Convolutional Neural Networks have recently achieved remarkable results in various deep learning applications, including lung cancer detection. Data augmentation is deployed to improve the accuracy of the Convolutional Neural Network: suitable training samples are generated from the existing training set by transformations such as scaling, rotation, and contrast modification. The networks are assessed on the LIDC-IDRI database. The proposed work achieved an overall accuracy of 95%. Precision, recall, and F1 score are 0.93, 0.96, and 0.95 for benign test data, and 0.96, 0.93, and 0.95 for malignant test data. The proposed system compares favourably with other state-of-the-art approaches.
Citations: 5
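The augmentation step described above — generating extra training samples by rotation and contrast modification — can be sketched in a few lines. This is a minimal numpy illustration of the idea, not the authors' pipeline; the function name and parameters are chosen for this example.

```python
import numpy as np

def augment(image, contrast=1.2):
    """Return two simple augmented variants of a grayscale CT slice:
    a 90-degree rotation and a contrast-stretched copy (clipped to [0, 255])."""
    rotated = np.rot90(image)                                  # rotation
    mean = image.mean()
    contrasted = np.clip((image - mean) * contrast + mean, 0.0, 255.0)
    return rotated, contrasted

img = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "slice"
rot, con = augment(img)
```

In a real training loop, each original image would contribute several such variants per epoch, effectively enlarging the training set without new annotations.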
Multiple Secret Image Embedding in Dynamic ROI Keypoints based on a hybrid Speeded Up Scale Invariant Robust Features (h-SUSIRF) Algorithm
Electronic Letters on Computer Vision and Image Analysis · Pub Date: 2022-07-19 · DOI: 10.5565/rev/elcvia.1470
Suganthi Kumar, Rajkumar Soundrapandiyan
Abstract: This paper presents a robust, high-capacity video steganography framework using a hybrid Speeded Up Scale Invariant Robust Features (h-SUSIRF) keypoint detection algorithm. The method has two main objectives: (1) determining the dynamic Region of Interest (ROI) keypoints in video scenes and (2) embedding the secret data into the identified regions. The proposed h-SUSIRF keypoint detection scheme finds keypoints within the scenes; these keypoints are dilated to form the dynamic ROI keypoints. Finally, the secret images are embedded into the dynamic ROI keypoint locations of the scenes using the substitution method. The performance of the proposed method (PM) is evaluated using the standard metrics Structural Similarity Index Measure (SSIM), Capacity (Cp), and Bit Error Rate (BER); video quality is verified with the Video Quality Measure (VQM). To examine the efficacy of the PM, recent steganalysis schemes are applied to calculate the detection ratio, and the Receiver Operating Characteristic (ROC) curve is analyzed. The experimental analysis shows that the PM surpasses contemporary methods, achieving significant results in imperceptibility, capacity, and robustness with lower computational complexity.
Citations: 0
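The "substitution method" used for embedding at keypoint locations is commonly realized as least-significant-bit (LSB) replacement. The sketch below is an assumption-laden illustration of that idea on a toy frame — the paper's actual embedding at h-SUSIRF keypoints may differ in detail.

```python
import numpy as np

def embed_bits(frame, bits, locations):
    """Embed secret bits into the least-significant bit of a cover frame
    at the given (row, col) keypoint locations (substitution method)."""
    stego = frame.copy()
    for bit, (r, c) in zip(bits, locations):
        stego[r, c] = (stego[r, c] & 0xFE) | bit   # overwrite the LSB only
    return stego

def extract_bits(stego, locations):
    """Recover the embedded bits by reading the LSB at each location."""
    return [int(stego[r, c] & 1) for r, c in locations]

frame = np.full((8, 8), 128, dtype=np.uint8)   # toy cover frame
locs = [(1, 2), (3, 4), (5, 6)]                # stand-in "ROI keypoints"
secret = [1, 0, 1]
stego = embed_bits(frame, secret, locs)
```

Because only the LSB changes, each modified pixel differs from the cover by at most 1, which is what keeps the embedding imperceptible under metrics such as SSIM.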
Attention-based CNN-ConvLSTM for Handwritten Arabic Word Extraction
Electronic Letters on Computer Vision and Image Analysis · Pub Date: 2022-06-28 · DOI: 10.5565/rev/elcvia.1433
Takwa Ben Aïcha Gader, A. Echi
Abstract: Word extraction is one of the most critical steps in handwritten recognition systems. It is challenging for many reasons, such as the variability of handwriting styles, touching and overlapping characters, skew, and the presence of diacritics, ascenders, and descenders. In this work, we propose a deep-learning-based approach for handwritten Arabic word extraction. We use an Attention-based CNN-ConvLSTM (Convolutional Long Short-Term Memory) followed by a CTC (Connectionist Temporal Classification) function. First, the essential features of the text-line input image are extracted using an Attention-based Convolutional Neural Network (CNN). The extracted features and the text line's transcription are then passed to a ConvLSTM to learn a mapping between them. Finally, a CTC function learns the alignment between text-line images and their transcriptions automatically. We tested the proposed model on a complex dataset, the KFUPM Handwritten Arabic Text dataset (KHATT), which consists of complex patterns of handwritten Arabic text lines. The experimental results show the efficiency of the combination, with a word extraction success rate of 91.7%.
Citations: 0
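The CTC step mentioned above aligns frame-wise network outputs with a transcription without per-frame labels. At inference time, the standard greedy decoding collapses repeated labels and removes blanks; this small self-contained sketch shows that collapse rule (the training-time CTC loss itself is more involved).

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a per-frame best-path label sequence the way CTC decoding
    does: merge consecutive repeats, then drop blank symbols."""
    out, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return out

# frames: blank, 'a', 'a', blank, 'b', 'b', blank  (labels as ints)
path = [0, 1, 1, 0, 2, 2, 0]
decoded = ctc_greedy_decode(path)   # -> [1, 2]
```

Note that the blank symbol is what lets CTC represent genuinely repeated characters: the path `[1, 0, 1]` decodes to `[1, 1]`, while `[1, 1]` collapses to `[1]`.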
A neural network with competitive layers for character recognition
Electronic Letters on Computer Vision and Image Analysis · Pub Date: 2022-06-28 · DOI: 10.5565/rev/elcvia.1392
A. Goltsev, V. Gritsenko
Abstract: The structure and functioning mechanisms of a neural network with competitive layers are described. The network is intended to solve the character recognition task. It consists of several competitive layers of neurons, with the number of layers equal to the number of recognized classes. All neural layers have one-to-one correspondence with one another and with the input raster. The neurons of every layer have mutual lateral learning connections, whose weights are modified during the learning process. There is a competitive (inhibitory) relationship between all neural layers, realized by a "winner-take-all" (WTA) procedure whose aim is to select the layer with the highest level of neural activity. The network was validated in experiments on recognition of handwritten digits from the MNIST database. The experiments demonstrated an error rate slightly below 2%, which is not a state-of-the-art result but is compensated by fast data processing and a very simple structure and functioning mechanism.
Citations: 0
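The winner-take-all selection between class layers is simple to state concretely: sum the activity of each layer, pick the layer with the largest total, and suppress the rest. This numpy sketch illustrates that mechanism only; the lateral learning connections of the actual network are not modeled here.

```python
import numpy as np

def winner_take_all(layer_activities):
    """Select the class layer with the highest total neural activity;
    all other layers are suppressed (their activity is zeroed)."""
    totals = [a.sum() for a in layer_activities]
    winner = int(np.argmax(totals))
    suppressed = [a if i == winner else np.zeros_like(a)
                  for i, a in enumerate(layer_activities)]
    return winner, suppressed

# three class layers of four neurons each
layers = [np.array([0.1, 0.2, 0.1, 0.0]),
          np.array([0.5, 0.9, 0.4, 0.7]),   # most active -> predicted class
          np.array([0.3, 0.1, 0.2, 0.2])]
winner, suppressed = winner_take_all(layers)
```

Because one layer exists per class, the index of the winning layer is directly the predicted character class.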
Feature selection based on discriminative power under uncertainty for computer vision applications
Electronic Letters on Computer Vision and Image Analysis · Pub Date: 2022-06-28 · DOI: 10.5565/rev/elcvia.1361
Marwa Chakroun, Sonda Ammar Bouhamed, I. Kallel, B. Solaiman, H. Derbel
Abstract: Feature selection is a prolific research field that has been widely studied over recent decades and successfully applied to numerous computer vision systems; it mainly aims to reduce dimensionality and thus system complexity. Features do not have the same importance across the different classes: some serve class representation while others serve class separation. In this paper, a new feature selection method based on discriminative power is proposed to select relevant features under an uncertain framework, where the uncertainty is expressed through a possibility distribution. In an uncertain context, the method demonstrates its ability to select features that can both represent and discriminate between classes.
Citations: 1
Material Classification with a Transfer Learning based Deep Model on an Imbalanced Dataset using an Epochal Deming-Cycle Methodology
Electronic Letters on Computer Vision and Image Analysis · Pub Date: 2022-06-14 · DOI: 10.5565/rev/elcvia.1517
Marco Klaiber
Abstract: This work demonstrates that a transfer-learning-based deep learning model can perform unambiguous classification of microscopic images of material surfaces with a high degree of accuracy. A transfer-learning-enhanced deep model was combined with an innovative approach for eliminating noisy data, based on automatic selection using pixel sum values, and refined over successive epochs to develop and evaluate an effective classifier for microscopy images. The evaluated model achieved 91.54% accuracy on the dataset used and set new standards with the applied method. Care was also taken to balance accuracy against robustness. Building on this report, such identification of microscopy images could evolve to support material identification, suggesting potential applications in materials science and engineering.
Citations: 0
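The noisy-data elimination step — automatic selection using pixel sum values — can be pictured as discarding images whose total intensity is an outlier (near-empty or saturated frames). The sketch below is an assumed interpretation using a percentile band; the paper's exact selection rule is not specified here, and the thresholds are illustrative.

```python
import numpy as np

def filter_by_pixel_sum(images, low_frac=0.1, high_frac=0.9):
    """Discard images whose total pixel sum falls outside a percentile band:
    a crude automatic filter for near-empty or saturated microscopy shots."""
    sums = np.array([img.sum() for img in images])
    lo, hi = np.quantile(sums, [low_frac, high_frac])
    return [img for img, s in zip(images, sums) if lo <= s <= hi]

rng = np.random.default_rng(0)
normal = [rng.uniform(80, 120, (8, 8)) for _ in range(8)]  # typical frames
blank = [np.zeros((8, 8))]                                 # near-empty outlier
overexposed = [np.full((8, 8), 255.0)]                     # saturated outlier
kept = filter_by_pixel_sum(normal + blank + overexposed)
```

Filtering on a single scalar per image is cheap enough to rerun every epoch, which fits the iterative Deming-cycle refinement the paper describes.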
Pre-trained CNNs as Feature-Extraction Modules for Image Captioning
Electronic Letters on Computer Vision and Image Analysis · Pub Date: 2022-05-10 · DOI: 10.5565/rev/elcvia.1436
Muhammad Abdelhadie Al-Malla, Assef Jafar, Nada Ghneim
Abstract: In this work, we present a thorough experimental study of feature extraction using Convolutional Neural Networks (CNNs) for the task of image captioning in the context of deep learning. We perform a set of 72 experiments on 12 image classification CNNs pre-trained on the ImageNet dataset. The features are extracted from the last layer after removing the fully connected layer and fed into the captioning model. We use a unified captioning model with a fixed vocabulary size across all experiments to study the effect of changing the CNN feature extractor on image captioning quality. Scores are calculated using the standard metrics in image captioning. We find a strong relationship between the model structure and the image captioning dataset, and show that VGG models give the lowest-quality features for image captioning among the tested CNNs. Finally, we recommend a set of pre-trained CNNs for each image captioning evaluation metric to be optimised, and relate our results to previous work. To our knowledge, this is the most comprehensive comparison of feature extractors for image captioning.
Citations: 0
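"Extracting features from the last layer after removing the fully connected layer" typically means pooling the final convolutional feature maps into one vector. This numpy sketch shows only that pooling step on stand-in data; in practice the maps would come from a real pre-trained backbone (e.g. via a deep-learning framework), and the 512×7×7 shape is just a common example.

```python
import numpy as np

def global_average_pool(feature_maps):
    """Reduce a (channels, H, W) stack of convolutional feature maps to a
    single feature vector by averaging each map: the usual way a CNN's last
    convolutional block becomes a caption-model input once the fully
    connected head is removed."""
    return feature_maps.mean(axis=(1, 2))

# stand-in for the last conv layer of a pre-trained CNN: 512 maps of 7x7
maps = np.random.default_rng(1).standard_normal((512, 7, 7))
features = global_average_pool(maps)
```

The resulting fixed-length vector (here 512-dimensional) is what the unified captioning model consumes, which is why different backbones can be swapped in and compared fairly.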
Object Detection and Statistical Analysis of Microscopy Image Sequences
Electronic Letters on Computer Vision and Image Analysis · Pub Date: 2022-04-28 · DOI: 10.5565/rev/elcvia.1482
J. Gambini, Sasha Hurovitz, D. Chan, Rodrigo Ramele
Abstract: Confocal microscope images are widely used in medical diagnosis and research. The automatic interpretation of such images is important but challenging for image processing, since they are heavily contaminated with noise and have low contrast and low resolution. This work addresses the problem of analyzing the penetration velocity of a chemotherapy drug in an ocular tumor called retinoblastoma. Primary retinoblastoma cell cultures are exposed to the drug topotecan, and the penetration evolution is documented in sequences of microscopy images. The penetration rate of topotecan can be quantified because the drug fluoresces under laser excitation, which is captured by the camera. To estimate the topotecan penetration time over the whole retinoblastoma cell culture, a procedure based on an active contour detection algorithm, a neural network classifier, and a validated statistical model is proposed. This inference model allows the penetration time to be estimated. Results show that the mean penetration time depends strongly on tumorsphere size and on the chemotherapeutic treatment the patient previously received.
Citations: 1
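Since the drug's fluorescence is what the camera records, a penetration time can in principle be read off the intensity-versus-time curve. The sketch below is a deliberately simplified stand-in for the paper's statistical inference model: it just thresholds a saturating intensity curve at a fraction of its plateau, and all names and parameters are illustrative.

```python
import numpy as np

def penetration_time(intensity, frame_interval_s=60.0, frac=0.9):
    """Estimate drug penetration time as the first frame at which mean
    fluorescence reaches a given fraction of its plateau value."""
    threshold = frac * intensity.max()
    first = int(np.argmax(intensity >= threshold))  # index of first crossing
    return first * frame_interval_s

# toy saturating intensity curve: I(t) = 1 - exp(-t/5), 30 frames, 1 frame/min
t = np.arange(30)
curve = 1.0 - np.exp(-t / 5.0)
t_pen = penetration_time(curve)
```

A real analysis would first segment the tumorsphere (the paper's active contour step) so the intensity is averaged over the sphere only, not the background.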
Application of Computer Vision to Egg Detection on a Production Line in Real Time
Electronic Letters on Computer Vision and Image Analysis · Pub Date: 2022-02-01 · DOI: 10.5565/rev/elcvia.1390
Maciej Ulaszewski, R. Janowski, Andrzej Janowski
Abstract: In this paper we investigate the application of computer vision to the problem of real-time egg detection on a production line. For this purpose, dedicated software was designed and implemented that exploits the advantages of neural network and template matching approaches. To verify the correctness of the developed software and to confirm its applicability to real-life problems, a number of carefully designed experiments were carried out. These experiments reveal which approaches are best suited to supporting real-time egg detection on a production line.
Citations: 0
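Of the two approaches compared above, template matching is the simpler to illustrate: slide a template across the image and score each position by normalized cross-correlation. This is a toy numpy sketch with a synthetic "egg" blob; a production system would use an optimized routine such as OpenCV's `cv2.matchTemplate` rather than this double loop.

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the top-left corner and
    score of the best match under normalized cross-correlation."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            p = image[r:r + th, c:c + tw] - image[r:r + th, c:c + tw].mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

image = np.zeros((20, 20))
image[5:9, 7:11] = 1.0                        # a bright "egg" blob
template = np.zeros((6, 6))
template[1:5, 1:5] = 1.0                      # egg template with a margin
pos, score = match_template(image, template)
```

Normalizing by both the patch and template energy makes the score illumination-invariant, with a perfect match yielding a correlation of 1.0.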