International Journal of Image and Graphics — Latest Articles

Remote Sensing Pansharpening with TV-H−1 Decomposition and PSO-Based Adaptive Weighting Method
IF 1.6
International Journal of Image and Graphics Pub Date : 2023-07-25 DOI: 10.1142/s021946782450061x
Dharaj. Sangani, R. Thakker, S. Panchal, Rajesh Gogineni
Abstract: In remote sensing, owing to the limitations of existing sensors and the tradeoff between signal-to-noise ratio (SNR) and instantaneous field of view (IFOV), it is difficult to obtain a single image with both good spectral and good spatial resolution. Pansharpening (PS) sharpens multispectral (MS) images by extracting structural and edge information from a panchromatic (PAN) image. Multiscale decomposition methods, which split an image into sub-bands, suffer from ringing artifacts, so the resulting image appears blurred and misregistered. The proposed method avoids this drawback by decomposing the PAN and four-band MS images into cartoon and texture components with a total variation (TV)-H−1 model. The particle swarm optimization (PSO) algorithm finds the optimum weights for fusing the texture and cartoon details of the PAN and MS images. The method is validated at both full scale and reduced scale, and its robustness is tested on different geographical areas, including hilly, urban, and vegetated regions. Visual analysis and quality metrics show the proposed method to be effective compared with traditional approaches.
Citations: 0
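As an illustration of the PSO-based weighting idea in the entry above, here is a minimal sketch (not the authors' implementation): a scalar particle swarm searches for the weight that blends a texture component into a cartoon base. The toy images, the quadratic objective, and all PSO hyper-parameters are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_scalar(objective, n_particles=20, iters=60, lo=0.0, hi=1.0):
    # Minimal particle swarm over a single fusion weight in [lo, hi].
    x = rng.uniform(lo, hi, n_particles)      # positions
    v = np.zeros(n_particles)                 # velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()]               # global best
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()]
    return g

# Toy stand-in: fuse a "cartoon" and a "texture" component with weight w,
# scoring against a synthetic reference image.
cartoon = np.full((8, 8), 0.5)
texture = rng.standard_normal((8, 8)) * 0.1
reference = cartoon + 0.3 * texture

def objective(w):
    fused = cartoon + w * texture
    return float(((fused - reference) ** 2).mean())

w_best = pso_scalar(objective)
print(round(w_best, 2))  # should land near the true mixing weight 0.3
```

In the paper, the objective would instead be a pansharpening quality score on the fused PAN/MS components rather than this synthetic MSE.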
Automatic Breast Mass Lesion Detection in Mammogram Image
IF 1.6
International Journal of Image and Graphics Pub Date : 2023-07-22 DOI: 10.1142/s0219467824500566
R. Bania, A. Halder
Abstract: Mammography is one of the most successful techniques for breast cancer screening and for detecting breast lesions. Detection of the region of interest (ROI), where possible abnormalities may be present, is the backbone of any computer-aided detection or diagnosis (CADx) system. This paper proposes a computational model that detects breast mass lesions in mammogram images to assist a CADx system. First, the pectoral muscles are removed from the mammograms as a pre-processing step. Then an automatic thresholding scheme, together with standard image processing techniques, ranks the different breast tissue regions to locate the suspected region and refine the subsequent segmentation task. A seeded region growing approach with an automatic seed selection criterion segments the ROI from the suspected region. The model requires little user intervention, since most parameters are computed automatically. Its performance is compared with four other methods using six evaluation metrics: Jaccard and Dice coefficients, relative error, segmentation accuracy, error, and the Fowlkes–Mallows index (FMI). The model is tested on 57 mammogram images covering four different cases, collected from a publicly available benchmark database, and evaluated both qualitatively and quantitatively. The best Dice coefficient, Jaccard coefficient, accuracy, error, and FMI values observed are 0.9506, 0.9471, 95.62%, 4.38%, and 0.932, respectively. The experimental results show the model's superiority over the compared state-of-the-art methods.
Citations: 0
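Seeded region growing, the core segmentation step named in the abstract above, can be sketched as a 4-connected flood fill under an intensity tolerance. This is a generic textbook version, not the paper's seed-selection variant; the tiny test image and the tolerance value are invented for illustration.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    # 4-connected seeded region growing: absorb neighbours whose intensity
    # differs from the seed intensity by at most `tol`.
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(int(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

# A bright 3x3 "lesion" on a dark background.
img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 200
mask = region_grow(img, seed=(3, 3), tol=10)
print(mask.sum())  # 9: only the bright block is grown
```

In the paper the seed is chosen automatically inside the ranked suspected region; here it is given by hand.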
Fine_Denseiganet: Automatic Medical Image Classification in Chest CT Scan Using Hybrid Deep Learning Framework
IF 1.6
International Journal of Image and Graphics Pub Date : 2023-07-22 DOI: 10.1142/s0219467825500044
Hemlata Sahu, R. Kashyap
Abstract: Medical image classification is one of the most significant tasks in computer-aided diagnosis. In the era of modern healthcare, digitalized medical images play a crucial role in medical image analysis. Accurate disease recognition from medical computed tomography (CT) images remains challenging, yet it is important for rendering effective treatment to patients. COVID-19 is highly contagious and leads to a rapid increase in infected individuals, while RT-PCR kits suffer from a high false negative rate (FNR) and limited availability. Chest CT scans therefore play an important role in diagnosing and screening COVID-19 infections. However, manual examination of CT scans by radiologists is time-consuming, and reviewing each individual CT image may not be feasible in emergencies, so automated COVID-19 detection with AI-based models is needed. This work presents effective, automatic deep learning (DL)-based COVID-19 detection from chest CT images. The data are first pre-processed with a Spatial Weighted Bilateral Filter (SWBF) to remove unwanted distortions. Deep features are extracted with a Fine_Dense Convolutional Network (Fine_DenseNet), whose softmax layer is replaced by an Improved Generative Adversarial Network_Artificial Hummingbird (IGAN_AHb) model so that the network can be trained on both labeled and unlabeled data. The network loss is optimized with the Artificial Hummingbird (AHb) optimization algorithm. The proposed DL model (Fine_DenseIGANet) performs automated multi-class classification of COVID-19 from CT scan images and attains a classification accuracy of 95.73%, superior to other DL models.
Citations: 1
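The pre-processing step above uses a Spatial Weighted Bilateral Filter; the SWBF variant itself is not specified here, so the sketch below shows the plain bilateral filter it builds on: each pixel is replaced by an average weighted by both spatial closeness and intensity similarity, which smooths noise while preserving edges. Image size, noise level, and filter parameters are illustrative assumptions.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    # Plain bilateral filter: combine a spatial Gaussian with an intensity
    # ("range") Gaussian so that averaging does not cross strong edges.
    h, w = img.shape
    pad = np.pad(img.astype(float), radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.empty((h, w), dtype=float)
    for r in range(h):
        for c in range(w):
            patch = pad[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
            rangew = np.exp(-(patch - img[r, c]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rangew
            out[r, c] = (wgt * patch).sum() / wgt.sum()
    return out

# Noisy step edge: the filter should flatten the noise but keep the edge.
rng = np.random.default_rng(1)
step = np.hstack([np.zeros((8, 4)), np.full((8, 4), 100.0)])
noisy = step + rng.normal(0, 5, step.shape)
smooth = bilateral_filter(noisy)
```

The spatially weighted variant in the paper presumably adapts these weights further; that adaptation is not reproduced here.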
Aspect-Based Sentiment Analysis Using Fabricius Ringlet-Based Hybrid Deep Learning for Online Reviews
IF 1.6
International Journal of Image and Graphics Pub Date : 2023-07-22 DOI: 10.1142/s0219467825500056
Santoshi Kumari, T. P. Pushphavathi
Abstract: Aspect-based sentiment analysis (ABSA) of online reviews identifies the polarity of a given review. Many neural-network methods have been introduced for ABSA, but most fail to exploit contextual information and thus lose accuracy. This research therefore proposes an optimized deep learning method for aspect detection and polarity identification in online reviews: the deep learning classifiers are trained with the proposed Fabricius ringlet optimization (FRO) algorithm, which reduces the training loss and thereby improves the accuracy of sentiment polarity prediction. FRO hybridizes the feeding behavior of the Fabricius and the ringlet to determine the global best solution; tuning the classifier's weights and biases with it minimizes the loss during training and enhances the accuracy of aspect extraction and polarity prediction. Compared with the existing approaches, the proposed FRO-based hybrid deep learning method achieves accuracy, sensitivity, and specificity of 87.06%, 90.83%, and 79.37% at a training percentage of 40%; for the restaurant aspect dataset, 87.53%, 96.06%, and 79.88% at a training percentage of 60%; and for the Twitter dataset, 89.08%, 99.35%, and 79.70% at a training percentage of 80%. Overall, the assessment of the FRO-based hybrid deep learning yields accuracy, sensitivity, and specificity of 90.13%, 99.35%, and 81.10%.
Citations: 0
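The entry above reports accuracy, sensitivity, and specificity; these follow from the binary confusion matrix in the standard way, as this small self-contained sketch shows (the toy label lists are invented, and the paper's multi-dataset evaluation is of course richer).

```python
def sens_spec_acc(y_true, y_pred):
    # Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    # accuracy = (TP+TN)/total, for binary 0/1 labels.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    return accuracy, sensitivity, specificity

# Toy polarity predictions: 1 = positive aspect sentiment, 0 = negative.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
acc, sens, spec = sens_spec_acc(y_true, y_pred)
print(acc, sens, spec)  # 0.8 0.8 0.8
```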
Combined Tri-Classifiers for IoT Botnet Detection with Tuned Training Weights
IF 1.6
International Journal of Image and Graphics Pub Date : 2023-07-22 DOI: 10.1142/s021946782550007x
Abhilash Kayyidavazhiyil
Abstract: Although IoT deployments are increasingly popular and pervasive, they struggle with security hazards. The botnet is one of the largest security dangers associated with IoT: it lets malicious software collectively administer and attack private network equipment without the owners' knowledge. Many studies have used machine learning to detect botnets, but these are either not very effective or work only with specific types of botnets or devices. This research therefore focuses on a deep-learning detection model with three key processes: (a) pre-processing, (b) feature extraction, and (c) classification. The input data are first pre-processed with an improved data normalization approach. From the pre-processed data a number of features are extracted, including Tanimoto coefficient features, improved differential holoentropy-based features, and Pearson r correlation-based features. Detection is completed by an ensemble classification model combining a Deep Belief Network (DBN), a Bidirectional Gated Recurrent Unit (Bi-GRU), and Long Short-Term Memory (LSTM); the Bi-GRU, DBN, and LSTM outputs are averaged to produce the ensemble result. The Bi-GRU is trained with the Self Improved Blue Monkey Optimization (SIBMO) algorithm, which selects the optimal weights and thereby increases detection accuracy. The overall performance is evaluated against other existing models using various methodologies: compared with existing methods, the created ensemble classifier [Formula: see text] SIBMO scheme obtains the highest accuracy (93%) at a learning percentage of 90%.
Citations: 0
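The averaging of the three base classifiers described above reduces to a simple mean over per-class probabilities followed by an argmax. A minimal sketch, with toy two-class outputs standing in for the DBN / Bi-GRU / LSTM predictions (the real models and their training are not reproduced here):

```python
import numpy as np

def ensemble_average(prob_a, prob_b, prob_c):
    # Average the per-class probabilities of three base models and take
    # the argmax as the ensemble label.
    avg = (np.asarray(prob_a) + np.asarray(prob_b) + np.asarray(prob_c)) / 3.0
    return avg, avg.argmax(axis=1)

# Two samples, two classes (benign / botnet); toy model outputs.
dbn   = [[0.6, 0.4], [0.3, 0.7]]
bigru = [[0.7, 0.3], [0.2, 0.8]]
lstm  = [[0.4, 0.6], [0.4, 0.6]]
avg, labels = ensemble_average(dbn, bigru, lstm)
print(labels.tolist())  # [0, 1]
```

Note that even though the LSTM alone would label the first sample as botnet, the averaged probabilities overrule it.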
An Overview of Speech Enhancement Based on Deep Learning Techniques
IF 1.6
International Journal of Image and Graphics Pub Date : 2023-07-22 DOI: 10.1142/s0219467825500019
Chaitanya Jannu, S. Vanambathina
Abstract: Recent years have seen a significant amount of research on speech enhancement. This review surveys several speech enhancement methods and the role of Deep Neural Networks (DNNs) in speech enhancement. Speech transmissions are frequently distorted by ambient noise, background noise, and reverberation. Processing methods such as the short-time Fourier transform, short-time autocorrelation, and short-time energy (STE) can be used to enhance speech. To reduce speech noise, features such as Mel-frequency cepstral coefficients (MFCCs), the logarithmic power spectrum (LPS), and gammatone frequency cepstral coefficients (GFCCs) can be extracted and fed to a DNN. DNNs are central to speech enhancement because they build models from large amounts of training data, with the quality of the enhanced speech evaluated by specific performance metrics. This review provides a thorough examination of the neural network topologies, training algorithms, activation functions, training targets, acoustic features, and databases employed for speech enhancement, gathered from articles published between 1993, when deep learning publications on the topic began, and 2022.
Citations: 0
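Before the DNN era surveyed above, the classic short-time Fourier approach to noise reduction was magnitude spectral subtraction. A minimal sketch, assuming access to a noise-only segment for the noise estimate (frame size, signal, and noise level are all invented for the demo):

```python
import numpy as np

def spectral_subtraction(noisy, noise_est, frame=256):
    # Classic magnitude spectral subtraction: subtract an estimated noise
    # magnitude spectrum from each frame, keep the noisy phase, floor at 0.
    out = np.zeros_like(noisy)
    noise_mag = np.abs(np.fft.rfft(noise_est[:frame]))
    for start in range(0, len(noisy) - frame + 1, frame):
        spec = np.fft.rfft(noisy[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out

# A sine tone buried in white noise (tone aligned to a frame bin).
rng = np.random.default_rng(2)
t = np.arange(2048)
clean = np.sin(2 * np.pi * (16 / 256) * t)
noise = rng.normal(0, 0.5, t.size)
enhanced = spectral_subtraction(clean + noise, noise)
err_before = np.mean(noise ** 2)
err_after = np.mean((enhanced - clean) ** 2)
print(err_after < err_before)
```

DNN-based methods replace this fixed subtraction rule with a learned mapping from noisy features (MFCC, LPS, GFCC) to clean targets, which is exactly the shift the review traces.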
Image Processing-Based Method of Evaluation of Stress from Grain Structures of Through Silicon Via (TSV)
IF 1.6
International Journal of Image and Graphics Pub Date : 2023-07-22 DOI: 10.1142/s0219467825500081
Mamvinder Sharma, Sudhakara Reddy Saripalli, A. Gupta, Pankaj Palta, D. Pandey
Abstract: Visualizing material composition across numerous grains and the complicated networks of grain boundaries with image processing techniques can reveal fresh insights into a material's structural evolution and its upcoming functional capabilities for a variety of applications. Three-dimensional integrated circuits (3D ICs) are the most practical technology for increasing transistor density in future semiconductor applications. One of the key benefits of 3D ICs is heterogeneous integration, which yields shorter interconnections through vertical stacking. However, one of the most significant challenges in building higher-density microelectronic devices is the stress generated by mismatches in the coefficient of thermal expansion (CTE) between materials. This study analyzes grain boundary migration caused by variations in strain energy density, using image processing methods for 3D grain continuum modeling. Temperature changes in polycrystalline structures generate stresses and strain energy densities, which can be calculated with FEM software. Each grain is treated as a single crystal, with the anisotropic elastic properties of single-crystal Cu rotated to match the grain's orientation in space. Grain boundary speeds are calculated with a simple model that relates grain boundary mobility to the difference in strain energy density on the two sides of a boundary. With the grain continuum model, researchers can investigate the effect of thermally generated stresses on grain boundary motion caused by atomic flux driven by strain energy. Using finite-element modeling of the grain structure in a Through Silicon Via (TSV), the stress effect on grain boundaries caused by grain rotation due to CTE mismatch was investigated. The structure is modeled from a scanning electron microscope (SEM) Electron Backscatter Diffraction (EBSD) image. Grain growth and the subsequent grain boundary rotation can then be extrapolated with the appropriate method to measure their influence on stress and, as a result, on the TSV's overall reliability.
Citations: 0
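The "simple model" mentioned above, relating boundary speed to the strain-energy-density difference across a boundary, can be written in one line: the boundary migrates toward the grain with higher stored energy at a rate proportional to its mobility. The numbers below are illustrative placeholders, not values from the paper.

```python
def boundary_velocity(mobility, e_left, e_right):
    # Driving-force model: velocity = mobility * (strain energy density
    # difference across the boundary). A positive sign means the boundary
    # moves toward the right-hand (higher-energy) grain.
    return mobility * (e_right - e_left)

# Toy numbers: mobility in m^4/(J*s), energy densities in J/m^3,
# giving a velocity in m/s. Values are illustrative only.
v = boundary_velocity(mobility=1e-15, e_left=2.0e6, e_right=3.5e6)
print(v)
```

In the paper, the two energy densities come from the FEM solution on either side of each boundary in the EBSD-derived grain map; here they are handed in directly.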
Novel Enrichment of Brightness-Distorted Chest X-Ray Images Using Fusion-Based Contrast-Limited Adaptive Fuzzy Gamma Algorithm
IF 1.6
International Journal of Image and Graphics Pub Date : 2023-07-21 DOI: 10.1142/s021946782450058x
K. Kiruthika, Rashmita Khilar
Abstract: Among image-handling technologies, image enrichment (IE) can expose more useful information, and image compression can reduce memory requirements. IE plays a vital role in the medical field, where noise-free images are essential, and applies to all areas of image understanding and analysis. This paper presents a novel algorithm, contrast-limited adaptive fuzzy gamma (CLAFG), for enriching chest X-ray (CXR) images. The image contrast is enriched by computing several histograms and membership planes. The algorithm proceeds in stages. First, the CXR is separated into contextual regions (CRs). Second, the clip limit, a threshold that alters the contrast of the CXR, is applied to the histogram generated from each CR, and the fuzzification technique is applied to the CXR via the membership plane. Third, the clipped histograms are processed in two ways: merged using bi-cubic interpolation, and modified with the membership function. Finally, the outputs of the bi-cubic interpolation and the membership function are combined using standard enhancement methods to produce a richer CXR image.
Citations: 0
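The clip-limit step described above comes from CLAHE-style equalization: histogram counts above a limit are cut off and redistributed before the cumulative mapping is built, which caps noise amplification. A minimal sketch for one contextual region, without the fuzzy gamma and bi-cubic fusion parts of CLAFG (region size, clip value, and redistribution scheme are assumptions):

```python
import numpy as np

def clipped_equalize(region, clip_limit=0.02):
    # Histogram equalization for one contextual region with a CLAHE-style
    # clip: counts above the limit are cut and spread evenly over all bins.
    hist, _ = np.histogram(region, bins=256, range=(0, 256))
    limit = max(1, int(clip_limit * region.size))
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // 256
    cdf = hist.cumsum()
    lut = np.round(255 * (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1))
    return lut.astype(np.uint8)[region]

# A low-contrast region: grey levels huddle in [100, 120].
rng = np.random.default_rng(3)
region = rng.integers(100, 121, size=(32, 32)).astype(np.uint8)
enhanced = clipped_equalize(region)
print(int(enhanced.max()) - int(enhanced.min()))  # much wider than 20
```

Full CLAHE (and CLAFG) applies this per region and blends the per-region mappings, here via bi-cubic interpolation, to avoid tile seams.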
Robust Convolutional Neural Network based on UNet for Iris Segmentation
IF 1.6
International Journal of Image and Graphics Pub Date : 2023-07-21 DOI: 10.1142/s0219467824500426
A. Khaki
Abstract: The iris recognition system is nowadays one of the most widely used and most accurate biometric systems, and iris segmentation is its most crucial stage: accurate segmentation improves the efficiency of recognition. The main objective of iris segmentation is to obtain the iris area. Recently, iris segmentation methods based on convolutional neural networks (CNNs) have grown in number and have greatly improved accuracy. Nevertheless, their accuracy drops on low-quality images captured in uncontrolled conditions, which existing methods cannot segment precisely. To overcome this challenge, this paper proposes a robust convolutional neural network (R-Net), inspired by UNet, for iris segmentation. R-Net is divided into an encoder and a decoder. Several layers are added to ResNet-34, which is used in the encoder path; in the decoder path, four convolutions are applied at each level. Both choices yield suitable feature maps and increase the network's accuracy. The network has been tested on four datasets: UBIRIS v2 (UBIRIS), CASIA iris v4.0 (CASIA) distance, CASIA interval, and IIT Delhi v1.0 (IITD); UBIRIS is a dataset of low-quality images. The error rate (NICE1) of the proposed network is 0.0055 on UBIRIS, 0.0105 on CASIA interval, 0.0043 on CASIA distance, and 0.0154 on IITD. The results show better performance than other methods.
Citations: 0
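The NICE1 error rate quoted above is simply the fraction of pixels where the predicted iris mask disagrees with the ground truth, averaged over the test images. A minimal single-image sketch with an invented 10x10 mask pair:

```python
import numpy as np

def nice1_error(pred_mask, true_mask):
    # NICE1 segmentation error: fraction of pixels where the predicted
    # iris mask disagrees with the ground-truth mask.
    return float(np.mean(pred_mask != true_mask))

true_mask = np.zeros((10, 10), dtype=bool)
true_mask[3:7, 3:7] = True          # 16-pixel "iris"
pred_mask = np.zeros((10, 10), dtype=bool)
pred_mask[3:7, 3:8] = True          # over-segments one extra column
print(nice1_error(pred_mask, true_mask))  # 0.04
```

So the reported 0.0055 on UBIRIS means roughly one mislabeled pixel in 180, averaged across the dataset.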
Yoga Posture Recognition by Learning Spatial-Temporal Feature with Deep Learning Techniques
IF 1.6
International Journal of Image and Graphics Pub Date : 2023-07-21 DOI: 10.1142/s0219467824500554
J. Palanimeera, K. Ponmozhi
Abstract: Despite recent promising advances in deep learning, yoga posture recognition remains difficult because of crowded backgrounds, varied settings, occlusions, viewpoint alterations, and camera motion. This paper presents a method for accurately detecting various yoga poses using deep learning (DL) algorithms. With a standard RGB camera, six yoga poses (Sukhasana, Kakasana, Naukasana, Dhanurasana, Tadasana, and Vrikshasana) were captured from ten people, five men and five women. The study presents a new DL model for representing the spatio-temporal (ST) variation of skeleton-based yoga poses in videos. A set of representation learners mines video-level temporal information, combining spatio-temporal sampling with long-range temporal learning to produce a successful and efficient training approach. A novel feature extraction method using OpenPose is described, together with a DenseBi-directional LSTM network that represents spatial-temporal links in both the forward and backward directions, increasing the efficacy and consistency of long-range action modeling. To improve temporal pattern modeling, the LSTMs are stacked and combined with dense skip connections. Two modalities, appearance and motion, are fused with a fusion module, and the model is compared with other LSTM-based deep learning models: LSTM, Bi-LSTM, Res-LSTM, and Res-BiLSTM. Studies on real-time yoga pose datasets show that the proposed DenseBi-LSTM model outperforms state-of-the-art techniques for yoga pose detection.
Citations: 0