Author: Dr. C. Suganthi
DOI: 10.37896/pd91.4/91413
Journal: Periodico Di Mineralogia (Q3, Geochemistry & Geophysics; Impact Factor 1.2)
Published: 2022-04-14 (Journal Article)
Deep Fusion CNN Based Hybridized Strategy for Image Retrieval in Web: A Novel Data Fusion Technique
There is a constant need for efficient and effective Content-Based Image Retrieval (CBIR) due to the steady growth in the number and size of large-scale image repositories. Over the last few years, the use of multimedia data has increased tremendously in both scientific and commercial domains. As a result, it is necessary to organize, store, analyze, and present the available data in ways that meet user needs. CBIR uses visual components to find a picture in a large image collection based on the user's interest, querying attributes directly from the image content. The term 'content' refers to an image's low-level properties such as color, shape, or texture. The need for CBIR arises because most image retrieval algorithms rely solely on textual information, which produces many irrelevant results. Furthermore, searching for photos in a large database using keywords can be costly and inefficient, and keywords often fail to convey the user's intent in describing the picture. To address this, the proposed research introduces "JustClick": a data fusion approach based on a Deep Fusion Convolutional Neural Network (DFCNN) for enhanced feature extraction. Building on the notion of intent research, this approach hybridizes linguistic and visual similarities to capture the user's purpose. Only one click on a query picture is required for the images returned by a text-based search to be re-ranked according to their linguistic and visual similarity to the query image. The proposed system's performance is demonstrated through comparisons with text-based and content-based systems. The proposed JustClick system provides effective automatic retrieval of comparable photos with improved feature extraction, yielding encouraging results with an average retrieval effectiveness of 97.7%.
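The one-click re-ranking step described in the abstract can be sketched as a late-fusion score: each candidate image returned by the text search is scored by a weighted combination of its visual similarity and its textual similarity to the clicked query image, then sorted. This is a minimal illustration only; the function names, the feature vectors, and the fixed weight `alpha` are assumptions, and the paper's actual DFCNN feature extractor is not reproduced here.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rerank(query_visual, query_text, candidates, alpha=0.5):
    """Re-rank text-search results by fused visual + textual similarity
    to the clicked query image.

    candidates: list of (image_id, visual_vec, text_vec) tuples,
    where the vectors stand in for features a deep network would extract.
    Returns (image_id, score) pairs, best match first.
    """
    scored = []
    for img_id, v_vec, t_vec in candidates:
        # Late fusion: weighted sum of the two similarity channels.
        score = (alpha * cosine_sim(query_visual, v_vec)
                 + (1 - alpha) * cosine_sim(query_text, t_vec))
        scored.append((img_id, score))
    return sorted(scored, key=lambda p: p[1], reverse=True)
```

A candidate whose visual and textual features both align with the query rises to the top, while text-only matches with dissimilar visual content drop down, which is the behaviour the abstract attributes to the single-click re-ranking.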
Journal introduction:
Periodico di Mineralogia is an international peer-reviewed Open Access journal publishing Research Articles, Letters and Reviews in Mineralogy, Crystallography, Geochemistry, Ore Deposits, Petrology, Volcanology and applied topics on Environment, Archaeometry and Cultural Heritage. The journal aims to encourage scientists to publish their experimental and theoretical results in as much detail as possible. Accordingly, there is no restriction on article length. Additional data may be hosted on the journal's website as Supplementary Information. The journal has no article submission or processing charges. Colour is free of charge both online and in print, and no Open Access fees are charged. Short publication time is assured.
Periodico di Mineralogia is the property of Sapienza Università di Roma and is published, both online and in print, three times a year.