{"title":"Search Engine for Assorted Media in Chat Applications","authors":"Aditya Pandey, Ishita Jaiswal, S. Pandey","doi":"10.1109/AIC55036.2022.9848901","DOIUrl":null,"url":null,"abstract":"The mobile industry has come across many revolutionizing advancements in its technologies over the past three decades, making mobile phones an integral part of everyone’s daily lives. With the exponential advent of this technology to handle work on chat applications for prolonged hours, there has been a great increase in the interconnectivity of different sections of society, both economically and demographically. Existing chat applications provide in-built search engines that are competent in handling text searches but cannot search for different types of media, both visual and audible, which may be present in the chat. This paper proposes a novel approach that allows chat applications to use an inbuilt media search engine that performs searches for all the disparate media that the chat holds, using keywords. The machine learning model detects the objects from the media files and maps those objects’ keywords to the list of images. These keywords may be any of the objects that can be detected in those media files. Say, a user searches for the keyword ‘Table’ in the search engine, and he gets all the images having tables. This feature saves time for the user as no manual work is required to search for any media exchanged in the chat by scrolling and searching in case of many media files. This idea blooms out from within the feedback that the real-world audience has provided when asked for their expectations from a “perfect” chat application. The entire study associated with this paper conforms with the problem statement and guarantees the user a more comfortable and helpful experience while using the proposed feature. 
The proposed method uses TensorFlow-Lite and Google Machine Learning (ML) Kit’s Image Labelling APIs to detect the keywords that together characterize the media present in the chat. This method is found to be performing accurately for all types of media (especially photos) when manually tested with real-world data.","PeriodicalId":433590,"journal":{"name":"2022 IEEE World Conference on Applied Intelligence and Computing (AIC)","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE World Conference on Applied Intelligence and Computing (AIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIC55036.2022.9848901","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The mobile industry has seen many revolutionary advancements over the past three decades, making mobile phones an integral part of daily life. As people spend prolonged hours working through chat applications, interconnectivity between different sections of society, both economic and demographic, has grown considerably. Existing chat applications provide built-in search engines that handle text searches well but cannot search the different types of media, both visual and audible, that a chat may contain. This paper proposes a novel approach that equips chat applications with a built-in media search engine that uses keywords to search all the disparate media a chat holds. A machine learning model detects the objects in each media file and maps those objects’ keywords to the list of images; the keywords may be any object detectable in those files. For example, if a user searches for the keyword ‘Table’, the engine returns all images containing tables. This feature saves the user time, since no manual scrolling and searching is required to locate media exchanged in the chat, even when many media files are present. The idea grew out of feedback from a real-world audience asked about their expectations of a “perfect” chat application. The study associated with this paper conforms to the problem statement and provides the user a more comfortable and helpful experience with the proposed feature. The proposed method uses TensorFlow-Lite and the Image Labelling APIs of Google’s Machine Learning (ML) Kit to detect the keywords that together characterize the media present in the chat. The method was found to perform accurately for all types of media (especially photos) when manually tested with real-world data.
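The search described above can be sketched as an inverted index from detected object labels to media files. The following minimal Python sketch illustrates the idea; `detect_labels` is a hypothetical stub standing in for the on-device labeler (ML Kit's Image Labelling API backed by TensorFlow-Lite in the paper), and the file names and labels are invented for illustration.

```python
from collections import defaultdict


def detect_labels(media_file):
    """Stub labeler. In the paper's system, ML Kit / TensorFlow-Lite
    would return object labels detected in the media file."""
    fake_labels = {  # hypothetical sample data
        "IMG_001.jpg": ["Table", "Chair", "Room"],
        "IMG_002.jpg": ["Dog", "Grass"],
        "IMG_003.jpg": ["Table", "Laptop"],
    }
    return fake_labels.get(media_file, [])


def build_index(media_files):
    """Map each detected keyword (lowercased) to the media files containing it."""
    index = defaultdict(list)
    for f in media_files:
        for label in detect_labels(f):
            index[label.lower()].append(f)
    return index


def search(index, keyword):
    """Return every media file whose detected objects match the keyword."""
    return index.get(keyword.lower(), [])


index = build_index(["IMG_001.jpg", "IMG_002.jpg", "IMG_003.jpg"])
print(search(index, "Table"))  # both images in which a table was detected
```

In an actual chat application the index would be built incrementally as media arrives, so a keyword query never has to re-scan or re-label old files.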