Author: Tharun P
DOI: 10.24247/ijcseitrdec20212
Journal: International Journal of Computer Science Engineering and Information Technology Research
Publication type: Journal Article
Source platform: Semanticscholar (https://doi.org/10.24247/ijcseitrdec20212)
Semantic Search over Wikipedia Documents Meaning of Queries Based Pre-Trained Language Model
Pre-trained language models such as GPT-3, trained on massive text corpora and comprising more than 175 billion parameters, are powerful open-domain models. Semantic search aims to return results that match what a query means rather than merely its literal keywords. However, it remains challenging for such models to stay coherent over long passages of text and to reach high accuracy, particularly when the model must focus on a narrow target domain; as the pipeline progresses, modeling domain-specific content becomes increasingly complex. Semantic search over Wikipedia documents requires computing the semantic relatedness of natural-language text across multiple datasets, which in turn demands substantial common-sense and world knowledge about specific topics. We propose Semantic Search Analysis (SSA), a technique that represents text in a high-dimensional space of concepts derived from Wikipedia. Using a pre-training strategy, we build our model explicitly on Wikipedia content and adapt it to the target domain. Results show that our model outperforms the compared models.
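The abstract does not specify the retrieval mechanics, but the core idea of ranking documents by semantic relatedness to a query can be sketched minimally. The snippet below is a hypothetical, stdlib-only illustration (not the paper's SSA method or its pre-trained model): each document is a toy stand-in for a Wikipedia snippet, texts are turned into bag-of-words vectors, and results are ranked by cosine similarity. A real system would replace the vectorizer with embeddings from a pre-trained language model.

```python
import math
from collections import Counter

# Toy corpus standing in for Wikipedia article snippets
# (hypothetical data, not from the paper's experiments).
DOCS = {
    "Python (programming language)": "python is a programming language for software and scripting",
    "Python (snake)": "the python is a large snake found in africa and asia",
    "Semantic search": "semantic search improves accuracy by understanding query meaning",
}

def vectorize(text):
    """Bag-of-words term-frequency vector (placeholder for model embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    """Rank document titles by relatedness of their text to the query."""
    qv = vectorize(query)
    ranked = sorted(docs.items(),
                    key=lambda kv: cosine(qv, vectorize(kv[1])),
                    reverse=True)
    return [title for title, _ in ranked]

# A query about snakes ranks the reptile article first, even though
# every document contains the ambiguous token "python" or relates to search.
print(search("snake species", DOCS)[0])  # → Python (snake)
```

The design point the abstract gestures at is the vector space itself: with plain term frequencies, "snake species" matches only lexical overlap, whereas concept vectors derived from Wikipedia (as SSA proposes) would also relate a query to documents that share meaning but no surface words.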