Dafni Rose, K. Vijayakumar, D. Kirubakaran, R. Pugalenthi, Gotti Balayaswantasaichowdary
{"title":"Neural Machine Translation Using Attention","authors":"Dafni Rose, K. Vijayakumar, D. Kirubakaran, R. Pugalenthi, Gotti Balayaswantasaichowdary","doi":"10.1109/ICECONF57129.2023.10083569","DOIUrl":null,"url":null,"abstract":"Machine Translation pertains to translation of one natural language to other by using automated computing. The most common method for dealing with the machine translation problem is Statistical machine translation. This method is convenient for language pairs with similar grammatical structures even so it taken vast datasets.N evertheless, the conventional models do not perform well for languages without similar grammar and contextual meaning. Lately this problemhas been resolved by the neural machine translation (NMT) that has proved to be an effective curative. Only a little amount of data is required for training in NMT and it can translate only a small number of training words. A fixed-length vector is used to identify the important words that contribute for the translation of text, and assigns weights to each word in our proposed system. The Encoder-Decoder architecture with Long- Term and Short- Term Memory (LSTM) Neural Network and trained modelsare employed by calling the previous sequences and states. The proposed model ameliorates translation performance with attention vector and by returning the sequences of previous states unlike LSTM.English-Hindi sentences corpus data for implementing a Model with attention and without attention is considered here. By evaluating the results, the proposed solution, overcomes complexity of training a Neural Network and increases translation performance.","PeriodicalId":436733,"journal":{"name":"2023 International Conference on Artificial Intelligence and Knowledge Discovery in Concurrent Engineering (ICECONF)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Artificial Intelligence and Knowledge Discovery in Concurrent Engineering (ICECONF)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICECONF57129.2023.10083569","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Machine translation refers to the translation of one natural language into another by automated computing. The most common approach to the machine translation problem is statistical machine translation. This method works well for language pairs with similar grammatical structures, even though it requires vast datasets. Nevertheless, conventional models do not perform well for language pairs that lack similar grammar and contextual meaning. Recently this problem has been addressed by neural machine translation (NMT), which has proved to be an effective remedy. NMT requires only a small amount of training data, but it can translate only a limited vocabulary of training words. In our proposed system, a fixed-length vector is used to identify the important words that contribute to the translation of the text, and a weight is assigned to each word. An encoder-decoder architecture with Long Short-Term Memory (LSTM) neural networks is employed, and the trained models make use of the previous sequences and states. Unlike a plain LSTM, the proposed model improves translation performance with an attention vector and by returning the sequences of previous states. An English-Hindi sentence corpus is used to implement models with and without attention. Evaluation of the results shows that the proposed solution overcomes the complexity of training a neural network and improves translation performance.
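The abstract does not specify the attention scoring function, so the sketch below is only a minimal illustration of the general idea it describes: each encoder hidden state is scored against the current decoder state, a softmax turns the scores into per-word weights, and the weighted sum forms the context vector passed to the decoder. It assumes additive (Bahdanau-style) attention with hypothetical parameters W1, W2, and v; these names and the toy dimensions are not from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def additive_attention(encoder_states, decoder_state, W1, W2, v):
    # encoder_states: (T, H) hidden state for each source word
    # decoder_state:  (H,)   current decoder hidden state
    # score_t = v^T tanh(W1 h_t + W2 s)   (assumed additive scoring)
    scores = np.tanh(encoder_states @ W1 + decoder_state @ W2) @ v  # (T,)
    weights = softmax(scores)            # attention weight per source word
    context = weights @ encoder_states   # (H,) weighted sum = context vector
    return context, weights

# Toy example: 5 source words, hidden size 8 (hypothetical values)
rng = np.random.default_rng(0)
T, H = 5, 8
enc = rng.normal(size=(T, H))
dec = rng.normal(size=(H,))
W1 = rng.normal(size=(H, H)) * 0.1
W2 = rng.normal(size=(H, H)) * 0.1
v  = rng.normal(size=(H,)) * 0.1

ctx, w = additive_attention(enc, dec, W1, W2, v)
print("attention weights:", np.round(w, 3))  # sums to 1 over the source words
```

In an encoder-decoder model of the kind the abstract describes, this computation would be repeated at every decoding step, so the weights indicate which source words the model attends to when producing each target word.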