Automatic Music Generation System based on RNN Architecture
Sandeep Kumar, Keerthi Gudiseva, Aalla Iswarya, S. Rani, K. Prasad, Yogesh Kumar Sharma
2022 2nd International Conference on Technological Advancements in Computational Sciences (ICTACS), 10 October 2022
DOI: 10.1109/ICTACS56270.2022.9988652
Musicians and artists can build on what the system generates and add their own original work. Music composition is an exciting topic that showcases a composer's creativity. As technology has advanced rapidly, musical forms have become more varied and spread faster, yet the cost of producing music remains very high. Given sufficient data and the right algorithm, deep learning should be capable of producing music that sounds as if it were made by a person. The purpose of this research is to build a machine-learning system that can automatically compose songs. The system is built from a set of piano MIDI recordings in the MAESTRO dataset, which are used to construct song segments. Fully connected and convolutional layers exploit the rich features of the frequency domain to improve the quality of the generated music.
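The abstract does not spell out the exact network, so the following is only a minimal sketch of how an RNN-based next-note generator over piano MIDI pitches might look in PyTorch. The vocabulary, layer sizes, the LSTM with a fully connected output head, and the dummy training batch are illustrative assumptions, not the configuration reported in the paper.

```python
# Illustrative sketch (assumed setup): predict the next MIDI pitch from a
# window of previous pitches, then sample autoregressively to grow a segment.
import torch
import torch.nn as nn

VOCAB = 128        # MIDI pitch range (assumption: pitch-only events)
SEQ_LEN = 64       # length of the conditioning window
EMBED, HIDDEN = 96, 256

class NoteRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.lstm = nn.LSTM(EMBED, HIDDEN, num_layers=2, batch_first=True)
        self.fc = nn.Linear(HIDDEN, VOCAB)   # fully connected output head

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.fc(h), state             # logits for the next note at each step

model = NoteRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch standing in for note sequences extracted
# from MAESTRO MIDI files (real data would come from a MIDI parser).
batch = torch.randint(0, VOCAB, (8, SEQ_LEN + 1))
inputs, targets = batch[:, :-1], batch[:, 1:]
logits, _ = model(inputs)
loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
opt.zero_grad()
loss.backward()
opt.step()

# Autoregressive sampling: feed each predicted note back in to build a segment.
notes, state = [], None
x = torch.randint(0, VOCAB, (1, 1))          # seed note
with torch.no_grad():
    for _ in range(100):
        logits, state = model(x, state)
        probs = torch.softmax(logits[:, -1], dim=-1)
        x = torch.multinomial(probs, 1)
        notes.append(int(x))
```

In practice the sampled pitch list would be written back to a MIDI file for listening; the frequency-domain processing with convolutional layers mentioned in the abstract is a separate stage not shown in this sketch.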