Graph and tensor-train recurrent neural networks for high-dimensional models of limit order books

Jacobo Roa-Vicens, Y. Xu, Ricardo Silva, D. Mandic
Proceedings of the Third ACM International Conference on AI in Finance, 26 October 2022
DOI: 10.1145/3533271.3561710
Recurrent neural networks (RNNs) have proven particularly effective for learning and modelling time series. However, high-dimensional sequential data are considerably more difficult and computationally expensive to model, as the number of parameters required to train an RNN grows exponentially with data dimensionality. This is also the case with time series from limit order books, the electronic registries where prices of securities are formed in public markets. To this end, tensorization of neural networks provides an efficient method to reduce the number of model parameters, and has been applied successfully to high-dimensional series such as video sequences and financial time series, for example, using tensor-train RNNs (TTRNNs). However, such TTRNNs suffer from a number of shortcomings, including: (i) model sensitivity to the ordering of core tensor contractions; (ii) training sensitivity to weight initialization; and (iii) exploding or vanishing gradient problems due to the recurrent propagation through the tensor-train topology. Recent studies have shown that embedding a multi-linear graph filter to model RNN states (Recurrent Graph Tensor Network, RGTN) provides enhanced flexibility and expressive power to tensor networks, while mitigating the shortcomings of TTRNNs. In this paper, we demonstrate the advantages of using graph filters to model high-dimensional limit order book sequences, compared with state-of-the-art benchmarks. It is shown that combining the graph module (to mitigate problematic gradients) with the radial structure (to make the tensor network architecture flexible) results in substantial improvements in output variance, training time and the number of parameters required, without any sacrifice in accuracy.
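To make the parameter-count argument concrete: a minimal sketch, not taken from the paper, of why tensor-train factorisation tames the growth the abstract describes. A dense weight matrix between two layers of size d costs d^2 parameters, whereas representing it as a tensor-train matrix (with each mode reshaped into small factors) costs only the sum of the core sizes. The mode sizes and TT-ranks below are hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical example: a 4096 x 4096 weight matrix, with each side
# reshaped as 8 x 8 x 8 x 8, represented as a 4-core tensor-train matrix.
# Core k has shape (r_{k-1}, in_modes[k], out_modes[k], r_k).
in_modes = out_modes = [8, 8, 8, 8]   # 8**4 = 4096 units on each side
ranks = [1, 4, 4, 4, 1]               # TT-ranks; boundary ranks are always 1

dense_params = int(np.prod(in_modes)) * int(np.prod(out_modes))
tt_params = sum(
    ranks[k] * in_modes[k] * out_modes[k] * ranks[k + 1]
    for k in range(len(in_modes))
)

print(f"dense matrix: {dense_params:,} parameters")   # 16,777,216
print(f"tensor train: {tt_params:,} parameters")      # 2,560
```

With these (assumed) ranks the compression is over three orders of magnitude, which is the kind of saving that makes high-dimensional order-book inputs tractable for an RNN.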
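The abstract does not spell out the graph module, so the following is only a generic sketch of a multi-linear (polynomial) graph filter of the kind RGTN-style models apply to hidden states: a weighted sum of powers of a graph shift operator acting on a node signal. The path-graph topology over price levels, the row normalisation and the filter taps are all assumptions made for the example.

```python
import numpy as np

def graph_filter(X, A, taps):
    """Polynomial graph filter: Y = sum_k taps[k] * (A^k @ X)."""
    Y = np.zeros_like(X)
    AkX = X.copy()              # A^0 @ X
    for w in taps:
        Y += w * AkX
        AkX = A @ AkX           # advance to the next power of A
    return Y

# Toy graph: 10 adjacent price levels connected in a chain.
n = 10
A = np.eye(n, k=1) + np.eye(n, k=-1)   # path-graph adjacency
A /= A.sum(axis=1, keepdims=True)      # row-normalised graph shift operator
X = np.random.randn(n, 2)              # 2 features per level, e.g. bid/ask volume
Y = graph_filter(X, A, taps=[0.5, 0.3, 0.2])
```

Because the filter is a fixed polynomial in A rather than a recurrent weight matrix, information mixes across price levels without adding another recurrent path, which is consistent with the abstract's claim that the graph module helps mitigate exploding and vanishing gradients.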