Query languages for neural networks
Martin Grohe, Christoph Standke, Juno Steegmans, Jan Van den Bussche
arXiv - CS - Logic in Computer Science, published 2024-08-19
DOI: arxiv-2408.10362
We lay the foundations for a database-inspired approach to interpreting and
understanding neural network models by querying them using declarative
languages. Towards this end we study different query languages, based on
first-order logic, that mainly differ in their access to the neural network
model. First-order logic over the reals naturally yields a language which views
the network as a black box; only the input--output function defined by the
network can be queried. This is essentially the approach of constraint query
languages. On the other hand, a white-box language can be obtained by viewing
the network as a weighted graph, and extending first-order logic with summation
over weight terms. The latter approach is essentially an abstraction of SQL. In
general, the two approaches are incomparable in expressive power, as we will
show. Under natural circumstances, however, the white-box approach can subsume
the black-box approach; this is our main result. We prove the result concretely
for linear constraint queries over real functions definable by feedforward
neural networks with a fixed number of hidden layers and piecewise linear
activation functions.
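To make the black-box/white-box contrast concrete, here is a small illustrative sketch, not taken from the paper: the paper works with first-order logics, not Python, and all names below (`forward`, `WEIGHTS`, the example network and queries) are our own. The black-box query only evaluates the input-output function, in the spirit of constraint query languages; the white-box query aggregates over the weights of the network viewed as a weighted graph, in the spirit of SQL-style summation.

```python
def relu(x):
    """Piecewise linear activation, as in the class of networks the paper studies."""
    return x if x > 0 else 0.0

# White-box view: the network as a weighted graph.
# Each layer is a list of neurons, each neuron a (weights, bias) pair.
WEIGHTS = [
    [([1.0, -1.0], 0.0), ([0.5, 0.5], -1.0)],  # one hidden layer
    [([1.0, 2.0], 0.0)],                        # output layer
]

def forward(inputs):
    """Black-box view: only this input-output function is observable."""
    values = list(inputs)
    for layer in WEIGHTS:
        values = [relu(sum(w * v for w, v in zip(ws, values)) + b)
                  for ws, b in layer]
    return values[0]

# Black-box (constraint-query style): does the network map the point
# (1, 0) to an output above a threshold?  Only forward() is used.
black_box_answer = forward([1.0, 0.0]) > 0.5

# White-box (SQL-style aggregation): sum of absolute weights, a quantity
# defined on the graph itself rather than on the input-output function.
white_box_answer = sum(abs(w)
                       for layer in WEIGHTS
                       for ws, _ in layer
                       for w in ws)
```

The black-box query could be posed to any function with the same input-output behavior, whereas the white-box query distinguishes between networks that compute the same function with different weights, which is one intuition behind the expressiveness gap the abstract mentions.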