Neural Network Heuristic Functions: Taking Confidence into Account

Daniel Heller, Patrick Ferber, Julian Bitterwolf, Matthias Hein, Jörg Hoffmann
{"title":"Neural Network Heuristic Functions: Taking Confidence into Account","authors":"Daniel Heller, Patrick Ferber, Julian Bitterwolf, Matthias Hein, Jörg Hoffmann","doi":"10.1609/socs.v15i1.21771","DOIUrl":null,"url":null,"abstract":"Neural networks (NN) are increasingly investigated in AI\nPlanning, and are used successfully to learn heuristic functions.\nNNs commonly not only predict a value, but also output\na confidence in this prediction. From the perspective of\nheuristic search with NN heuristics, it is a natural idea to\ntake this into account, e.g. falling back to a standard heuristic\nwhere confidence is low. We contribute an empirical study\nof this idea. We design search methods which prune nodes,\nor switch between search queues, based on the confidence\nof NNs. We furthermore explore the possibility of \nout-of-distribution (OOD) training, which tries to reduce the\noverconfidence of NNs on inputs different to the training distribution.\nIn experiments on IPC benchmarks, we find that our\nsearch methods improve coverage over standard methods, and\nthat OOD training has the desired effect in terms of prediction\naccuracy and confidence, though its impact on search seems\nmarginal.","PeriodicalId":425645,"journal":{"name":"Symposium on Combinatorial Search","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Symposium on Combinatorial Search","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/socs.v15i1.21771","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Neural networks (NNs) are increasingly investigated in AI Planning, and are used successfully to learn heuristic functions. NNs commonly not only predict a value, but also output a confidence in that prediction. From the perspective of heuristic search with NN heuristics, it is natural to take this into account, e.g., falling back to a standard heuristic where confidence is low. We contribute an empirical study of this idea. We design search methods which prune nodes, or switch between search queues, based on the confidence of NNs. We furthermore explore the possibility of out-of-distribution (OOD) training, which tries to reduce the overconfidence of NNs on inputs different from the training distribution. In experiments on IPC benchmarks, we find that our search methods improve coverage over standard methods, and that OOD training has the desired effect in terms of prediction accuracy and confidence, though its impact on search seems marginal.
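The fallback idea the abstract mentions can be sketched as a confidence-gated heuristic: query the NN, and defer to a standard heuristic whenever the reported confidence is below a threshold. This is a minimal illustration, not the paper's implementation; the function names, the threshold value, and the stub heuristics are all assumptions.

```python
def confidence_gated_h(state, nn_heuristic, fallback_heuristic, tau=0.8):
    """Return the NN's heuristic estimate when its confidence is at least
    tau; otherwise fall back to a standard heuristic (e.g. a delete
    relaxation heuristic). All names here are illustrative."""
    value, confidence = nn_heuristic(state)  # NN outputs (estimate, confidence)
    if confidence >= tau:
        return value
    return fallback_heuristic(state)

# Toy usage with stub heuristics over string "states":
nn_h = lambda s: (len(s), 0.5)   # low-confidence NN estimate
std_h = lambda s: 2 * len(s)     # deterministic fallback heuristic
print(confidence_gated_h("abc", nn_h, std_h))  # confidence 0.5 < 0.8 -> fallback -> 6
```

The same gate generalizes to the paper's queue-switching variant: instead of substituting the heuristic value, a search could place low-confidence nodes into a second open list ordered by the standard heuristic.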