Polynomial Neural Networks for improved AI transparency: An analysis of their inherent explainability (operational rationale) capabilities

Donovan Chaffart, Yue Yuan

Digital Chemical Engineering, Volume 15, Article 100230 (published 2025-03-10)
DOI: 10.1016/j.dche.2025.100230
Journal metrics: Impact Factor 3.0, CiteScore 3.10, Q2 (Engineering, Chemical)
Citations: 0

Abstract

The demand for reliable Artificial Intelligence (AI) models within critical domains such as Chemical Engineering has drawn significant attention to the use and development of transparent AI methodologies. Nevertheless, the field of AI transparency has received uneven attention, such that crucial aspects like explainability (i.e., the transparency of the AI's operational rationales) remain understudied. To address this challenge, this study investigates the inherent explainability capabilities of Polynomial Neural Networks (PNNs) for applications within Chemical Engineering. PNNs, which implement higher-order polynomials in lieu of linear expressions within their hidden-layer neurons, are inherently nonlinear and thus do not require an activation function to accurately capture the behavior of a system. Accordingly, these neural networks provide continuous, closed-form algebraic expressions that can be used to ascertain the contributions of individual features to the network's operational behavior. To study this behavior, the PNN method was applied in this work to capture the relationships in noiseless and noisy data generated from simple mathematical expressions. The PNN polynomials were then extracted and examined to highlight the insights they provide regarding the system's operational rationales. The PNN method was further applied to capture the behavior of a circulating fluidized bed reactor to fully showcase the explainative capability of this method within a Chemical Engineering application. These studies highlight the intrinsic explainability of PNNs and demonstrate their potential for reliable AI implementations in Chemical Engineering applications.
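
To make the polynomial-neuron idea concrete, the sketch below illustrates a single second-order polynomial neuron in Python/NumPy. The degree, term ordering, and function name are illustrative assumptions introduced here for exposition; they are not taken from the paper, which should be consulted for the authors' exact formulation.

```python
import itertools
import numpy as np

def polynomial_neuron(x, weights, degree=2):
    """Evaluate one polynomial neuron of the given degree.

    The neuron output is an explicit closed-form polynomial in the inputs,
        y = w_0 + sum_i w_i * x_i + sum_{i<=j} w_ij * x_i * x_j + ...
    rather than a linear combination passed through an activation function.
    """
    terms = [1.0]  # bias (constant) term
    for d in range(1, degree + 1):
        # all monomials of order d, e.g. x0*x0, x0*x1, x1*x1 for d = 2
        for combo in itertools.combinations_with_replacement(range(len(x)), d):
            terms.append(float(np.prod([x[i] for i in combo])))
    return float(np.dot(weights, terms))

# Two input features; 6 weights = 1 bias + 2 linear + 3 quadratic terms.
x = np.array([0.5, -1.2])
weights = np.array([0.1, 0.8, -0.3, 0.05, 0.4, 0.2])
print(polynomial_neuron(x, weights))
```

Because each neuron is an explicit polynomial, its fitted coefficients and monomial terms can be read off directly and ranked by magnitude, which is the property the abstract describes as inherent explainability of the network's operational rationale.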