A Roundup of Deep Neural Network Interpretability Methods, with TensorFlow Code Implementations
Understanding neural networks: deep learning has long been considered weakly interpretable, yet research on understanding neural networks has never stopped. This article introduces several neural network interpretability methods, each accompanied by a link to code that can be run in Jupyter.

1. Activation Maximization

There are two methods that explain a deep neural network via activation maximization:

1.1 Activation Maximization (AM)

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/1.1%20Activation%20Maximization.ipynb

1.2 Performing AM in Code Space

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/1.3%20Performing%20AM%20in%20Code%20Space.ipynb

2. Layer-wise Relevance Propagation

There are five interpretability methods in the layer-wise relevance propagation family: Sensitivity Analysis, Simple Taylor Decomposition, Layer-wise Relevance Propagation, Deep Taylor Decomposition, and DeepLIFT. The overall approach is to first introduce the notion of a relevance score via sensitivity analysis, then use Simple Taylor Decomposition to explore a basic relevance decomposition, and from there build up the various layer-wise relevance propagation methods. The details are as follows:

2.1 Sensitivity Analysis

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.1%20Sensitivity%20Analysis.ipynb

2.2 Simple Taylor Decomposition

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.2%20Simple%20Taylor%20Decomposition.ipynb

2.3 Layer-wise Relevance Propagation

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.3%20Layer-wise%20Relevance%20Propagation%20%281%29.ipynb
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.3%20Layer-wise%20Relevance%20Propagation%20%282%29.ipynb

2.4 Deep Taylor Decomposition

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.4%20Deep%20Taylor%20Decomposition%20%281%29.ipynb
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.4%20Deep%20Taylor%20Decomposition%20%282%29.ipynb

2.5 DeepLIFT

Code:
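To give a feel for activation maximization (1.1), here is a minimal NumPy sketch, not the linked notebooks' TensorFlow code: the idea is to run gradient ascent on the input so that it maximizes a chosen unit's activation, with an L2 penalty keeping the input bounded. The "unit" here is an illustrative linear one with made-up weights; a real network would supply the gradient via autodiff.

```python
import numpy as np

# Activation Maximization sketch: find an input x* that maximizes a
# unit's activation f(x) = w @ x, minus an L2 penalty lam * ||x||^2.
# The weights below are illustrative, not from any real model.

rng = np.random.default_rng(0)
w = rng.normal(size=8)       # weights of the unit being visualized
lam = 0.1                    # L2 regularization strength
x = np.zeros_like(w)         # start from a neutral input

for _ in range(200):
    grad = w - 2 * lam * x   # d/dx [w @ x - lam * ||x||^2]
    x += 0.5 * grad          # gradient ascent step

# For this objective the regularized optimum is x* = w / (2 * lam)
print(np.allclose(x, w / (2 * lam), atol=1e-6))  # prints True
```

For an image classifier, the same loop produces a "prototype" image for a class logit; the 1.2 variant instead optimizes in the latent (code) space of a generative model so the prototype stays on the data manifold.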
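Sensitivity analysis (2.1) scores each input dimension by the squared partial derivative of the output, R_i = (∂f/∂x_i)². A minimal NumPy sketch, with an illustrative two-layer ReLU network and manual backpropagation standing in for the notebook's TensorFlow graph:

```python
import numpy as np

# Sensitivity Analysis sketch: relevance of input i is the squared
# gradient R_i = (df/dx_i)^2, computed by manual backprop through a
# tiny two-layer ReLU network with illustrative random weights.

rng = np.random.default_rng(1)
W = rng.normal(size=(6, 4))   # first-layer weights
v = rng.normal(size=6)        # second-layer weights
x = rng.normal(size=4)        # input to explain

z = W @ x                     # pre-activations
a = np.maximum(z, 0)          # ReLU
f = v @ a                     # scalar network output

# Backprop through the ReLU: df/dx = W^T (v * 1[z > 0])
grad = W.T @ (v * (z > 0))
R = grad ** 2                 # sensitivity score per input dimension

print(R.shape)                # prints (4,)
```

For images, plotting R as a heatmap gives the familiar saliency map; note it highlights where f is locally *sensitive*, not necessarily what made f large.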
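Simple Taylor decomposition (2.2) expands f around a root point x̃ with f(x̃) = 0, assigning relevance R_i = (∂f/∂x_i)·(x_i − x̃_i). A NumPy sketch under one simplifying assumption: for a bias-free ReLU network, f is positively homogeneous, so x̃ = 0 is a valid root and the relevances sum exactly to f(x). The weights are illustrative.

```python
import numpy as np

# Simple Taylor Decomposition sketch: R_i = (df/dx_i) * (x_i - x~_i)
# around a root point x~ where f(x~) = 0. With no biases, a ReLU net
# satisfies f(0) = 0, so we take x~ = 0. Weights are illustrative.

rng = np.random.default_rng(3)
W = rng.normal(size=(6, 4))   # first-layer weights (no biases)
v = rng.normal(size=6)        # second-layer weights
x = rng.normal(size=4)        # input to explain

z = W @ x
f = v @ np.maximum(z, 0)      # scalar network output

grad = W.T @ (v * (z > 0))    # df/dx via manual backprop
R = grad * x                  # first-order Taylor term at x~ = 0

print(np.isclose(R.sum(), f))  # conservation: sum_i R_i == f(x), True
```

With biases, or for other activation functions, choosing a good root point x̃ becomes the hard part, which is one motivation for the layer-wise propagation rules that follow.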
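Layer-wise relevance propagation (2.3) redistributes the output score backwards layer by layer, giving each unit a share proportional to its contribution. A minimal NumPy sketch of one common variant, the epsilon-stabilized rule, on an illustrative two-layer ReLU network (the linked notebooks implement this in TensorFlow):

```python
import numpy as np

# LRP sketch (epsilon rule): relevance flowing into neuron j is split
# among its inputs i in proportion to the contributions a_i * W_ji,
# with a small epsilon stabilizing the denominator.

def lrp_eps(a, W, R_out, eps=1e-6):
    """Propagate relevance R_out one layer back through z = W @ a."""
    z = W @ a
    s = R_out / (z + eps * np.sign(z))   # stabilized per-neuron shares
    return a * (W.T @ s)                 # R_in[i] = a_i * sum_j W_ji s_j

rng = np.random.default_rng(2)
W = rng.normal(size=(6, 4))              # illustrative weights
v = rng.normal(size=(1, 6))
x = rng.normal(size=4)

h = np.maximum(W @ x, 0)                 # hidden ReLU activations
f = (v @ h)[0]                           # scalar output

R_hidden = lrp_eps(h, v, np.array([f]))  # relevance of hidden units
R_input = lrp_eps(x, W, R_hidden)        # relevance of each input

# The epsilon rule approximately conserves relevance across layers
print(abs(R_input.sum() - f) < 1e-4)     # prints True
```

Deep Taylor decomposition (2.4) can be read as a justification of such rules, deriving them from layer-local Taylor expansions; DeepLIFT (2.5) similarly propagates contribution scores, but measures each unit's activation against a reference input rather than against zero.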