Provably-Robust Runtime Monitoring of Neuron Activation Patterns
arXiv - CS - Logic in Computer Science. Pub Date: 2020-11-24, DOI: arxiv-2011.11959
Chih-Hong Cheng

For deep neural networks (DNNs) to be used in safety-critical autonomous driving tasks, it is desirable to monitor, at operation time, whether the input to the DNN is similar to the data used in DNN training. While recent results on monitoring DNN activation patterns provide a sound guarantee by building an abstraction out of the training data set, reducing false positives caused by slight input perturbations has been an obstacle to successfully adopting these techniques. We address this challenge by integrating formal symbolic reasoning into the monitor construction process. The algorithm performs a sound worst-case estimate of neuron values under input (or feature) perturbation before the abstraction function is applied to build the monitor. The provable robustness further generalizes to cases where monitoring a single neuron can use more than one bit, meaning that activation patterns can be recorded with a fine-grained decision on the neuron value interval.
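
A minimal sketch of the idea, assuming an L-infinity perturbation bound and plain interval arithmetic as the symbolic-reasoning step (the paper's actual bound computation and abstraction may differ; all function names here are illustrative): propagate sound per-neuron bounds under the perturbation, then enlarge each training input's activation pattern to every pattern it could exhibit, so the resulting monitor provably tolerates the perturbation.

import numpy as np
from itertools import product

def interval_affine(lo, hi, W, b):
    """Sound interval bounds of W @ x + b over the box [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def neuron_bounds(x, eps, layers):
    """Worst-case monitored-layer bounds for all x' with ||x' - x||_inf <= eps."""
    lo, hi = x - eps, x + eps
    for W, b in layers:
        lo, hi = interval_affine(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    return lo, hi

def reachable_patterns(lo, hi, cut=0.0):
    """All on/off activation patterns an eps-perturbed input could exhibit."""
    choices = []
    for l, h in zip(lo, hi):
        if l > cut:
            choices.append((1,))    # provably active
        elif h <= cut:
            choices.append((0,))    # provably inactive
        else:
            choices.append((0, 1))  # perturbation may flip this bit
    # A real monitor would store these compactly (prior activation-pattern
    # monitoring work uses BDDs); the explicit set is only for illustration.
    return set(product(*choices))

# Monitor construction: union the reachable patterns of every training input.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), rng.standard_normal(4))]
monitor = set()
for x in rng.standard_normal((100, 3)):
    monitor |= reachable_patterns(*neuron_bounds(x, eps=0.05, layers=layers))

# At runtime, an input whose pattern falls outside the set raises an alarm.
x_new = rng.standard_normal(3)
vals, _ = neuron_bounds(x_new, eps=0.0, layers=layers)  # eps=0: exact values
pattern = tuple(int(v > 0.0) for v in vals)
print("out-of-pattern alarm:", pattern not in monitor)

The multi-bit generalization mentioned in the abstract would replace the single cut per neuron with several thresholds, so each neuron contributes an interval index rather than one bit; the interval-bound computation stays the same.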

Updated: 2020-11-25