Deep reinforcement learning for building honeypots against runtime DoS attack
International Journal of Intelligent Systems (IF 7) Pub Date: 2021-10-01, DOI: 10.1002/int.22708
Selvakumar Veluchamy, Ruba Soundar Kathavarayan

A honeypot is a network environment used to protect legitimate network resources from attacks. It creates an environment that lures attackers into directing their operations at it in an attempt to steal resources. Denial of Service (DoS) attacks are detected efficiently using the proposed honeypot method. The problem left by previous techniques is that a DoS attack is a malicious act aimed at disrupting access to a computer network: it can cause the computers on the network to squander their resources serving illegitimate requests, which disrupts the network's services to legitimate users. To overcome these challenges, this method is proposed. In this manuscript, Deep Adaptive Reinforcement Learning for Honeypots (DARLH) is proposed. Within the honeypot environment, the proposed DARLH system implements Deep Adaptive Reinforcement Learning (DARL) with Intrusion Detection System (IDS) agents and a Deep Recurrent Neural Network (DRNN) with an IDS agent to observe multi-runtime DoS attacks. At the next level, the system builds DRNN and DARL IDS agent integration modules for effective runtime attack detection. The Knowledge Data Discovery (KDD) data set, the UNSW-NB20 data set, and the Bot-IoT data set are used for the DoS attack scenario. The method is implemented in Python 3.7. The experimental outcomes are compared with different existing methods, namely Game and Naïve-Bayes Honeypot, Blockchain Honeypot, and Recurrent Neural Network-based Signature Generation and Detection. The proposed method is evaluated against the existing methods under external DoS, internal DoS, brute-force, DoS, web, and botnet attacks. From the comparison, the proposed method delivers 5%–10% better results than the existing methods. Finally, the test results show that the proposed method performs more efficiently than the existing systems.
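To make the agent architecture described above concrete, below is a minimal, hypothetical Python sketch of the two kinds of components the abstract mentions: a DRNN-style recurrent classifier that scores windows of flow records as DoS or benign, and a simple reinforcement-learning agent that adapts the honeypot's response. The layer sizes, features, reward scheme, and the use of Keras/TensorFlow are illustrative assumptions, not the authors' DARLH implementation.

# Hypothetical sketch of a DRNN-based IDS agent paired with a toy adaptive
# honeypot agent; all hyperparameters and the reward design are assumptions.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_drnn_agent(timesteps, n_features):
    """Recurrent classifier that scores a window of flow records as DoS/benign."""
    model = Sequential([
        LSTM(64, input_shape=(timesteps, n_features)),
        Dense(32, activation="relu"),
        Dense(1, activation="sigmoid"),   # P(attack) for the observed window
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

class HoneypotRLAgent:
    """Toy adaptive agent: epsilon-greedy choice between 'engage' and 'block',
    rewarded when an engaged session is likely an attack."""
    def __init__(self, actions=("engage", "block"), epsilon=0.1, alpha=0.5):
        self.q = {a: 0.0 for a in actions}
        self.epsilon, self.alpha = epsilon, alpha

    def act(self):
        if np.random.rand() < self.epsilon:
            return np.random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])

if __name__ == "__main__":
    # Synthetic stand-in for preprocessed KDD / UNSW-NB / Bot-IoT flow windows.
    X = np.random.rand(256, 10, 8)               # 256 windows, 10 steps, 8 features
    y = np.random.randint(0, 2, size=(256, 1))   # 1 = DoS window, 0 = benign
    drnn = build_drnn_agent(timesteps=10, n_features=8)
    drnn.fit(X, y, epochs=2, batch_size=32, verbose=0)

    agent = HoneypotRLAgent()
    for p_attack in drnn.predict(X[:20], verbose=0).ravel():
        action = agent.act()
        reward = p_attack if action == "engage" else 1.0 - p_attack
        agent.update(action, float(reward))
    print("learned action values:", agent.q)

In actual use, the synthetic arrays would be replaced by labeled flow windows extracted from the KDD, UNSW-NB20, and Bot-IoT data sets, and the reward would reflect whether the honeypot's runtime decision was correct.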

Updated: 2021-10-01