Review

Reinforcement Learning Applications in Cyber Security: A Review

Year 2023, Volume: 27 Issue: 2, 481 - 503, 30.04.2023
https://doi.org/10.16984/saufenbilder.1237742

Abstract

The internet has become an essential part of modern daily life: a significant portion of our personal data is stored online, and organizations run their business online. Moreover, as the internet has developed, many devices in our homes and workplaces, such as autonomous systems, investment portfolio tools, and entertainment devices, have become or are becoming intelligent. In parallel with this development, cyberattacks aimed at damaging smart systems are increasing day by day. As attack methods become more sophisticated, the damage done by attackers is growing rapidly, and traditional computer algorithms may be insufficient against these attacks; artificial intelligence-based methods are therefore needed. Reinforcement Learning (RL), a machine learning method, is increasingly applied in the field of cyber security. Although RL for cyber security is a relatively new topic in the literature, studies are already carried out to predict, prevent, and stop attacks. In this study, we review the literature on RL applications in cyber security, covering penetration testing, intrusion detection systems (IDS), and cyberattacks.
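To make the RL setting concrete, the sketch below shows tabular Q-learning, the algorithm used by several of the penetration-testing studies surveyed here (e.g., the capture-the-flag formulation of Zennaro and Erdodi), on a deliberately tiny, invented environment: an agent must pick the one working "exploit" action per host to pivot along a chain of hosts to a flag. The environment, its constants, and all rewards are illustrative assumptions, not taken from any of the cited papers.

```python
import random

# Toy episodic MDP, loosely inspired by capture-the-flag penetration
# testing: states are hosts 0..3 in a chain, and exactly one of three
# candidate exploit actions succeeds on each host. All names, sizes,
# and reward values here are illustrative assumptions.
N_HOSTS, N_ACTIONS, GOAL = 4, 3, 3
CORRECT = [1, 0, 2]  # hidden working exploit for each non-goal host

def step(state, action):
    """Return (next_state, reward, done) for the toy environment."""
    if action == CORRECT[state]:
        nxt = state + 1
        if nxt == GOAL:
            return nxt, 1.0, True    # flag captured
        return nxt, 0.0, False       # pivoted to the next host
    return state, -0.1, False        # failed exploit: small penalty

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Learn a Q-table with epsilon-greedy tabular Q-learning."""
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_HOSTS)]
    for _ in range(episodes):
        s, done, steps = 0, False, 0
        while not done and steps < 50:
            a = (rng.randrange(N_ACTIONS) if rng.random() < eps
                 else max(range(N_ACTIONS), key=lambda i: q[s][i]))
            s2, r, done = step(s, a)
            # standard Q-learning update rule
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
    return q

q = train()
# greedy policy: the exploit the agent would choose on each host
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(GOAL)]
print(policy)
```

After a few hundred episodes the greedy policy recovers the hidden exploit sequence; the surveyed papers scale this same idea up with larger state spaces, partial observability (POMDPs), or deep function approximation in place of the table.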

References

  • [1] B. von Solms, R. von Solms, “Cybersecurity and information security – what goes where?,” Information & Computer Security, vol. 26, no. 1, pp. 2–9, 2018.
  • [2] Z. Guan, J. Li, L. Wu, Y. Zhang, J. Wu, X. Du, “Achieving efficient and secure data acquisition for cloud-supported internet of things in smart grid,” IEEE Internet Things Journal, vol. 4, no. 6, pp. 1934–1944, 2017.
  • [3] J.-H. Li, “Cyber security meets artificial intelligence: a survey,” Frontiers of Information Technology & Electronic Engineering, vol. 19, no. 12, pp. 1462–1474, 2018.
  • [4] T. T. Nguyen, V. J. Reddi, “Deep reinforcement learning for cyber security,” arXiv [cs.CR], 2019.
  • [5] N. D. Nguyen, T. T. Nguyen, H. Nguyen, D. Creighton, S. Nahavandi, “Review, analysis and design of a comprehensive deep reinforcement learning framework,” arXiv [cs.LG], 2020.
  • [6] N. D. Nguyen, T. Nguyen, S. Nahavandi, “System design perspective for human-level agents using deep reinforcement learning: A survey,” IEEE Access, vol. 5, pp. 27091–27102, 2017.
  • [7] M. Riedmiller, T. Gabel, R. Hafner, S. Lange, “Reinforcement learning for robot soccer,” Autonomous Robots, vol. 27, no. 1, pp. 55–73, 2009.
  • [8] K. Mülling, J. Kober, O. Kroemer, J. Peters, “Learning to select and generalize striking movements in robot table tennis,” The International Journal of Robotics Research, vol. 32, no. 3, pp. 263–279, 2013.
  • [9] T. G. Thuruthel, E. Falotico, F. Renda, C. Laschi, “Model-based reinforcement learning for closed-loop dynamic control of soft robotic manipulators,” IEEE Transactions on Robotics, vol. 35, no. 1, pp. 124–134, 2019.
  • [10] I. Arel, C. Liu, T. Urbanik, A. G. Kohls, “Reinforcement learning-based multi-agent system for network traffic signal control,” IET Intelligent Transport Systems, vol. 4, no. 2, p. 128, 2010.
  • [11] J. Jin, C. Song, H. Li, K. Gai, J. Wang, W. Zhang, “Real-time bidding with multi-agent reinforcement learning in display advertising,” in Proceedings of the 27th ACM International Conference on Information and Knowledge Management, 2018.
  • [12] M. E. Taylor, N. Carboni, A. Fachantidis, I. Vlahavas, L. Torrey, “Reinforcement learning agents providing advice in complex video games,” Connection Science, vol. 26, no. 1, pp. 45–63, 2014.
  • [13] C. Amato, G. Shani, “High-level reinforcement learning in strategy games”, In AAMAS Vol. 10, pp. 75-82, 2010.
  • [14] M. Jaderberg, W. M. Czarnecki, I. Dunning, L. Marris, G. Lever, A. G. Castaneda, C. Beattie, N. C. Rabinowitz, A. S. Morcos, A. Ruderman, N. Sonnerat, T. Green, L. Deason, J. Z. Leibo, D. Silver, D. Hassabis, K. Kavukcuoglu, T. Graepel, “Human-level performance in 3D multiplayer games with population-based reinforcement learning,” Science, vol. 364, no. 6443, pp. 859–865, 2019.
  • [15] T. Liu, B. Huang, Z. Deng, H. Wang, X. Tang, X. Wang, D. Cao, “Heuristics‐oriented overtaking decision making for autonomous vehicles using reinforcement learning,” IET Electrical Systems in Transportation, vol. 10, no. 4, pp. 417–424, 2020.
  • [16] W. Gao, A. Odekunle, Y. Chen, Z.-P. Jiang, “Predictive cruise control of connected and autonomous vehicles via reinforcement learning,” IET Control Theory Applications, vol. 13, no. 17, pp. 2849–2855, 2019.
  • [17] F. Richter, R. K. Orosco, M. C. Yip, “Open-sourced reinforcement learning environments for surgical robotics,” arXiv [cs.RO], 2019.
  • [18] C. Shin, P. W. Ferguson, S. A. Pedram, J. Ma, E. P. Dutson, J. Rosen, “Autonomous tissue manipulation via surgical robot using learning based model predictive control,” in 2019 International Conference on Robotics and Automation (ICRA), 2019.
  • [19] H. Snyder, “Literature review as a research methodology: An overview and guidelines”. Journal of business research, 104, 333-339, 2019.
  • [20] P. Davies, “The relevance of systematic reviews to educational policy and practice”, Oxford review of education, 26(3-4), 365-378, 2000.
  • [21] F. M. Zennaro, L. Erdodi, “Modeling penetration testing with reinforcement learning using capture-the-flag challenges and tabular Q-learning”, arXiv preprint arXiv:2005.12632, 2020.
  • [22] R. S. Sutton, A. G. Barto, “Reinforcement Learning: An Introduction”, 2nd ed. Cambridge, MA: Bradford Books, 2018.
  • [23] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, D. Hassabis, “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
  • [25] V. François-Lavet, P. Henderson, R. Islam, M. G. Bellemare, J. Pineau, “An introduction to deep reinforcement learning,” Foundations and Trends® in Machine Learning, vol. 11, no. 3–4, pp. 219–354, 2018.
  • [26] A. Uprety, D. B. Rawat, “Reinforcement learning for IoT security: A comprehensive survey,” IEEE Internet of Things Journal, vol. 8, no. 11, pp. 8693–8706, 2021.
  • [27] S. P. K. Spielberg, R. B. Gopaluni, P. D. Loewen, “Deep reinforcement learning approaches for process control,” in 2017 6th International Symposium on Advanced Control of Industrial Processes (AdCONIP), 2017.
  • [28] H. Mao, M. Alizadeh, I. Menache, S. Kandula, “Resource management with deep reinforcement learning,” in Proceedings of the 15th ACM Workshop on Hot Topics in Networks, 2016.
  • [29] M. Vecerik, T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. Heess, R. Thomas, T. Rothörl, T. Lampe, M. Riedmiller, “Leveraging demonstrations for deep Reinforcement Learning on robotics problems with sparse rewards,” arXiv [cs.AI], 2017.
  • [30] S. Gu, E. Holly, T. Lillicrap, S. Levine, “Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017.
  • [31] M. C. Ghanem, T. M. Chen, “Reinforcement learning for intelligent penetration testing,” in 2018 Second World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), 2018.
  • [32] C. Sarraute, O. Buffet, J. Hoffmann, “POMDPs make better hackers: Accounting for uncertainty in penetration testing,” in Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, vol. 26, no. 1, pp. 1816–1824, 2012.
  • [33] C. Sarraute, O. Buffet, J. Hoffmann, “Penetration Testing == POMDP Solving?,” arXiv [cs.AI], 2013.
  • [34] J. Hoffmann, “Simulated penetration testing: From ‘Dijkstra’ to ‘Turing Test++,’” Proceedings of the International Conference on Automated Planning and Scheduling, vol. 25, pp. 364–372, 2015.
  • [35] J. Schwartz, H. Kurniawati, “Autonomous Penetration Testing using Reinforcement Learning,” arXiv [cs.CR], 2019.
  • [36] M. C. Ghanem, T. M. Chen, “Reinforcement learning for efficient network penetration testing,” Information (Basel), vol. 11, no. 1, p. 6, 2019.
  • [37] A. Chowdhary, D. Huang, J. S. Mahendran, D. Romo, Y. Deng, A. Sabur, “Autonomous security analysis and penetration testing,” in 2020 16th International Conference on Mobility, Sensing and Networking (MSN), 2020.
  • [38] H. Nguyen, S. Teerakanok, A. Inomata, T. Uehara, “The proposal of double agent architecture using actor-critic algorithm for penetration testing,” in Proceedings of the 7th International Conference on Information Systems Security and Privacy, 2021.
  • [39] C. Neal, H. Dagdougui, A. Lodi, J. M. Fernandez, “Reinforcement learning based penetration testing of a microgrid control algorithm,” in 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC), 2021.
  • [40] Y. Yang, X. Liu, “Behaviour-diverse automatic penetration testing: A curiosity-driven multi-objective deep Reinforcement Learning approach,” arXiv [cs.LG], 2022.
  • [41] X. Xu, T. Xie, “A reinforcement learning approach for host-based intrusion detection using sequences of system calls,” in Lecture Notes in Computer Science, Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 995–1003.
  • [42] S. Aljawarneh, M. Aldwairi, M. B. Yassein, “Anomaly-based intrusion detection system through feature selection analysis and building hybrid efficient model,” Journal of Computational Science, vol. 25, pp. 152–160, 2018.
  • [43] X. Xu, “Sequential anomaly detection based on temporal-difference learning: Principles, models and case studies,” Applied Soft Computing, vol. 10, no. 3, pp. 859–867, 2010.
  • [44] B. Deokar, A. Hazarnis, “Intrusion detection system using log files and reinforcement learning,” International Journal of Computer Applications, vol. 45, no. 19, pp. 28–35, 2012.
  • [45] S. Otoum, B. Kantarci, H. Mouftah, “Empowering reinforcement learning on big sensed data for intrusion detection,” in ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019.
  • [46] G. Caminero, M. Lopez-Martin, B. Carro, “Adversarial environment reinforcement learning algorithm for intrusion detection,” Computer Networks, vol. 159, pp. 96–109, 2019.
  • [47] K. Sethi, E. Sai Rupesh, R. Kumar, P. Bera, Y. Venu Madhav, “A context-aware robust intrusion detection system: a reinforcement learning-based approach,” International Journal of Information Security, vol. 19, no. 6, pp. 657–678, 2020.
  • [48] H. Alavizadeh, H. Alavizadeh, J. Jang-Jaccard, “Deep Q-learning based reinforcement learning approach for network intrusion detection,” Computers, vol. 11, no. 3, p. 41, 2022.
  • A. S. S. Alawsi, S. Kurnaz, “Quality of service system that is self-updating by intrusion detection systems using reinforcement learning,” Applied Nanoscience, 2022.
  • [49] X. Xu, Y. Sun, Z. Huang, “Defending DDoS attacks using hidden Markov models and cooperative reinforcement learning,” in Pacific-Asia Workshop on Intelligence and Security Informatics, Springer, Berlin, Heidelberg, pp. 196–207, 2007.
  • [50] K. Malialis, D. Kudenko, “Multiagent Router Throttling: Decentralized coordinated response against DDoS attacks,” In Twenty-Fifth IAAI Conference, vol. 27, no. 2, pp. 1551–1556, 2013.
  • [51] K. Malialis, D. Kudenko, “Distributed response to network intrusions using multiagent reinforcement learning,” Engineering Applications of Artificial Intelligence, vol. 41, pp. 270–284, 2015.
  • [52] S. Shamshirband, A. Patel, N. B. Anuar, M. L. M. Kiah, A. Abraham, “Cooperative game theoretic approach using fuzzy Q-learning for detecting and preventing intrusions in wireless sensor networks,” Engineering Applications of Artificial Intelligence, vol. 32, pp. 228–241, 2014.
  • [53] K. A. Simpson, S. Rogers, D. P. Pezaros, “Per-host DDoS mitigation by direct-control reinforcement learning,” IEEE Transactions on Network and Service Management, vol. 17, no. 1, pp. 103–117, 2020.
  • [54] Y. Feng, J. Li, T. Nguyen, “Application-layer DDoS defense with reinforcement learning,” in 2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS), 2020.
  • [55] K. Grover, A. Lim, Q. Yang, “Jamming and anti-jamming techniques in wireless networks: a survey,” International Journal of Ad Hoc and Ubiquitous Computing, vol. 17, no. 4, p. 197, 2014.
  • [56] Y. Wu, B. Wang, K. J. R. Liu, T. C. Clancy, “Anti-jamming games in multi-channel cognitive radio networks,” IEEE journal on selected areas in communications, vol. 30, no. 1, pp. 4–15, 2012.
  • [57] S. Singh, A. Trivedi, “Anti-jamming in cognitive radio networks using reinforcement learning algorithms,” in 2012 Ninth International Conference on Wireless and Optical Communications Networks (WOCN), 2012.
  • [58] Y. Gwon, S. Dastangoo, C. Fossa, H. T. Kung, “Competing Mobile Network Game: Embracing antijamming and jamming strategies with reinforcement learning,” in 2013 IEEE Conference on Communications and Network Security (CNS), 2013.
  • [59] K. Dabcevic, A. Betancourt, L. Marcenaro, C. S. Regazzoni, “A fictitious play-based game-theoretical approach to alleviating jamming attacks for cognitive radios,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014.
  • [60] F. Slimeni, B. Scheers, Z. Chtourou, V. Le Nir, “Jamming mitigation in cognitive radio networks using a modified Q-learning algorithm,” in 2015 International Conference on Military Communications and Information Systems (ICMCIS), 2015.
  • [61] F. Slimeni, B. Scheers, Z. Chtourou, V. Le Nir, R. Attia, “Cognitive radio jamming mitigation using Markov decision process and reinforcement learning,” Procedia Computer Science, vol. 73, pp. 199–208, 2015.
  • [62] F. Slimeni, B. Scheers, Z. Chtourou, V. L. Nir, R. Attia, “A modified Q-learning algorithm to solve cognitive radio jamming attack,” International Journal of Embedded Systems, vol. 10, no. 1, p. 41, 2018.
  • [63] B. Wang, Y. Wu, K. J. R. Liu, T. C. Clancy, “An anti-jamming stochastic game for cognitive radio networks,” IEEE journal on selected areas in communications, vol. 29, no. 4, pp. 877–889, 2011.
  • [64] B. F. Lo, I. F. Akyildiz, “Multiagent jamming-resilient control channel game for cognitive radio ad hoc networks,” in 2012 IEEE International Conference on Communications (ICC), 2012.
  • [65] L. Xiao, Y. Li, J. Liu, Y. Zhao, “Power control with reinforcement learning in cooperative cognitive radio networks against jamming,” The Journal of Supercomputing, vol. 71, no. 9, pp. 3237–3257, 2015.
  • [66] G. Han, L. Xiao, H. V. Poor, “Two-dimensional anti-jamming communication based on deep reinforcement learning,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017.
  • [67] X. Liu, Y. Xu, L. Jia, Q. Wu, A. Anpalagan, “Anti-jamming communications using spectrum waterfall: A deep reinforcement learning approach,” IEEE Communications Letters, vol. 22, no. 5, pp. 998–1001, 2018.
  • [68] S. Machuzak, S. K. Jayaweera, “Reinforcement learning based anti-jamming with wideband autonomous cognitive radios,” in 2016 IEEE/CIC International Conference on Communications in China (ICCC), 2016.
  • [69] M. A. Aref, S. K. Jayaweera, S. Machuzak, “Multi-agent reinforcement learning based cognitive anti-jamming,” in 2017 IEEE Wireless Communications and Networking Conference (WCNC), 2017.
  • [70] F. Yao, L. Jia, “A collaborative multi-agent reinforcement learning anti-jamming algorithm in wireless networks,” IEEE wireless communications letters, vol. 8, no. 4, pp. 1024–1027, 2019.
  • [71] A. Pourranjbar, G. Kaddoum, A. Ferdowsi, W. Saad, “Reinforcement learning for deceiving reactive jammers in wireless networks,” IEEE Transactions on Communications, vol. 69, no. 6, pp. 3682–3697, 2021.
  • [72] H. Pirayesh, H. Zeng, “Jamming attacks and anti-jamming strategies in wireless networks: A comprehensive survey,” arXiv [cs.CR], 2021.
  • [73] X. Lu, D. Xu, L. Xiao, L. Wang, W. Zhuang, “Anti-jamming communication game for UAV-aided VANETs,” in GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017.
  • [74] L. Xiao, X. Lu, D. Xu, Y. Tang, L. Wang, W. Zhuang, “UAV relay in VANETs against smart jamming with reinforcement learning,” IEEE Transactions on Vehicular Technology, vol. 67, no. 5, pp. 4087–4097, 2018.
  • [75] J. Peng, Z. Zhang, Q. Wu, B. Zhang, “Anti-jamming communications in UAV swarms: A reinforcement learning approach,” IEEE Access, vol. 7, pp. 180532–180543, 2019.
  • [76] Z. Li, Y. Lu, X. Li, Z. Wang, W. Qiao, Y. Liu, “UAV networks against multiple maneuvering smart jamming with knowledge-based reinforcement learning,” IEEE Internet of Things Journal, vol. 8, no. 15, pp. 12289–12310, 2021.
  • [77] X. Lu, J. Jie, Z. Lin, L. Xiao, J. Li, Y. Zhang, “Reinforcement learning based energy efficient robot relay for unmanned aerial vehicles against smart jamming,” Science China Information Sciences, vol. 65, no. 1, 2022.
  • [78] L. Xiao, Y. Li, G. Liu, Q. Li, W. Zhuang, “Spoofing detection with reinforcement learning in wireless networks,” in 2015 IEEE Global Communications Conference (GLOBECOM), 2015.
  • [79] L. Xiao, Y. Li, G. Han, G. Liu, W. Zhuang, “PHY-layer spoofing detection with reinforcement learning in wireless networks,” IEEE Transactions on Vehicular Technology, vol. 65, no. 12, pp. 10037–10047, 2016.
  • [80] L. Xiao, C. Xie, T. Chen, H. Dai, H. V. Poor, “A mobile offloading game against smart attacks,” IEEE Access, vol. 4, pp. 2281–2291, 2016.
  • [81] J. Liu, L. Xiao, G. Liu, Y. Zhao, “Active authentication with reinforcement learning based on ambient radio signals,” Multimedia Tools and Applications, vol. 76, no. 3, pp. 3979–3998, 2017.
  • [82] S. Purkait, “Phishing counter measures and their effectiveness – literature review,” Information Management & Computer Security, vol. 20, no. 5, pp. 382–420, 2012.
  • [83] S. Smadi, N. Aslam, L. Zhang, “Detection of online phishing email using dynamic evolving neural network based on reinforcement learning,” Decision Support Systems, vol. 107, pp. 88–102, 2018.
  • [84] Y. Fang, C. Huang, Y. Xu, Y. Li, “RLXSS: Optimizing XSS detection model to defend against adversarial attacks based on reinforcement learning,” Future internet, vol. 11, no. 8, p. 177, 2019.
  • [85] I. Tariq, M. A. Sindhu, R. A. Abbasi, A. S. Khattak, O. Maqbool, G. F. Siddiqui, “Resolving cross-site scripting attacks through genetic algorithm and reinforcement learning,” Expert Systems with Applications, vol. 168, p. 114386, 2021.
  • [86] F. Caturano, G. Perrone, S. P. Romano, “Discovering reflected cross-site scripting vulnerabilities using a multiobjective reinforcement learning environment,” Computers & Security, vol. 103, p. 102204, 2021.
  • [87] L. Erdodi, Å. Å. Sommervoll, F. M. Zennaro, “Simulating SQL injection vulnerability exploitation using Q-learning reinforcement learning agents,” arXiv [cs.CR], 2021.
Year 2023, Volume: 27 Issue: 2, 481 - 503, 30.04.2023
https://doi.org/10.16984/saufenbilder.1237742

Abstract

References

  • [1] B. von Solms, R. von Solms, “Cybersecurity and information security – what goes where?,” Information & Computer Security, vol. 26, no. 1, pp. 2–9, 2018.
  • [2] Z. Guan, J. Li, L. Wu, Y. Zhang, J. Wu, X. Du, “Achieving efficient and secure data acquisition for cloud-supported internet of things in smart grid,” IEEE Internet Things Journal, vol. 4, no. 6, pp. 1934–1944, 2017.
  • [3] J.-H. Li, “Cyber security meets artificial intelligence: a survey,” Frontiers of Information Technology & Electronic Engineering, vol. 19, no. 12, pp. 1462–1474, 2018.
  • [4] T. T. Nguyen, V. J. Reddi, “Deep reinforcement learning for cyber security,” arXiv [cs.CR], 2019.
  • [5] N. D. Nguyen, T. T. Nguyen, H. Nguyen, D. Creighton, S. Nahavandi, “Review, analysis and design of a comprehensive deep reinforcement learning framework,” arXiv [cs.LG], 2020.
  • [6] N. D. Nguyen, T. Nguyen, S. Nahavandi, “System design perspective for human-level agents using deep reinforcement learning: A survey,” IEEE Access, vol. 5, pp. 27091–27102, 2017.
  • [7] M. Riedmiller, T. Gabel, R. Hafner, S. Lange, “Reinforcement learning for robot soccer,” Autonomous Robots, vol. 27, no. 1, pp. 55–73, 2009.
  • [8] K. Mülling, J. Kober, O. Kroemer, J. Peters, “Learning to select and generalize striking movements in robot table tennis,” The International Journal of Robotics Research, vol. 32, no. 3, pp. 263–279, 2013.
  • [9] T. G. Thuruthel, E. Falotico, F. Renda, C. Laschi, “Model-based reinforcement learning for closed-loop dynamic control of soft robotic manipulators,” IEEE Transactions on Robotics, vol. 35, no. 1, pp. 124–134, 2019.
  • [10] I.Arel, C. Liu, T. Urbanik, A. G. Kohls, “Reinforcement learning-based multi-agent system for network traffic signal control,” IET Intelligent Transport Systems, vol. 4, no. 2, p. 128, 2010.
  • [11] J. Jin, C. Song, H. Li, K. Gai, J. Wang, W. Zhang, “Real-time bidding with multi-agent reinforcement learning in display advertising,” in Proceedings of the 27th ACM International Conference on Information and Knowledge Management, 2018.
  • [12] M. E. Taylor, N. Carboni, A. Fachantidis, I. Vlahavas, L. Torrey, “Reinforcement learning agents providing advice in complex video games,” Connection Science, vol. 26, no. 1, pp. 45–63, 2014.
  • [13] C. Amato, G. Shani, “High-level reinforcement learning in strategy games”, In AAMAS Vol. 10, pp. 75-82, 2010.
  • [14] M. Jaderberg, W.M Czarnecki, I. Dunning, L. Marris, G. Lever, A.G. Castaneda, C. Beattıe, N. C. Rabinowıtz, A.S. Morcos, A. Ruderman, N. Sonnerat, T. Green, L. Deason, J. Z. Leibo, D. Silver, D.Hassabis, K. Kavukcuoglu, T. Graepel, “Human-level performance in 3D multiplayer games with population-based reinforcement learning,” Science, vol. 364, no. 6443, pp. 859–865, 2019.
  • [15] T. Liu, B. Huang, Z. Deng, H. Wang, X. Tang, X. Wang, D. Cao, “Heuristics‐oriented overtaking decision making for autonomous vehicles using reinforcement learning,” IET Electrical Systems in Transportation, vol. 10, no. 4, pp. 417–424, 2020.
  • [16] W. Gao, A. Odekunle, Y. Chen, Z.-P. Jiang, “Predictive cruise control of connected and autonomous vehicles via reinforcement learning,” IET Control Theory Applications, vol. 13, no. 17, pp. 2849–2855, 2019.
  • [17] F. Richter, R. K. Orosco, M. C. Yip, “Open-sourced reinforcement learning environments for surgical robotics,” arXiv [cs.RO], 2019.
  • [18] C. Shin, P. W. Ferguson, S. A. Pedram, J. Ma, E. P. Dutson, J. Rosen, “Autonomous tissue manipulation via surgical robot using learning based model predictive control,” in 2019 International Conference on Robotics and Automation (ICRA), 2019.
  • [19] H. Snyder, “Literature review as a research methodology: An overview and guidelines”. Journal of business research, 104, 333-339, 2019.
  • [20] P. Davies, “The relevance of systematic reviews to educational policy and practice”, Oxford review of education, 26(3-4), 365-378, 2000.
  • [21] F. M. Zennaro, L. Erdodi, “Modeling penetration testing with reinforcement learning using capture-the-flag challenges and tabular Q-learning”, arXiv preprint arXiv:2005.12632, 2020.
  • [22] R. S. Sutton, A. G. Barto, “Reinforcement Learning: An Introduction”, 2nd ed. Cambridge, MA: Bradford Books, 2018.
  • [23] V. Mnih, V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, [24] M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg D. Hassabis., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
  • [25] V. François-Lavet, P. Henderson, R. Islam, M. G. Bellemare, J. Pineau, “An introduction to deep reinforcement learning,” Foundations and Trends® in Machine Learning, vol. 11, no. 3–4, pp. 219–354, 2018.
  • [26] A.Uprety, D. B. Rawat, “Reinforcement learning for IoT security: A comprehensive survey,” IEEE Internet of Things Journal, vol. 8, no. 11, pp. 8693–8706, 2021.
  • [27] S. P. K. Spielberg, R. B. Gopaluni, P. D. Loewen, “Deep reinforcement learning approaches for process control,” in 2017 6th International Symposium on Advanced Control of Industrial Processes (AdCONIP), 2017.
  • [28] H. Mao, M. Alizadeh, I. Menache, S. Kandula, “Resource management with deep reinforcement learning,” in Proceedings of the 15th ACM Workshop on Hot Topics in Networks, 2016.
  • [29] M. Vecerik T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. Heess, R. Thomas,T. Rothörl, T. Lampe, M. Riedmiller, “Leveraging demonstrations for deep Reinforcement Learning on robotics problems with sparse rewards,” arXiv [cs.AI], 2017.
  • [30] S. Gu, E. Holly, T. Lillicrap, S. Levine, “Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017.
  • [31] M. C. Ghanem, T. M. Chen, “Reinforcement learning for intelligent penetration testing,” in 2018 Second World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), 2018.
  • [32] C. Sarraute, O. Buffet, J. Hoffmann, “POMDPs make better hackers: Accounting for uncertainty in penetration testing,” Twenty-Sixth AAAI Conference on Artificial Intelligence, vol. 26, no. 1, pp. 1816–1824, 2021.
  • [33] C. Sarraute, O. Buffet, J. Hoffmann, “Penetration Testing == POMDP Solving?,” arXiv [cs.AI], 2013.
  • [34] J. Hoffmann, “Simulated penetration testing: From ‘Dijkstra’ to ‘Turing Test++,’” Proceedings of the International Conference on Automated Planning and Scheduling, vol. 25, pp. 364–372, 2015.
  • [35] J. Schwartz, H. Kurniawati, “Autonomous Penetration Testing using Reinforcement Learning,” arXiv [cs.CR], 2019.
  • [36] M. C. Ghanem, T. M. Chen, “Reinforcement learning for efficient network penetration testing,” Information (Basel), vol. 11, no. 1, p. 6, 2019.
  • [37] A.Chowdhary, D. Huang, J. S. Mahendran, D. Romo, Y. Deng, A. Sabur, “Autonomous security analysis and penetration testing,” in 2020 16th International Conference on Mobility, Sensing and Networking (MSN), 2020.
  • [38] H. Nguyen, S. Teerakanok, A. Inomata, T. Uehara, “The proposal of double agent architecture using actor-critic algorithm for penetration testing,” in Proceedings of the 7th International Conference on Information Systems Security and Privacy, 2021.
  • [39] C. Neal, H. Dagdougui, A. Lodi, J. M. Fernandez, “Reinforcement learning based penetration testing of a microgrid control algorithm,” in 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC), 2021.
  • [40] Y. Yang, X. Liu, “Behaviour-diverse automatic penetration testing: A curiosity-driven multi-objective deep Reinforcement Learning approach,” arXiv [cs.LG], 2022.
  • [41] X. Xu, T. Xie, “A reinforcement learning approach for host-based intrusion detection using sequences of system calls,” in Lecture Notes in Computer Science, Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 995–1003.
  • [42] S. Aljawarneh, M. Aldwairi, M. B. Yassein, “Anomaly-based intrusion detection system through feature selection analysis and building hybrid efficient model,” Journal of Computational Science, vol. 25, pp. 152–160, 2018.
  • [43] X. Xu, “Sequential anomaly detection based on temporal-difference learning: Principles, models and case studies,” Applied Soft Computing, vol. 10, no. 3, pp. 859–867, 2010.
  • [44] B. Deokar, A. Hazarnis, “Intrusion detection system using log files and reinforcement learning”, International Journal of Computer Applications, vol. 45, no. 19, 28-35, 2012.
  • [45] S. Otoum, B. Kantarci, H. Mouftah, “Empowering reinforcement learning on big sensed data for intrusion detection,” in ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019.
  • [46] G. Caminero, M. Lopez-Martin, B. Carro, “Adversarial environment reinforcement learning algorithm for intrusion detection,” Computer Networks, vol. 159, pp. 96–109, 2019.
  • [47] K. Sethi, E. Sai Rupesh, R. Kumar, P. Bera, Y. Venu Madhav, “A context-aware robust intrusion detection system: a reinforcement learning-based approach,” International Journal of Information Security, vol. 19, no. 6, pp. 657–678, 2020.
  • [48] H. Alavizadeh, H. Alavizadeh, J. Jang-Jaccard, “Deep Q-learning based reinforcement learning approach for network intrusion detection,” Computers, vol. 11, no. 3, p. 41, 2022.
  • A. S. S. Alawsi, S. Kurnaz, “Quality of service system that is self-updating by intrusion detection systems using reinforcement learning,” Applied Nanoscience, 2022.
  • [49] X. Xu, Y. Sun, Huang, Z, “Defending DDoS attacks using hidden Markov models and cooperative reinforcement learning”, In Pacific-Asia Workshop on Intelligence and Security Informatics (pp. 196-207). Springer, Berlin, Heidelberg, 2007.
  • [50] K. Malialis, D. Kudenko, “Multiagent Router Throttling: Decentralized coordinated response against DDoS attacks,” In Twenty-Fifth IAAI Conference, vol. 27, no. 2, pp. 1551–1556, 2013.
  • [51] K. Malialis, D. Kudenko, “Distributed response to network intrusions using multiagent reinforcement learning,” Engineering Applications of Artificial Intelligence, vol. 41, pp. 270–284, 2015.
  • [52] S. Shamshirband, A. Patel, N. B. Anuar, M. L. M. Kiah, A. Abraham, “Cooperative game theoretic approach using fuzzy Q-learning for detecting and preventing intrusions in wireless sensor networks,” Engineering Applications of Artificial Intelligence, vol. 32, pp. 228–241, 2014.
  • [53] K. A. Simpson, S. Rogers, D. P. Pezaros, “Per-host DDoS mitigation by direct-control reinforcement learning,” IEEE Transactions on Network and Service Management, vol. 17, no. 1, pp. 103–117, 2020.
  • [54] Y. Feng, J. Li, T. Nguyen, “Application-layer DDoS defense with reinforcement learning,” in 2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS), 2020.
  • [55] K. Grover, A. Lim, Q. Yang, “Jamming and anti-jamming techniques in wireless networks: a survey,” International Journal of Ad Hoc and Ubiquitous Computing, vol. 17, no. 4, p. 197, 2014.
  • [56] Y. Wu, B. Wang, K. J. R. Liu, T. C. Clancy, “Anti-jamming games in multi-channel cognitive radio networks,” IEEE journal on selected areas in communications, vol. 30, no. 1, pp. 4–15, 2012.
  • [57] S. Singh, A. Trivedi, “Anti-jamming in cognitive radio networks using reinforcement learning algorithms,” in 2012 Ninth International Conference on Wireless and Optical Communications Networks (WOCN), 2012.
  • [58] Y. Gwon, S. Dastangoo, C. Fossa, H. T. Kung, “Competing Mobile Network Game: Embracing antijamming and jamming strategies with reinforcement learning,” in 2013 IEEE Conference on Communications and Network Security (CNS), 2013.
  • [59] K. Dabcevic, A. Betancourt, L. Marcenaro, C. S. Regazzoni, “A fictitious play-based game-theoretical approach to alleviating jamming attacks for cognitive radios,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014.
  • [60] F. Slimeni, B. Scheers, Z. Chtourou, V. Le Nir, “Jamming mitigation in cognitive radio networks using a modified Q-learning algorithm,” in 2015 International Conference on Military Communications and Information Systems (ICMCIS), 2015.
  • [61] F. Slimeni, B. Scheers, Z. Chtourou, V. Le Nir, R. Attia, “Cognitive radio jamming mitigation using Markov decision process and reinforcement learning,” Procedia Computer Science, vol. 73, pp. 199–208, 2015.
  • [62] F. Slimeni, B. Scheers, Z. Chtourou, V. L. Nir, R. Attia, “A modified Q-learning algorithm to solve cognitive radio jamming attack,” International Journal of Embedded Systems, vol. 10, no. 1, p. 41, 2018.
  • [63] B. Wang, Y. Wu, K. J. R. Liu, T. C. Clancy, “An anti-jamming stochastic game for cognitive radio networks,” IEEE journal on selected areas in communications, vol. 29, no. 4, pp. 877–889, 2011.
  • [64] B. F. Lo, I. F. Akyildiz, “Multiagent jamming-resilient control channel game for cognitive radio ad hoc networks,” in 2012 IEEE International Conference on Communications (ICC), 2012.
  • [65] L. Xiao, Y. Li, J. Liu, Y. Zhao, “Power control with reinforcement learning in cooperative cognitive radio networks against jamming,” The Journal of Supercomputing, vol. 71, no. 9, pp. 3237–3257, 2015.
  • [66] G. Han, L. Xiao, H. V. Poor, “Two-dimensional anti-jamming communication based on deep reinforcement learning,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017.
  • [67] X. Liu, Y. Xu, L. Jia, Q. Wu, A. Anpalagan, “Anti-jamming communications using spectrum waterfall: A deep reinforcement learning approach,” IEEE Communications Letters, vol. 22, no. 5, pp. 998–1001, 2018.
  • [68] S. Machuzak, S. K. Jayaweera, “Reinforcement learning based anti-jamming with wideband autonomous cognitive radios,” in 2016 IEEE/CIC International Conference on Communications in China (ICCC), 2016.
  • [69] M. A. Aref, S. K. Jayaweera, S. Machuzak, “Multi-agent reinforcement learning based cognitive anti-jamming,” in 2017 IEEE Wireless Communications and Networking Conference (WCNC), 2017.
  • [70] F. Yao, L. Jia, “A collaborative multi-agent reinforcement learning anti-jamming algorithm in wireless networks,” IEEE Wireless Communications Letters, vol. 8, no. 4, pp. 1024–1027, 2019.
  • [71] A. Pourranjbar, G. Kaddoum, A. Ferdowsi, W. Saad, “Reinforcement learning for deceiving reactive jammers in wireless networks,” IEEE Transactions on Communications, vol. 69, no. 6, pp. 3682–3697, 2021.
  • [72] H. Pirayesh, H. Zeng, “Jamming attacks and anti-jamming strategies in wireless networks: A comprehensive survey,” arXiv [cs.CR], 2021.
  • [73] X. Lu, D. Xu, L. Xiao, L. Wang, W. Zhuang, “Anti-jamming communication game for UAV-aided VANETs,” in GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017.
  • [74] L. Xiao, X. Lu, D. Xu, Y. Tang, L. Wang, W. Zhuang, “UAV relay in VANETs against smart jamming with reinforcement learning,” IEEE Transactions on Vehicular Technology, vol. 67, no. 5, pp. 4087–4097, 2018.
  • [75] J. Peng, Z. Zhang, Q. Wu, B. Zhang, “Anti-jamming communications in UAV swarms: A reinforcement learning approach,” IEEE Access, vol. 7, pp. 180532–180543, 2019.
  • [76] Z. Li, Y. Lu, X. Li, Z. Wang, W. Qiao, Y. Liu, “UAV networks against multiple maneuvering smart jamming with knowledge-based reinforcement learning,” IEEE Internet of Things Journal, vol. 8, no. 15, pp. 12289–12310, 2021.
  • [77] X. Lu, J. Jie, Z. Lin, L. Xiao, J. Li, Y. Zhang, “Reinforcement learning based energy efficient robot relay for unmanned aerial vehicles against smart jamming,” Science China Information Sciences, vol. 65, no. 1, 2022.
  • [78] L. Xiao, Y. Li, G. Liu, Q. Li, W. Zhuang, “Spoofing detection with reinforcement learning in wireless networks,” in 2015 IEEE Global Communications Conference (GLOBECOM), 2015.
  • [79] L. Xiao, Y. Li, G. Han, G. Liu, W. Zhuang, “PHY-layer spoofing detection with reinforcement learning in wireless networks,” IEEE Transactions on Vehicular Technology, vol. 65, no. 12, pp. 10037–10047, 2016.
  • [80] L. Xiao, C. Xie, T. Chen, H. Dai, H. V. Poor, “A mobile offloading game against smart attacks,” IEEE Access, vol. 4, pp. 2281–2291, 2016.
  • [81] J. Liu, L. Xiao, G. Liu, Y. Zhao, “Active authentication with reinforcement learning based on ambient radio signals,” Multimedia Tools and Applications, vol. 76, no. 3, pp. 3979–3998, 2017.
  • [82] S. Purkait, “Phishing counter measures and their effectiveness – literature review,” Information Management & Computer Security, vol. 20, no. 5, pp. 382–420, 2012.
  • [83] S. Smadi, N. Aslam, L. Zhang, “Detection of online phishing email using dynamic evolving neural network based on reinforcement learning,” Decision Support Systems, vol. 107, pp. 88–102, 2018.
  • [84] Y. Fang, C. Huang, Y. Xu, Y. Li, “RLXSS: Optimizing XSS detection model to defend against adversarial attacks based on reinforcement learning,” Future internet, vol. 11, no. 8, p. 177, 2019.
  • [85] I. Tariq, M. A. Sindhu, R. A. Abbasi, A. S. Khattak, O. Maqbool, G. F. Siddiqui, “Resolving cross-site scripting attacks through genetic algorithm and reinforcement learning,” Expert Systems with Applications, vol. 168, p. 114386, 2021.
  • [86] F. Caturano, G. Perrone, S. P. Romano, “Discovering reflected cross-site scripting vulnerabilities using a multiobjective reinforcement learning environment,” Computers & Security, vol. 103, p. 102204, 2021.
  • [87] L. Erdodi, Å. Å. Sommervoll, F. M. Zennaro, “Simulating SQL injection vulnerability exploitation using Q-learning reinforcement learning agents,” arXiv [cs.CR], 2021.
There are 87 citations in total.

Details

Primary Language English
Subjects Engineering
Journal Section Research Articles
Authors

Emine Cengiz 0000-0002-6695-9500

Murat Gök 0000-0003-2261-9288

Publication Date April 30, 2023
Submission Date January 17, 2023
Acceptance Date February 12, 2023
Published in Issue Year 2023 Volume: 27 Issue: 2

Cite

APA Cengiz, E., & Gök, M. (2023). Reinforcement Learning Applications in Cyber Security: A Review. Sakarya University Journal of Science, 27(2), 481-503. https://doi.org/10.16984/saufenbilder.1237742