Actor Critic Deep Reinforcement Learning for Neural Malware Control
- Yu Wang,
- Jack W. Stokes,
- Mady Marinescu
Proceedings of the AAAI Conference on Artificial Intelligence
In addition to using signatures, antimalware products also detect malicious attacks by evaluating unknown files in an emulated environment, i.e., a sandbox, prior to execution on a computer’s native operating system. During emulation, a file cannot be scanned indefinitely, so antimalware engines often cap the number of instructions to be executed based on a set of heuristics. These heuristics make the halting decision using only partial information, which can cause a file to be executed for either too many or too few instructions. Moreover, this approach is vulnerable if attackers learn the set of heuristics.
Recent research uses a deep reinforcement learning (DRL) model employing a Deep Q-Network (DQN) to learn when to halt the emulation of a file. In this paper, we propose a new DRL-based system which instead employs a modified actor critic (AC) framework for the emulation halting task. This AC model dynamically predicts the best time to halt the file’s execution based on a sequence of system API calls. Unlike the earlier models, the new model can handle adversarial attacks by simulating their behavior with the critic model. The new AC model performs significantly better than both the DQN model and the antimalware engine’s heuristics. In terms of the halting decision, the new model stops the execution of unknown files up to 2.5% earlier than the DQN model and 93.6% earlier than the heuristics. For the task of detecting malicious files, at a false positive rate of 1%, the proposed AC model increases the true positive rate by 9.9%, from 69.5% to 76.4%, compared to the DQN model, and by 83.4%, from 41.2% to 76.4%, compared to a recently proposed LSTM model.
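To make the halting mechanism concrete, below is a minimal PyTorch sketch of an actor-critic network that consumes a stream of emulator API-call events and decides after each event whether to halt. The architecture, layer sizes, event vocabulary, and halting threshold here are illustrative assumptions for this sketch, not the implementation from the paper.

```python
import torch
import torch.nn as nn

class HaltingActorCritic(nn.Module):
    """Illustrative actor-critic over a stream of emulator API-call events.

    The actor emits halt/continue logits after each event; the critic
    estimates the value of the current emulation state. Layer sizes and
    the event-embedding scheme are assumptions, not the paper's design.
    """

    def __init__(self, num_api_events, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_api_events, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.actor = nn.Linear(hidden_dim, 2)   # logits: [continue, halt]
        self.critic = nn.Linear(hidden_dim, 1)  # state-value estimate

    def forward(self, api_ids, state=None):
        # api_ids: (batch, seq_len) integer IDs of observed API calls
        x = self.embed(api_ids)
        out, state = self.rnn(x, state)
        h = out[:, -1]                           # summary of the trace so far
        return self.actor(h), self.critic(h), state


# Hypothetical halting loop: feed events as the emulator reports them and
# stop once the actor's halt probability crosses an (assumed) threshold.
model = HaltingActorCritic(num_api_events=5000)
state = None
with torch.no_grad():
    for event_id in [17, 342, 9, 4871]:          # stand-in emulator events
        ids = torch.tensor([[event_id]])         # one event at a time
        logits, value, state = model(ids, state)
        halt_prob = torch.softmax(logits, dim=-1)[0, 1]
        if halt_prob > 0.9:                      # assumed threshold
            break
```

One plausible training setup would update the actor's halt/continue policy from an advantage signal computed with the critic's value estimates, in the usual actor-critic fashion; the same critic could then also score simulated adversarial traces, which is the role the abstract attributes to it.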