• KSII Transactions on Internet and Information Systems
    Monthly Online Journal (eISSN: 1976-7277)

Visual Analysis of Deep Q-network

Vol. 15, No. 3, March 31, 2021
DOI: 10.3837/tiis.2021.03.003

Abstract

In recent years, deep reinforcement learning (DRL) models have attracted great interest owing to their success in a variety of challenging tasks. Deep Q-Network (DQN) is a widely used deep reinforcement learning model that trains an intelligent agent to execute optimal actions while interacting with an environment. This model is well known for its ability to surpass skilled human players across many Atari 2600 games. Although DQN has achieved excellent performance in practice, a clear understanding of why the model works is still lacking. In this paper, we present a visual analytics system for understanding deep Q-networks in a non-blind manner. Based on data stored during the training and testing process, four coordinated views are designed to expose the internal execution mechanism of DQN from different perspectives. We report the system performance and demonstrate its effectiveness through two case studies. Using our system, users can learn the relationship between states and Q-values, the function of the convolutional layers, the strategies learned by DQN, and the rationality of the decisions made by the agent.
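The abstract refers to the Q-values that the DQN agent assigns to each action in a state. As background, a minimal sketch of the standard DQN regression target, y = r + γ·max_a′ Q_target(s′, a′) (with y = r for terminal states), is shown below; the arrays standing in for the target network's outputs are purely illustrative and are not from the paper.

```python
import numpy as np

def td_targets(rewards, next_q_values, dones, gamma=0.99):
    """Compute DQN regression targets for a batch of transitions.

    rewards:        (batch,)            immediate rewards r
    next_q_values:  (batch, n_actions)  target-network Q-values for s'
    dones:          (batch,)            1.0 if the transition ended the episode
    """
    max_next_q = next_q_values.max(axis=1)          # best action value in s'
    return rewards + gamma * max_next_q * (1.0 - dones)

# Illustrative stand-in data (not from the paper):
rewards = np.array([1.0, 0.0, 0.5, 1.0])
dones = np.array([0.0, 0.0, 0.0, 1.0])              # last transition is terminal
next_q = np.array([[0.2, 1.5, -0.3],
                   [0.9, 0.1, 0.4],
                   [-1.0, 0.0, 2.0],
                   [3.0, 3.0, 3.0]])
print(td_targets(rewards, next_q, dones))
```

The (1.0 - dones) factor zeroes out the bootstrapped term for terminal states, so the target for the final transition above reduces to its reward alone.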




Cite this article

[IEEE Style]
D. Seng, J. Zhang and X. Shi, "Visual Analysis of Deep Q-network," KSII Transactions on Internet and Information Systems, vol. 15, no. 3, pp. 853-873, 2021. DOI: 10.3837/tiis.2021.03.003.

[ACM Style]
Dewen Seng, Jiaming Zhang, and Xiaoying Shi. 2021. Visual Analysis of Deep Q-network. KSII Transactions on Internet and Information Systems, 15, 3, (2021), 853-873. DOI: 10.3837/tiis.2021.03.003.