• KSII Transactions on Internet and Information Systems
    Monthly Online Journal (eISSN: 1976-7277)

Binary Hashing CNN Features for Action Recognition

Vol. 12, No. 9, September 29, 2018
DOI: 10.3837/tiis.2018.09.016


The purpose of this work is to solve the problem of representing an entire video with Convolutional Neural Network (CNN) features for human action recognition. Because GPU memory is insufficient to hold a whole video, it has been difficult to use the full video as CNN input for end-to-end learning. A typical workaround is to use sampled video frames as inputs and the video-level labels as supervision. One major issue with this popular approach is that the local samples may neither contain the information indicated by the global labels nor carry sufficient motion information. To address this issue, we propose a binary hashing method that enhances the local feature extractors. First, we extract local features and aggregate them into global features using maximum/minimum pooling. Second, we apply binary hashing to capture motion features. Finally, we concatenate the hashing features with the global features under different normalization schemes to train the classifier. Experimental results on the JHMDB and MPII-Cooking datasets show that, for these new local features, binary hashing of the sparsely sampled features leads to significant performance improvements.
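The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hashing here is a sign-of-random-projection scheme and the L2 normalization is one common choice, both assumptions; the paper's exact hashing function, code length, and normalization variants may differ.

```python
import numpy as np

def aggregate_and_hash(local_feats, n_bits=128, seed=0):
    """Illustrative sketch: pool per-frame CNN features into a global
    descriptor, binary-hash the frames to capture coarse motion cues,
    and concatenate the normalized parts.

    local_feats: (num_frames, feat_dim) array of per-frame CNN features.
    """
    # Step 1: aggregate local features into global features
    # via element-wise maximum/minimum pooling over frames.
    global_max = local_feats.max(axis=0)
    global_min = local_feats.min(axis=0)
    global_feat = np.concatenate([global_max, global_min])

    # Step 2: binary hashing (assumed here: random projection + sign,
    # with the binary codes mean-pooled over time as a motion summary).
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((local_feats.shape[1], n_bits))
    frame_codes = (local_feats @ proj > 0).astype(np.float32)
    motion_feat = frame_codes.mean(axis=0)

    # Step 3: normalize each part and concatenate for the classifier
    # (L2 normalization shown; the paper compares several schemes).
    def l2(v):
        return v / (np.linalg.norm(v) + 1e-8)

    return np.concatenate([l2(global_feat), l2(motion_feat)])
```

For example, 10 sampled frames with 512-dimensional features yield a fixed-length descriptor of 2*512 + 128 = 1152 dimensions, independent of how many frames were sampled.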



Cite this article

[IEEE Style]
W. Li, C. Feng, B. Xiao and Y. Chen, "Binary Hashing CNN Features for Action Recognition," KSII Transactions on Internet and Information Systems, vol. 12, no. 9, pp. 4412-4428, 2018. DOI: 10.3837/tiis.2018.09.016.

[ACM Style]
Weisheng Li, Chen Feng, Bin Xiao, and Yanquan Chen. 2018. Binary Hashing CNN Features for Action Recognition. KSII Transactions on Internet and Information Systems, 12, 9, (2018), 4412-4428. DOI: 10.3837/tiis.2018.09.016.