KSII Transactions on Internet and Information Systems
    Monthly Online Journal (eISSN: 1976-7277)

Model Inversion Attack: Analysis under Gray-box Scenario on Deep Learning based Face Recognition System

Vol. 15, No. 3, March 31, 2021
DOI: 10.3837/tiis.2021.03.015

Abstract

In a wide range of machine learning (ML) applications, the training data contains privacy-sensitive information that should be kept secure. Training an ML system on privacy-sensitive data embeds information about that data in the model: because the model's structure and parameters are fine-tuned by the training data, the model can be abused to estimate the data in a reverse process called a model inversion attack (MIA). Although MIA has been applied in the literature to shallow neural network recognizers and its privacy threat has been confirmed, its effectiveness on a deep learning (DL) model remained in question due to the complexity of a DL model's structure, the large number of its parameters, the huge size of the training data, and the large number of registered users and hence class labels. This work first analyzes the feasibility of MIA on a deep learning model of a recognition system, namely a face recognizer. Second, in contrast to conventional MIA under a white-box scenario, which assumes partial access to the users' non-sensitive information in addition to the model structure, the MIA here is carried out on a deep face recognition system using only the model structure and parameters, without any user information; in this respect, it operates under a semi-white-box, or gray-box, scenario. Experimental results targeting five registered users of a CNN-based face recognition system confirm that users' face images can be regenerated by MIA under a gray-box scenario even for a deep model. Although for some images the evaluation recognition score is low and the generated images are not easily recognizable, for others the score is high and facial features of the targeted identities are observable. The objective and subjective evaluations demonstrate that a privacy cyber-attack by MIA on a deep recognition system is not only feasible but also a serious and growing threat, given the considerable potential for integrating more advanced ML techniques into MIA.
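
To make the attack mechanism concrete, the sketch below shows a generic gradient-based model inversion attack in Python/PyTorch: starting from a neutral image, gradient ascent on the target user's class score recovers an input the model strongly associates with that identity, using only the model structure and parameters (the gray-box assumption). All names, shapes, and hyperparameters here are illustrative assumptions, not the authors' exact procedure.

import torch

def invert_class(model, target_class, image_shape=(1, 3, 64, 64),
                 steps=500, lr=0.1):
    """Reconstruct a representative input for `target_class` by gradient
    ascent on its class score. Gray-box setting: only the trained model's
    structure and parameters are used, no user data. (Illustrative sketch,
    not the authors' exact procedure.)"""
    model.eval()
    # Start from a neutral gray image rather than any user information.
    x = torch.full(image_shape, 0.5, requires_grad=True)
    optimizer = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target class score, i.e. minimize its negative.
        loss = -logits[0, target_class]
        loss.backward()
        optimizer.step()
        # Keep the estimate in the valid pixel range.
        with torch.no_grad():
            x.clamp_(0.0, 1.0)
    return x.detach()

In the paper's setting, each registered user corresponds to one class label of the recognizer, so running this loop once per target class yields one reconstructed face estimate per user.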



Cite this article

[IEEE Style]
M. Khosravy, K. Nakamura, Y. Hirose, N. Nitta, N. Babaguchi, "Model Inversion Attack: Analysis under Gray-box Scenario on Deep Learning based Face Recognition System," KSII Transactions on Internet and Information Systems, vol. 15, no. 3, pp. 1100-1118, 2021. DOI: 10.3837/tiis.2021.03.015.

[ACM Style]
Mahdi Khosravy, Kazuaki Nakamura, Yuki Hirose, Naoko Nitta, and Noboru Babaguchi. 2021. Model Inversion Attack: Analysis under Gray-box Scenario on Deep Learning based Face Recognition System. KSII Transactions on Internet and Information Systems, 15, 3, (2021), 1100-1118. DOI: 10.3837/tiis.2021.03.015.

[BibTeX Style]
@article{tiis:24366, title="Model Inversion Attack: Analysis under Gray-box Scenario on Deep Learning based Face Recognition System", author="Mahdi Khosravy and Kazuaki Nakamura and Yuki Hirose and Naoko Nitta and Noboru Babaguchi", journal="KSII Transactions on Internet and Information Systems", DOI={10.3837/tiis.2021.03.015}, volume={15}, number={3}, year="2021", month={March}, pages={1100-1118}}