Photos of faces uploaded online are vulnerable to malicious actors who can scrape facial images from online sources and intrude on personal privacy via unauthorized use of deep neural network (DNN) facial recognition models. This paper presents FaceCloak, a novel personalized face privacy protection system, which can generate defensive, identity-specific, universal face privacy masks from a single image of a user, causing facial recognition models to fail. FaceCloak introduces a three-stage personalized face perturbation learning methodology: (1) It generates a small set of high-variety synthetic face images of a person from a single image of that person. (2) It learns face cloaking by adding extra protection to key facial-identity leakage regions through iterative perturbation generation over the small set of synthetic images, effectively shifting the user's identity embedding toward a distant anchor identity and away from a similar one. (3) It generates a personalized identity-protective mask in the form of pixel-wise cloaking, which is lightweight and can be efficiently applied to any facial image of the user while maintaining good human-perceived quality. Extensive experiments on three popular face datasets across ten recognition models show the effectiveness of FaceCloak compared with 29 representative existing methods.
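The embedding-shifting idea in stage (2) can be illustrated with a toy objective: reward similarity to a distant anchor identity while penalizing similarity to a nearby (similar) identity. This is only a minimal sketch of the intuition described above, not the paper's actual loss; the function and the 4-d embeddings below are illustrative assumptions.

```python
import numpy as np

def cloak_loss(embedding, distant_anchor, similar_anchor):
    """Illustrative cloaking objective (not FaceCloak's exact loss):
    lower values mean the embedding sits closer to the distant anchor
    identity and farther from the similar identity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos(embedding, similar_anchor) - cos(embedding, distant_anchor)

# Toy 4-d identity embeddings for demonstration only.
user    = np.array([1.0, 0.0, 0.0, 0.0])
similar = np.array([0.9, 0.1, 0.0, 0.0])
distant = np.array([0.0, 0.0, 1.0, 0.0])

# Shifting the embedding toward the distant anchor lowers the loss,
# which is the direction a cloaking perturbation would optimize.
shifted = user + 0.5 * (distant - user)
assert cloak_loss(shifted, distant, similar) < cloak_loss(user, distant, similar)
```

In practice the perturbation is applied in pixel space and the embedding is produced by a face recognition model; the cosine-similarity trade-off above is just the geometric intuition.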
We use Python version 3.10.16 for all of our experiments.
- Clone our repository.
- Create a virtual environment using Conda or venv.
- Install all dependencies from `requirements.txt`.
- Download the datasets and models from the links below.
Our splits of probe and gallery datasets are available at the links below:
Datasets: [Google Drive Link]
Place the `data` folder at the top-level directory of the repository.
Models: [Google Drive Link]
Place the `model_checkpoints` folder at the top-level directory of the repository.
We provide a handler script, `src/run_cloak_experiments.py`, that supports batch submission of Slurm jobs to run face identification experiments with different parameters. The script documents each config variable. When run, it creates a Slurm submission script for each combination of hyperparameters enumerated in the file. You can run it with:
python src/run_cloak_experiments.py
If you wish to run a single job, you may configure a single set of hyperparameters in this file, or run a custom job via command-line arguments with:
python src/cloak.py
The outputs of both of these runs are stored in `outputs/{run_name}`, where `run_name` is derived by the job-running script from the chosen hyperparameters. Cloaked images and synthetic images are stored in `data/cloaked/{run_name}`.
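The batch-submission workflow described above can be sketched as follows. Note that the grid variables, the `run_name` scheme, and the `#SBATCH` directives here are illustrative assumptions for exposition; the actual config variables are documented in `src/run_cloak_experiments.py`.

```python
import itertools
from pathlib import Path

# Hypothetical hyperparameter grid; the real variables live in
# src/run_cloak_experiments.py and differ from these.
GRID = {
    "dataset": ["lfw", "celeba"],
    "epsilon": [0.05, 0.1],
}

def make_job_scripts(out_dir="slurm_jobs"):
    """Write one Slurm submission script per hyperparameter combination,
    mirroring the one-job-per-combination behavior described above."""
    Path(out_dir).mkdir(exist_ok=True)
    keys = sorted(GRID)
    scripts = []
    for values in itertools.product(*(GRID[k] for k in keys)):
        params = dict(zip(keys, values))
        run_name = "_".join(f"{k}-{v}" for k, v in params.items())
        args = " ".join(f"--{k} {v}" for k, v in params.items())
        script = Path(out_dir) / f"{run_name}.sh"
        script.write_text(
            "#!/bin/bash\n"
            f"#SBATCH --job-name={run_name}\n"
            f"python src/cloak.py {args}\n"
        )
        scripts.append(script)
    return scripts

# A 2x2 grid yields 4 job scripts, one per combination.
jobs = make_job_scripts()
assert len(jobs) == 4
```

Each generated script would then be handed to `sbatch`; keeping `run_name` deterministic from the hyperparameters is what lets outputs land predictably under `outputs/{run_name}`.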
Thanks to the following repositories for providing open-source code that assisted in the development of this project:
This work was funded by the Georgia Tech Research Institute PhD Fellowship.
