Privacy Protection (Faces)

Overview

As organizations collect more data, there is a need to better protect the identities of individuals in public and private places. Our solution performs face anonymization, and can be used to comply with the General Data Protection Regulation (GDPR) or other data privacy laws.

Our solution automatically detects and mosaics (or blurs) human faces. This is explained in the How It Works section.

Demo

To try our solution on your own computer, install and run PeekingDuck with the configuration file privacy_protection_faces.yml as shown:

Terminal Session

[~user] > peekingduck run --config_path <path/to/privacy_protection_faces.yml>

How It Works

There are two main components to face anonymization:

1. Face detection, and

2. Face de-identification.

1. Face Detection

We use an open-source face detection model known as MTCNN to identify human faces. This allows the application to identify the locations of human faces in a video feed. Each detected face is represented by a bounding box in the form $$[x_1, y_1, x_2, y_2]$$, where $$(x_1, y_1)$$ is the top-left corner of the bounding box and $$(x_2, y_2)$$ is the bottom-right corner. For more information on how to adjust the MTCNN node, check out the MTCNN configurable parameters.
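The bounding-box format above can be illustrated with a short sketch. This is not part of the PeekingDuck API; the helper name and frame dimensions are illustrative:

```python
# Sketch: crop a detected face from a frame using the [x1, y1, x2, y2]
# bounding-box format described above (top-left and bottom-right corners).
import numpy as np

def crop_face(frame: np.ndarray, bbox: list) -> np.ndarray:
    """Return the region bounded by (x1, y1) top-left and (x2, y2) bottom-right."""
    x1, y1, x2, y2 = bbox
    return frame[y1:y2, x1:x2]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a video frame
face = crop_face(frame, [100, 50, 200, 180])
print(face.shape)  # height 130 (y2 - y1), width 100 (x2 - x1)
```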

2. Face De-Identification

To perform face de-identification, we pixelate (mosaic) or apply a Gaussian blur to the areas within the bounding boxes.
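The pixelation idea can be sketched in a few lines of NumPy. This is an illustrative implementation of the technique, not PeekingDuck's internal code; the function name and test frame are hypothetical:

```python
# Sketch: de-identify a bounding-box region by averaging it into a grid of
# blocks, so individual facial features are no longer recoverable.
import numpy as np

def pixelate_region(frame: np.ndarray, bbox: list, mosaic_level: int = 7) -> np.ndarray:
    """Replace the bbox area with a mosaic_level x mosaic_level block mosaic."""
    x1, y1, x2, y2 = bbox
    region = frame[y1:y2, x1:x2].astype(float)
    h, w = region.shape[:2]
    ys = np.linspace(0, h, mosaic_level + 1).astype(int)
    xs = np.linspace(0, w, mosaic_level + 1).astype(int)
    for i in range(mosaic_level):
        for j in range(mosaic_level):
            block = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            block[:] = block.mean(axis=(0, 1))  # fill each block with its average color
    frame[y1:y2, x1:x2] = region.astype(frame.dtype)
    return frame

# A vertical gradient stands in for a face crop: row r has pixel value r.
frame = np.tile(np.arange(64, dtype=float)[:, None, None], (1, 64, 3))
pixelate_region(frame, [0, 0, 64, 64], mosaic_level=4)
```

After pixelation, every pixel within a block shares the block's mean value, which is what makes the face unrecognizable while preserving the overall scene.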

Nodes Used

These are the nodes used in the earlier demo (also in privacy_protection_faces.yml):

nodes:
- input.visual:
    source: 0
- model.mtcnn
- draw.mosaic_bbox
- output.screen
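To blur the detected faces instead of pixelating them, you can swap draw.mosaic_bbox for draw.blur_bbox in the same pipeline. A sketch of the modified config:

nodes:
- input.visual:
    source: 0
- model.mtcnn
- draw.blur_bbox
- output.screen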


1. Face Detection Node

As mentioned, we use the MTCNN model for face detection. It is able to detect human faces with face masks. Please take a look at the benchmarks of object detection models that are included in PeekingDuck if you would like to use a different model or model type better suited to your use case.

2. Face De-Identification Nodes

You can either mosaic or blur the detected faces by using the draw.mosaic_bbox or draw.blur_bbox node in the run config declaration.

3. Adjusting Nodes

With regard to the MTCNN model, some common node behaviors that you might want to adjust are:

• min_size: Specifies the minimum height and width, in pixels, of a face to be detected (default = 40). You may want to decrease min_size to detect smaller faces and increase the number of detections.

• network_thresholds: Specifies the threshold values for the Proposal Network (P-Net), Refine Network (R-Net), and Output Network (O-Net) in the MTCNN model (default = [0.6, 0.7, 0.7]). At each stage, bounding boxes with confidence scores below the corresponding threshold are discarded.

• score_threshold: Specifies the threshold value for the final output (default = 0.7). Bounding boxes with confidence scores below this threshold are discarded from the final output. You may want to lower network_thresholds and score_threshold to increase the number of detections.
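These detection parameters can be overridden in the run config. A hypothetical example; the values shown are illustrative, not recommendations:

nodes:
- input.visual:
    source: 0
- model.mtcnn:
    min_size: 20
    network_thresholds: [0.5, 0.6, 0.6]
    score_threshold: 0.6
- draw.mosaic_bbox
- output.screen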

In addition, some common node behaviors that you might want to adjust for the draw.mosaic_bbox and draw.blur_bbox nodes are:

• mosaic_level: Defines the resolution of the mosaic filter ($$width \times height$$); the value corresponds to the number of rows and columns used to create the mosaic (default = 7). For example, the default value creates a $$7 \times 7$$ mosaic filter over the detected face. Increasing the number increases the intensity of pixelization over an area.

• blur_level: Defines the standard deviation of the Gaussian kernel used in the Gaussian filter. (default = 50) The higher the blur level, the greater the blur intensity.
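The role of blur_level as the standard deviation (sigma) of the Gaussian kernel can be sketched in NumPy. This is an illustrative implementation of the idea, not PeekingDuck's internal code; the function names are hypothetical:

```python
# Sketch: a larger sigma produces a wider Gaussian kernel, spreading each
# pixel's value over more neighbors and hence blurring more strongly.
import numpy as np

def gaussian_kernel_1d(sigma: float) -> np.ndarray:
    """Normalized 1-D Gaussian kernel, truncated at 3 sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    return kernel / kernel.sum()

def blur_rows(image: np.ndarray, sigma: float) -> np.ndarray:
    """Blur each row with the 1-D kernel (applying it along both axes gives a full 2-D blur)."""
    kernel = gaussian_kernel_1d(sigma)
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, image)

img = np.zeros((5, 21))
img[:, 10] = 1.0  # a single bright column
blurred = blur_rows(img, sigma=2.0)  # the bright column is smeared across its neighbors
```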