
Face Emotion Detection

Piotr Parkitny | pparkitny@ischool.berkeley.edu


Problem Statement

Detecting emotion from an image has many applications. One example is automatically analyzing video to determine how people reacted to certain events.

A full presentation is available here –> Presentation

Dataset

The dataset is composed of human faces labeled according to the displayed emotion.

[Sample labeled face images from the dataset]
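For experimentation, the labeled faces can be loaded with a standard torchvision pipeline. Below is a minimal sketch assuming an ImageFolder-style layout (one sub-directory per emotion) and FER-style 48x48 crops; the paths and sizes are assumptions, not taken from the repo.

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed layout: data/train/<emotion>/<image>.png
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # 3 channels for ImageNet-style backbones
    transforms.Resize((48, 48)),                  # FER-style crop size (assumption)
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
print(train_set.classes)  # emotion labels inferred from the folder names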

Project Solution

The diagram below describes the overall solution, which is broken down into Cloud and Edge components.

[Solution architecture diagram]

Cloud Components

The cloud side is composed of Amazon AWS and Google GCP, which are used for training the model, storing results, and displaying real-time results from the Edge device.

  1. Training
    The model is trained on the dataset on Amazon AWS –> Model
  2. BigQuery
    The Google GCP-hosted database that stores real-time input data from the Edge device.
  3. DataStudio
    The DataStudio dashboard hosted on Google GCP displays the data stored in BigQuery.

Edge Device

A Jetson is used as the edge device for running the model.

  1. MQTT Broker –> edge-MQTT-broker. The Edge MQTT broker stores the detected facial emotion.
  2. MQTT Forwarder –> edge-MQTT-forwarder. The Edge MQTT Forwarder subscribes to the local MQTT Broker and publishes to the cloud DB (a sketch follows this list).
  3. Face Detector –> edge-emotion-detector. The Edge Face Detector detects the facial emotion and publishes it to the local MQTT Broker.
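To make the hand-off concrete, here is a minimal sketch of the forwarder pattern: subscribe to the local broker and stream each message into BigQuery. It assumes paho-mqtt and google-cloud-bigquery are installed; the topic name and table id are hypothetical, not taken from the repo.

import time

import paho.mqtt.client as mqtt
from google.cloud import bigquery

BQ_TABLE = "my-project.emotions.detections"  # hypothetical BigQuery table id
TOPIC = "face/emotion"                       # hypothetical MQTT topic

bq = bigquery.Client()

def on_message(client, userdata, msg):
    # Each payload is assumed to be a single emotion label, e.g. "happy".
    row = {"emotion": msg.payload.decode(), "ts": time.time()}
    errors = bq.insert_rows_json(BQ_TABLE, [row])  # streaming insert
    if errors:
        print("BigQuery insert failed:", errors)

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)  # broker running locally on the Jetson
client.subscribe(TOPIC)
client.loop_forever()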

Cloud Training


Training is done on Amazon AWS EC2 using a g4dn.xlarge instance. Below are the steps required to set up training. First, SSH into the instance, forwarding port 7777 so JupyterLab is reachable locally:

ssh -i us-east-1-jetson.pem -L 7777:127.0.0.1:7777 ubuntu@ec2-34-238-51-68.compute-1.amazonaws.com 


Start Docker and JupyterLab

docker run --privileged --shm-size=1g --ulimit memlock=-1 --ipc=host --net=host --gpus all -it -v $(pwd):/workspace nvcr.io/nvidia/pytorch:21.06-py3
jupyter-lab --ip=0.0.0.0 --port 7777 --allow-root --no-browser --NotebookApp.token=''
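Inside JupyterLab, training follows a standard PyTorch loop. The actual model lives in the Model notebook linked above; the backbone, learning rate, and epoch count in this sketch are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda")
model = models.resnet18(num_classes=7).to(device)  # 7 emotion classes (assumption)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(20):
    model.train()
    for images, labels in train_loader:  # loader from the dataset sketch above
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()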

Weights & Biases

Weights & Biases is used for tracking the progress of the training and keeping all results saved.
Below is an example of the generated graphs.
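The tracking hooks themselves take only a few lines; a minimal sketch of how W&B can be wired into the loop (the project name and metric names are illustrative, and train_one_epoch / evaluate stand in for the real training code):

import wandb

wandb.init(project="face-emotion-detection")  # hypothetical project name

for epoch in range(20):
    train_loss = train_one_epoch(model, train_loader)  # placeholder helpers
    val_acc = evaluate(model, val_loader)
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_acc": val_acc})

wandb.finish()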

Edge Inference


Inference on the edge runs on the Jetson; the following commands start the process.

Start Broker

cd edge-MQTT-broker
./start.sh

Start Forwarder

cd edge-MQTT-forwarder
./start.sh

Start Face Emotion Detector

sudo xrandr --fb 1900x1000
xhost +
cd edge-face-detect
./start.sh
python3 emotion_detect.py
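For reference, the core of a detection loop like emotion_detect.py can be sketched as: grab a frame, find faces with a Haar cascade, classify each crop, and publish the label to the local broker. The model file, class list, and topic below are assumptions, not taken from the repo.

import cv2
import torch
import paho.mqtt.client as mqtt

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

model = torch.jit.load("emotion_model.pt").eval()  # hypothetical exported model
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

client = mqtt.Client()
client.connect("localhost", 1883)  # local broker started above

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        tensor = torch.from_numpy(face).float().div(255)
        tensor = tensor.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)  # (1, 3, 48, 48)
        with torch.no_grad():
            emotion = EMOTIONS[model(tensor).argmax().item()]
        client.publish("face/emotion", emotion)  # hypothetical topic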

Below are some images captured while running on the Jetson. No noticeable lag is observed and overall performance is very good.

Dashboard


The dashboard pulls its data from BigQuery, where the MQTT Forwarder stores the detected emotions along with a timestamp.
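As an illustration, the dashboard's aggregation amounts to a simple query; a sketch of the equivalent lookup in Python (the table id is hypothetical):

from google.cloud import bigquery

bq = bigquery.Client()
query = """
    SELECT emotion, COUNT(*) AS n
    FROM `my-project.emotions.detections`
    GROUP BY emotion
    ORDER BY n DESC
"""
for row in bq.query(query).result():
    print(row.emotion, row.n)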

Results