Face Emotion Detection
Piotr Parkitny | pparkitny@ischool.berkeley.edu
Detecting emotion from an image has many applications; for example, video can be analyzed automatically to determine how people reacted to certain events.
A full presentation is available here: Presentation
The dataset is composed of human faces labeled according to the displayed emotion.
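As an illustration, a label mapping like the following is typical for this kind of dataset. The seven classes below follow the common FER2013 convention; the actual dataset's classes and ordering may differ:

```python
# Hypothetical label mapping, assuming FER2013-style emotion classes.
# The project's actual dataset may use different labels or ordering.
EMOTIONS = {
    0: "angry",
    1: "disgust",
    2: "fear",
    3: "happy",
    4: "sad",
    5: "surprise",
    6: "neutral",
}

def label_name(class_index: int) -> str:
    """Translate an integer class label into its emotion name."""
    return EMOTIONS[class_index]
```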
The diagram below describes the overall solution.
The solution is broken down into Cloud and Edge components.
The cloud layer is composed of Amazon AWS and Google GCP, which are used for training, storing results, and displaying real-time results from the edge device.
An NVIDIA Jetson is used as the edge device for running the model.
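The edge-to-cloud handoff can be sketched as a small JSON message published from the Jetson and consumed in the cloud. The field names below are assumptions for illustration, not the project's actual schema:

```python
import json
from datetime import datetime, timezone

def make_emotion_message(emotion: str, confidence: float) -> str:
    """Serialize one detection as the JSON payload the edge device
    would publish over MQTT (field names are hypothetical)."""
    payload = {
        "emotion": emotion,
        "confidence": confidence,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

# On the cloud side, the forwarder would parse the same payload back:
record = json.loads(make_emotion_message("happy", 0.93))
```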
Training is done on an Amazon AWS EC2 g4dn.xlarge instance. The steps below set up the environment: SSH in with a tunnel that forwards local port 7777 to the remote JupyterLab server, launch the NVIDIA PyTorch container, and start JupyterLab inside it.
ssh -i us-east-1-jetson.pem -L 7777:127.0.0.1:7777 ubuntu@ec2-34-238-51-68.compute-1.amazonaws.com
docker run --privileged --shm-size=1g --ulimit memlock=-1 --ipc=host --net=host --gpus all -it -v $(pwd):/workspace nvcr.io/nvidia/pytorch:21.06-py3
jupyter-lab --ip=0.0.0.0 --port 7777 --allow-root --no-browser --NotebookApp.token=''
Weights & Biases is used for tracking the progress of the training along with keeping all the results saved.
Below is an example of the graphs generated during training.
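A minimal sketch of how per-epoch metrics might be logged to Weights & Biases; the project name and metric keys here are placeholders, not the project's actual configuration:

```python
def epoch_metrics(epoch: int, train_loss: float,
                  val_loss: float, val_acc: float) -> dict:
    """Assemble the metrics dict that would be passed to wandb.log()."""
    return {
        "epoch": epoch,
        "train/loss": train_loss,
        "val/loss": val_loss,
        "val/accuracy": val_acc,
    }

try:
    import wandb  # available inside the training container
    # Hypothetical project name; mode="disabled" avoids network calls here.
    run = wandb.init(project="face-emotion-detection", mode="disabled")
    run.log(epoch_metrics(1, 1.42, 1.51, 0.47))
    run.finish()
except ImportError:
    pass  # wandb not installed; the dict above still shows the logging schema
```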
Inference runs on the Jetson; the following commands start the process.
Start Broker
cd edge-MQTT-broker
./start.sh
Start Forwarder
cd edge-MQTT-forwarder
./start.sh
Start Face Emotion Detector
sudo xrandr --fb 1900x1000
xhost +
cd edge-face-detect
./start.sh
python3 emotion_detect.py
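Inside emotion_detect.py, the final classification step boils down to a softmax over the model's raw outputs followed by an argmax. A stdlib-only sketch, assuming seven FER-style classes (the real script's class list and model interface may differ):

```python
import math

# Hypothetical class list; the deployed model may use a different ordering.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def classify(logits: list) -> tuple:
    """Softmax the raw model outputs and return (label, probability)."""
    m = max(logits)                               # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[best], probs[best]
```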
Below are some sample frames captured while running on the Jetson. No noticeable lag is observed and overall performance is very good.
The dashboard pulls its data from BigQuery, where the MQTT Forwarder stores each detected emotion along with a timestamp.
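The forwarder's write path can be sketched as building one row per MQTT message and streaming it into BigQuery. The table id and field names here are assumptions for illustration:

```python
import json
from datetime import datetime, timezone

def mqtt_to_row(mqtt_payload: str) -> dict:
    """Turn one MQTT message into a BigQuery row dict
    (field names are hypothetical)."""
    data = json.loads(mqtt_payload)
    return {
        "emotion": data["emotion"],
        "confidence": data.get("confidence"),
        "timestamp": data.get("timestamp")
        or datetime.now(timezone.utc).isoformat(),
    }

try:
    from google.cloud import bigquery  # installed on the forwarder host
    client = bigquery.Client()
    # Hypothetical fully-qualified table id.
    client.insert_rows_json(
        "my-project.emotions.detections",
        [mqtt_to_row('{"emotion": "happy", "confidence": 0.93}')],
    )
except Exception:
    pass  # library or GCP credentials unavailable outside the cloud environment
```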