Add sphere demo docs.

This commit is contained in:
Brandon Amos 2016-09-15 08:43:39 -04:00
parent 8bc8227937
commit bb64d6917e
6 changed files with 101 additions and 4 deletions


@@ -1,4 +1,8 @@
# Demo 1: Real-Time Web Demo
## Demo 1: Real-Time Web Demo
Released by [Brandon Amos](http://bamos.github.io) on 2015-10-13.
---
See [our YouTube video](https://www.youtube.com/watch?v=LZJOTRkjZA4)
of using this in a real-time web application
for face recognition.


@@ -1,4 +1,8 @@
# Demo 2: Comparing two images
## Demo 2: Comparing two images
Released by [Brandon Amos](http://bamos.github.io) on 2015-10-13.
---
The [comparison demo](https://github.com/cmusatyalab/openface/blob/master/demos/compare.py) outputs the predicted similarity
score of two faces by computing the squared L2 distance between
their representations.
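
The squared L2 comparison can be sketched in a few lines. The random unit vectors below are hypothetical stand-ins for the 128-dimensional representations the network produces; in the real demo they come from a forward pass per aligned face image.

```python
import numpy as np

# Hypothetical stand-ins for two faces' 128-D unit-norm representations.
rng = np.random.default_rng(0)
a = rng.standard_normal(128)
a /= np.linalg.norm(a)
b = rng.standard_normal(128)
b /= np.linalg.norm(b)

# Squared L2 distance: 0 for identical representations,
# larger for more dissimilar faces.
d = float(np.sum((a - b) ** 2))
```

For unit-norm vectors the squared distance equals `2 - 2 * a.dot(b)`, so it always falls in [0, 4]; the compare demo reports this value as the similarity score.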


@@ -1,4 +1,9 @@
# Demo 3: Training a Classifier
## Demo 3: Training a Classifier
Released by [Brandon Amos](http://bamos.github.io) on 2015-10-13.
---
OpenFace's core provides a feature extraction method to
obtain a low-dimensional representation of any face.
[demos/classifier.py](https://github.com/cmusatyalab/openface/blob/master/demos/classifier.py)

docs/demo-4-sphere.md Normal file

@@ -0,0 +1,79 @@
## Demo 4: Real-Time Face Embedding Visualization
Released by [Brandon Amos](http://bamos.github.io) and
[Gabriel Farina](https://github.com/gabrfarina) on 2016-09-12.
---
![](https://raw.githubusercontent.com/cmusatyalab/openface/master/images/sphere-demo/demo.gif)
<center>
![](https://raw.githubusercontent.com/cmusatyalab/openface/master/images/sphere-demo/exhibit-amos.png)
</center>
We had a great opportunity
(*thanks to Jan Harkes, Alison Langmead, and Aaron Henderson*)
to present a short OpenFace demo
in the [Data (after)Lives art exhibit](https://uag.pitt.edu/Detail/occurrences/370)
at the University of Pittsburgh, which is live from Sept 8, 2016 to Oct 14, 2016
and investigates the relationship between the human notions of self and
technical alternative, externalized, and malleable representations of identity.
We have released the source code behind this demo in our main
GitHub repository in
[demos/sphere.py](https://github.com/cmusatyalab/openface/blob/master/demos/sphere.py).
This exhibit also features [two other art pieces](https://raw.githubusercontent.com/cmusatyalab/openface/master/images/sphere-demo/exhibits-nosenzo.png)
by [Sam Nosenzo](http://www.pitt.edu/~san76/),
[Alison Langmead](http://www.haa.pitt.edu/person/alison-langmead/),
and [Aaron Henderson](http://www.aaronhenderson.com/) that use OpenFace.
### How this is implemented
This is a short description of our implementation in
[demos/sphere.py](https://github.com/cmusatyalab/openface/blob/master/demos/sphere.py),
which is only ~300 lines of code.
As a brief intro to OpenFace: we provide face recognition with
a deep neural network that embeds faces on a sphere.
(See [our tech report](http://reports-archive.adm.cs.cmu.edu/anon/2016/CMU-CS-16-118.pdf)
for a more detailed intro to how OpenFace works.)
Faces are often embedded onto a 128-dimensional sphere.
For this demo, we re-trained a neural network to embed faces onto a
3-dimensional sphere that we show in real-time on top of a webcam feed.
The 3-dimensional embedding doesn't have the same accuracy as the
128-dimensional embedding, but it's sufficient to illustrate how
the embedding space distinguishes between different people.
In this demo:
+ We first use [OpenCV](http://opencv.org/) to capture, process, and display
a video feed from the camera.
+ The face detections and the embedding of each face can be obtained with
[dlib](http://blog.dlib.net/) and OpenFace in
[a few lines of code](http://cmusatyalab.github.io/openface/usage/).
+ The color of each embedding is chosen by mapping the face's location
in the frame to a number between 0 and 1 and then applying
a [matplotlib colormap](http://matplotlib.org/examples/color/colormaps_reference.html).
+ To keep all of the graphics on a single panel, we draw the sphere on
top of the same OpenCV buffer as the video.
[OpenCV only has 2D drawing primitives](http://docs.opencv.org/2.4/modules/core/doc/drawing_functions.html),
so we [isometrically project](https://en.wikipedia.org/wiki/Isometric_projection)
the points from the 3D sphere into 2D before drawing them with OpenCV's 2D primitives.
+ Since the images from the video are noisy, the embeddings jump around
the sphere a lot if they aren't dampened.
We smooth this out by using
[dlib's object tracker](http://blog.dlib.net/2015/02/dlib-1813-released.html)
to follow each face across the video frames and keeping a running
(dampened) average of its embedding.
+ Face detection and recognition cause the low frame rate.
The frame rate could be improved by running detection and recognition
only every few frames and using face tracking (which is fast) in between to
update the face locations.
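
The position-to-color step above can be sketched without matplotlib. This hypothetical helper normalizes the face's horizontal position and blends between two RGB endpoints as a minimal stand-in for a real colormap lookup:

```python
def position_to_color(x, frame_width):
    """Map a face's horizontal pixel position to an RGB color.

    t is the position normalized to [0, 1]; the demo would pass t to a
    matplotlib colormap, but a blue-to-red blend is enough to show the idea.
    """
    t = max(0.0, min(1.0, x / float(frame_width)))
    blue, red = (0, 0, 255), (255, 0, 0)
    return tuple(round((1 - t) * b + t * r) for b, r in zip(blue, red))
```

A face at the left edge gets pure blue, a face at the right edge pure red, so two people standing apart get visibly different sphere colors.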
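
The isometric projection step can be sketched as two rotations followed by dropping the depth coordinate. This is a generic isometric-style projection, not the demo's exact code:

```python
import numpy as np

def iso_project(p):
    """Project a 3-D point to 2-D for OpenCV's 2-D drawing primitives:
    rotate 45 degrees about the vertical axis, then ~35.26 degrees about
    the horizontal axis, and drop the depth coordinate."""
    b = np.radians(45.0)
    a = np.arctan(1.0 / np.sqrt(2.0))  # ~35.264 degrees
    ry = np.array([[ np.cos(b), 0.0, np.sin(b)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(b), 0.0, np.cos(b)]])
    rx = np.array([[1.0, 0.0,        0.0       ],
                   [0.0, np.cos(a), -np.sin(a)],
                   [0.0, np.sin(a),  np.cos(a)]])
    q = rx @ ry @ np.asarray(p, dtype=float)
    return q[0], q[1]  # pixel-space offsets; depth q[2] is discarded
```

Because rotations preserve length, any embedding on the unit sphere lands within the unit disk in 2D, so the projected sphere can be scaled and offset directly onto the video buffer.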
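
The dampening step can be sketched as an exponential moving average that is projected back onto the sphere after each update. This is a hypothetical helper illustrating the idea; in the demo the average is kept per face track from dlib's object tracker:

```python
import numpy as np

class DampenedEmbedding:
    """Running (dampened) average of one tracked face's embedding."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # weight given to the newest observation
        self.avg = None

    def update(self, emb):
        emb = np.asarray(emb, dtype=float)
        if self.avg is None:
            self.avg = emb
        else:
            self.avg = (1.0 - self.alpha) * self.avg + self.alpha * emb
        # Renormalize so the smoothed point stays on the unit sphere.
        return self.avg / np.linalg.norm(self.avg)
```

A small `alpha` makes the drawn point glide toward new observations instead of jumping with every noisy frame.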
### Running on your computer
To run this on your computer:
1. [Set up OpenFace](http://cmusatyalab.github.io/openface/setup/).
2. Download the 3D model from
[here](http://openface-models.storage.cmusatyalab.org/nn4.small2.3d.v1.t7).
3. Run [demos/sphere.py](https://github.com/cmusatyalab/openface/blob/master/demos/sphere.py)
with the `--networkModel` argument pointing to the 3D model.


@@ -9,6 +9,10 @@ deep neural networks.*
## News
+ 2016-09-15: We presented OpenFace in the
[Data (after)Lives](https://uag.pitt.edu/Detail/occurrences/370) art exhibit
at the University of Pittsburgh and have released the code as
[Demo 4: Real-time Face Embedding Visualization](demo-4-sphere/).
+ 2016-08-09: [New blog post: (Face) Image Completion with Deep Learning in TensorFlow](http://bamos.github.io/2016/08/09/deep-completion/). ([OpenFace group discussion on it](https://groups.google.com/forum/#!topic/cmu-openface/h7t-URw7zJA))
+ 2016-06-01: [OpenFace tech report released](http://reports-archive.adm.cs.cmu.edu/anon/2016/CMU-CS-16-118.pdf)
+ 2016-01-19: OpenFace 0.2.0 released!


@@ -14,6 +14,7 @@ pages:
- Demo 1 - Real-time Web: demo-1-web.md
- Demo 2 - Comparison: demo-2-comparison.md
- Demo 3 - Training a Classifier: demo-3-classifier.md
- Demo 4 - Real-time Sphere Visualization: demo-4-sphere.md
- User Guide:
- Usage and API Docs: usage.md
- Setup: setup.md
@@ -22,4 +23,4 @@ pages:
- Models and Accuracies: models-and-accuracies.md
- Training a DNN Model: training-new-models.md
- Visualizations: visualizations.md
- Release Notes: release-notes.md
- Release Notes: release-notes.md