From 8de1a1ed6a6288495afa29b1139348a4b16332f0 Mon Sep 17 00:00:00 2001
From: Davis King
Date: Sun, 10 Sep 2017 22:22:55 -0400
Subject: [PATCH] Improved citations

---
 examples/face_landmark_detection_ex.cpp | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/examples/face_landmark_detection_ex.cpp b/examples/face_landmark_detection_ex.cpp
index 65261a205..efe663514 100644
--- a/examples/face_landmark_detection_ex.cpp
+++ b/examples/face_landmark_detection_ex.cpp
@@ -8,13 +8,23 @@
 
 
 
-    This face detector is made using the classic Histogram of Oriented
+    The face detector we use is made using the classic Histogram of Oriented
     Gradients (HOG) feature combined with a linear classifier, an image
     pyramid, and sliding window detection scheme.  The pose estimator was
     created by using dlib's implementation of the paper:
-        One Millisecond Face Alignment with an Ensemble of Regression Trees by
-        Vahid Kazemi and Josephine Sullivan, CVPR 2014
-    and was trained on the iBUG 300-W face landmark dataset.
+        One Millisecond Face Alignment with an Ensemble of Regression Trees by
+        Vahid Kazemi and Josephine Sullivan, CVPR 2014
+    and was trained on the iBUG 300-W face landmark dataset (see
+    https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/):
+        C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, M. Pantic.
+        300 Faces In-The-Wild Challenge: Database and results.
+        Image and Vision Computing (IMAVIS), Special Issue on Facial Landmark Localisation "In-The-Wild". 2016.
+    You can get the trained model file from:
+    http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
+    Note that the license for the iBUG 300-W dataset excludes commercial use.
+    So you should contact Imperial College London to find out if it's OK for
+    you to use this model file in a commercial product.
+
 
     Also, note that you can train your own models using dlib's machine learning
     tools.  See train_shape_predictor_ex.cpp to see an example.
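
For reference, below is a minimal usage sketch (not part of the patch) showing how the model file cited above is typically loaded and run with dlib's documented C++ API. It assumes dlib is installed, that shape_predictor_68_face_landmarks.dat has been downloaded from the URL in the comment and decompressed into the working directory, and that an image path is passed on the command line; the full, authoritative version is the examples/face_landmark_detection_ex.cpp file this patch edits.

#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <dlib/image_io.h>
#include <iostream>

int main(int argc, char** argv)
{
    using namespace dlib;

    if (argc != 2)
    {
        std::cout << "usage: ./program <image file>" << std::endl;
        return 1;
    }

    // The HOG-based frontal face detector described in the comment above.
    frontal_face_detector detector = get_frontal_face_detector();

    // The shape predictor trained on the iBUG 300-W dataset.  The .dat file
    // comes from decompressing shape_predictor_68_face_landmarks.dat.bz2
    // (assumed to be in the working directory).
    shape_predictor sp;
    deserialize("shape_predictor_68_face_landmarks.dat") >> sp;

    // Load the image given on the command line.
    array2d<rgb_pixel> img;
    load_image(img, argv[1]);

    // Run the detector, then locate the 68 landmarks inside each face box.
    std::vector<rectangle> dets = detector(img);
    for (const rectangle& det : dets)
    {
        full_object_detection shape = sp(img, det);
        std::cout << "landmarks found: " << shape.num_parts() << std::endl;
        std::cout << "first landmark:  " << shape.part(0) << std::endl;
    }
    return 0;
}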