updated docs

This commit is contained in:
Davis King 2014-08-23 21:24:01 -04:00
parent 954612b67c
commit 946601ec2e
4 changed files with 119 additions and 9 deletions

Binary image file added (110 KiB; not shown).

@@ -120,6 +120,7 @@
<section>
<name>Feature Extraction</name>
<item>get_surf_points</item>
<item>shape_predictor</item>
<item nolink="true">
<name>SURF Tools</name>
<sub>
@@ -207,23 +208,23 @@
</section>
<section>
<name>Colormaps</name>
<name>Visualization</name>
<item>randomly_color_image</item>
<item>heatmap</item>
<item>jet</item>
</section>
<section>
<name>Miscellaneous</name>
<item>cv_image</item>
<item>toMat</item>
<item>render_face_detections</item>
<item>draw_line</item>
<item>draw_surf_points</item>
<item>draw_rectangle</item>
<item>tile_images</item>
<item>draw_fhog</item>
<item>fill_rect</item>
</section>
<section>
<name>Miscellaneous</name>
<item>cv_image</item>
<item>toMat</item>
<item>assign_image</item>
<item>assign_image_scaled</item>
<item>assign_all_pixels</item>
@@ -362,6 +363,41 @@
</component>
<!-- ************************************************************************* -->
<component>
<name>shape_predictor</name>
<file>dlib/image_processing.h</file>
<spec_file link="true">dlib/image_processing/shape_predictor_abstract.h</spec_file>
<description>
This object is a tool that takes in an image region containing some object
and outputs a "shape" or set of point locations that define the pose of the
object. The classic example of this is human face pose prediction, where
you take an image of a human face as input and are expected to identify the
locations of important facial landmarks such as the corners of the mouth
and eyes, tip of the nose, and so forth. For example, here is the output
of dlib's <a href="http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2">68-face-landmark shape_predictor</a> on an image from the HELEN dataset: <br/><br/>
<img src='face_landmarking_example.png'/>
<br/><br/>
To create useful instantiations of this object you need to use the
<a href="ml.html#shape_predictor_trainer">shape_predictor_trainer</a> object to train a
shape_predictor using a set of training images, each annotated with shapes you want to predict.
To do this, the shape_predictor_trainer uses the state-of-the-art method from the
paper:
<blockquote>
One Millisecond Face Alignment with an Ensemble of Regression Trees
by Vahid Kazemi and Josephine Sullivan, CVPR 2014
</blockquote>
</description>
<examples>
<example>face_landmark_detection_ex.cpp.html</example>
<example>train_shape_predictor_ex.cpp.html</example>
</examples>
</component>
<!-- ************************************************************************* -->
<component>
@@ -1020,6 +1056,31 @@
</component>
<!-- ************************************************************************* -->
<component>
<name>render_face_detections</name>
<file>dlib/image_processing/render_face_detections.h</file>
<spec_file link="true">dlib/image_processing/render_face_detections_abstract.h</spec_file>
<description>
This function takes a set of <a href="#full_object_detection">full_object_detections</a>
which represent human faces annotated with 68 facial landmarks (according to the iBUG 300-W
scheme) and converts them into a form suitable for display on an
<a href="dlib/gui_widgets/widgets_abstract.h.html#image_window">image_window</a>.
<p>
For example, it will take the output of a <a href="#shape_predictor">shape_predictor</a>
that uses this facial landmarking scheme and will produce visualizations like this:
</p>
<img src='face_landmarking_example.png'/>
</description>
<examples>
<example>face_landmark_detection_ex.cpp.html</example>
</examples>
</component>
<!-- ************************************************************************* -->
<component>

View File

@@ -104,6 +104,7 @@ Davis E. King. <a href="http://jmlr.csail.mit.edu/papers/volume10/king09a/king09
<item>structural_track_association_trainer</item>
<item>structural_graph_labeling_trainer</item>
<item>svm_rank_trainer</item>
<item>shape_predictor_trainer</item>
</section>
<section>
<name>Clustering</name>
@@ -163,6 +164,7 @@ Davis E. King. <a href="http://jmlr.csail.mit.edu/papers/volume10/king09a/king09
<item>test_track_association_function</item>
<item>test_graph_labeling_function</item>
<item>test_ranking_function</item>
<item>test_shape_predictor</item>
<item>average_precision</item>
</section>
@@ -1315,6 +1317,30 @@ Davis E. King. <a href="http://jmlr.csail.mit.edu/papers/volume10/king09a/king09
</component>
<!-- ************************************************************************* -->
<component>
<name>shape_predictor_trainer</name>
<file>dlib/image_processing.h</file>
<spec_file link="true">dlib/image_processing/shape_predictor_abstract.h</spec_file>
<description>
This object is a tool for training <a href="imaging.html#shape_predictor">shape_predictors</a>
based on annotated training images. Its implementation uses the algorithm described in:
<blockquote>
One Millisecond Face Alignment with an Ensemble of Regression Trees
by Vahid Kazemi and Josephine Sullivan, CVPR 2014
</blockquote>
It is capable of learning high-quality shape models. For example, here is the output
for one of the faces in the HELEN face dataset: <br/><br/>
<img src='face_landmarking_example.png'/>
</description>
<examples>
<example>train_shape_predictor_ex.cpp.html</example>
</examples>
</component>
<!-- ************************************************************************* -->
<component>
@@ -2684,6 +2710,25 @@ Davis E. King. <a href="http://jmlr.csail.mit.edu/papers/volume10/king09a/king09
</component>
<!-- ************************************************************************* -->
<component>
<name>test_shape_predictor</name>
<file>dlib/image_processing.h</file>
<spec_file link="true">dlib/image_processing/shape_predictor_abstract.h</spec_file>
<description>
Tests a <a href="imaging.html#shape_predictor">shape_predictor</a>'s ability to correctly
predict the part locations of objects. The output is the average distance (measured in pixels) between
each part and its true location. You can optionally normalize each distance using a
user-supplied scale. For example, when performing face landmarking, you might want to
normalize the distances by the interocular distance.
</description>
<examples>
<example>train_shape_predictor_ex.cpp.html</example>
</examples>
</component>
<!-- ************************************************************************* -->
<component>


@@ -343,6 +343,10 @@
<term file="ml.html" name="svm_c_linear_trainer" include="dlib/svm.h"/>
<term file="ml.html" name="svm_c_linear_dcd_trainer" include="dlib/svm.h"/>
<term file="ml.html" name="svm_rank_trainer" include="dlib/svm.h"/>
<term file="ml.html" name="shape_predictor_trainer" include="dlib/image_processing.h"/>
<term file="ml.html" name="test_shape_predictor" include="dlib/image_processing.h"/>
<term file="imaging.html" name="shape_predictor" include="dlib/image_processing.h"/>
<term file="imaging.html" name="render_face_detections" include="dlib/image_processing/render_face_detections.h"/>
<term file="ml.html" name="ranking_pair" include="dlib/svm.h"/>
<term file="ml.html" name="is_ranking_problem" include="dlib/svm.h"/>
<term file="ml.html" name="count_ranking_inversions" include="dlib/svm.h"/>