From fda756543006b485079b3477e854ffa94771953c Mon Sep 17 00:00:00 2001
From: Davis King
Date: Fri, 28 May 2010 13:34:34 +0000
Subject: [PATCH] Made spec more clear

--HG--
extra : convert_revision : svn%3Afdd8eb12-d10e-0410-9acb-85c331704f74/trunk%403645
---
 .../linear_manifold_regularizer_abstract.h         | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/dlib/manifold_regularization/linear_manifold_regularizer_abstract.h b/dlib/manifold_regularization/linear_manifold_regularizer_abstract.h
index c6fc4bb66..96ee34af3 100644
--- a/dlib/manifold_regularization/linear_manifold_regularizer_abstract.h
+++ b/dlib/manifold_regularization/linear_manifold_regularizer_abstract.h
@@ -26,9 +26,9 @@ namespace dlib
             - dimensionality() == 0
 
         WHAT THIS OBJECT REPRESENTS
-            Many learning algorithms attempt to minimize a loss function that,
-            at a high level, looks like this:
-                loss(w) == complexity + training_set_error
+            Many learning algorithms attempt to minimize a function that, at a high
+            level, looks like this:
+                f(w) == complexity + training_set_error
 
             The idea is to find the set of parameters, w, that gives low error on
             your training data but also is not "complex" according to some particular
@@ -40,12 +40,12 @@ namespace dlib
             The idea of manifold regularization is to extract useful information from
             unlabeled data by first defining which data samples are "close" to each other
             (perhaps by using their 3 nearest neighbors) and then adding a term to
-            the loss function that penalizes any decision rule which produces
+            the above function that penalizes any decision rule which produces
             different outputs on data samples which we have designated as being close.
 
-            It turns out that it is possible to transform these manifold regularized loss
-            functions into the normal form shown above by applying a certain kind of
-            preprocessing to all our data samples.  Once this is done we can use a
+            It turns out that it is possible to transform these manifold regularized
+            learning problems into the normal form shown above by applying a certain kind
+            of preprocessing to all our data samples.  Once this is done we can use a
             normal learning algorithm, such as the svm_c_linear_trainer, on just the
             labeled data samples and obtain the same output as the manifold regularized
             learner would have produced.
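
For context, the preprocessing workflow described in the updated comment might look
roughly like the sketch below (not part of the patch).  The helper function name, the
neighbor count of 3, the gaussian weight parameter, and the regularization strength
are illustrative assumptions; the dlib calls follow the linear_manifold_regularizer
and svm_c_linear_trainer interfaces.

    #include <dlib/manifold_regularization.h>
    #include <dlib/graph_utils.h>
    #include <dlib/svm.h>
    #include <vector>

    using namespace dlib;

    typedef matrix<double,0,1> sample_type;
    typedef linear_kernel<sample_type> kernel_type;

    // Hypothetical helper: samples holds all data, labeled samples first,
    // and labels holds +1/-1 labels for the first labels.size() samples.
    decision_function<kernel_type> preprocess_and_train (
        std::vector<sample_type>& samples,
        const std::vector<double>& labels
    )
    {
        // Define which samples are "close" by linking each sample to its
        // 3 nearest neighbors, as the comment suggests.
        std::vector<sample_pair> edges;
        find_k_nearest_neighbors(samples, squared_euclidean_distance(), 3, edges);

        // Build the manifold regularizer from this graph.  The weight function
        // decides how strongly each edge ties its two samples together.
        linear_manifold_regularizer<sample_type> lmr;
        lmr.build(samples, edges, use_gaussian_weights(0.1));

        // This matrix is the "certain kind of preprocessing": applying it to
        // every sample folds the manifold penalty into the data itself.
        const matrix<double> trans = lmr.get_transformation_matrix(10000);
        for (unsigned long i = 0; i < samples.size(); ++i)
            samples[i] = trans*samples[i];

        // Now a normal learning algorithm on just the labeled (transformed)
        // samples gives the same output the manifold regularized learner
        // would have produced.
        const std::vector<sample_type> labeled(samples.begin(),
                                               samples.begin() + labels.size());
        svm_c_linear_trainer<kernel_type> trainer;
        return trainer.train(labeled, labels);
    }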