clarified docs

Davis King 2018-07-03 09:41:22 -04:00
parent 0f169ed71f
commit 93b83677a7
1 changed file with 63 additions and 68 deletions

@@ -245,6 +245,57 @@ function_evaluation py_function_evaluation(
void bind_global_optimization(py::module& m)
{
+const char* docstring =
+"requires \n\
+- len(bound1) == len(bound2) == len(is_integer_variable) \n\
+- for all valid i: bound1[i] != bound2[i] \n\
+- solver_epsilon >= 0 \n\
+- f() is a real-valued multi-variate function. It must take scalar real \n\
+numbers as its arguments and the number of arguments must be len(bound1). \n\
+ensures \n\
+- This function performs global optimization on the given f() function. \n\
+The goal is to maximize the following objective function: \n\
+f(x) \n\
+subject to the constraints: \n\
+min(bound1[i],bound2[i]) <= x[i] <= max(bound1[i],bound2[i]) \n\
+if (is_integer_variable[i]) then x[i] is an integer value (but still \n\
+represented with float type). \n\
+- find_max_global() runs until it has called f() num_function_calls times. \n\
+Then it returns the best x it has found along with the corresponding output \n\
+of f(). That is, it returns (best_x_seen,f(best_x_seen)). Here best_x_seen \n\
+is a list containing the best arguments to f() this function has found. \n\
+- find_max_global() uses a global optimization method based on a combination of \n\
+non-parametric global function modeling and quadratic trust region modeling \n\
+to efficiently find a global maximizer. It usually does a good job with a \n\
+relatively small number of calls to f(). For more information on how it \n\
+works, read the documentation for dlib's global_function_search object. \n\
+However, one notable element is the solver epsilon, which you can adjust. \n\
+\n\
+The search procedure will only attempt to find a global maximizer to at most \n\
+solver_epsilon accuracy. Once a local maximizer is found to that accuracy \n\
+the search will focus entirely on finding other maxima elsewhere rather than \n\
+on further improving the best local maximum found so far. That is, once a \n\
+local maximum is identified to about solver_epsilon accuracy, the algorithm \n\
+will spend all its time exploring the function to find other local maxima to \n\
+investigate. An epsilon of 0 means it will keep solving until it reaches \n\
+full floating point precision. Larger values will cause it to switch to pure \n\
+global exploration sooner and might therefore be more effective if your \n\
+objective function has many local maxima and you don't need a very high \n\
+precision solution. \n\
+- Any variables that satisfy the following conditions are optimized on a log-scale: \n\
+- The lower bound on the variable is > 0 \n\
+- The ratio of the upper bound to the lower bound is > 1000 \n\
+- The variable is not an integer variable \n\
+We do this because it's common to optimize machine learning models that have \n\
+parameters with bounds in a range such as [1e-5 to 1e10] (e.g. the SVM C \n\
+parameter) and it's much more appropriate to optimize these kinds of \n\
+variables on a log scale. So we transform them by applying log() to \n\
+them and then undo the transform via exp() before invoking the function \n\
+being optimized. Therefore, this transformation is invisible to the user \n\
+supplied functions. In most cases, it improves the efficiency of the \n\
+optimizer.";
/*!
requires
- len(bound1) == len(bound2) == len(is_integer_variable)
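
Aside: the docstring added above fully specifies the Python-level contract of find_max_global(). A minimal usage sketch follows; it assumes a dlib build that includes these bindings, and the objective f() and all numeric values are invented for illustration.

import dlib

# Toy objective: maximized at x0 = 2.0 and x1 = 3, so find_max_global()
# should return best_x close to [2.0, 3.0]. The function takes one scalar
# argument per bound entry, as the docstring requires.
def f(x0, x1):
    return -(x0 - 2.0)**2 - (x1 - 3.0)**2

best_x, best_y = dlib.find_max_global(
    f,
    [-10, 0],       # bound1
    [10, 5],        # bound2
    [False, True],  # is_integer_variable: x1 is integer-constrained
    50,             # num_function_calls
    0.001)          # solver_epsilon

print(best_x)  # best arguments seen, roughly [2.0, 3.0]
print(best_y)  # f(best_x), the best objective value seen
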
@@ -258,7 +309,8 @@ void bind_global_optimization(py::module& m)
f(x)
subject to the constraints:
min(bound1[i],bound2[i]) <= x[i] <= max(bound1[i],bound2[i])
-if (is_integer_variable[i]) then x[i] is an integer.
+if (is_integer_variable[i]) then x[i] is an integer value (but still
+represented with float type).
- find_max_global() runs until it has called f() num_function_calls times.
Then it returns the best x it has found along with the corresponding output
of f(). That is, it returns (best_x_seen,f(best_x_seen)). Here best_x_seen
@@ -294,83 +346,26 @@ void bind_global_optimization(py::module& m)
supplied functions. In most cases, it improves the efficiency of the
optimizer.
!*/
-{
-m.def("find_max_global", &py_find_max_global,
-"requires \n\
-- len(bound1) == len(bound2) == len(is_integer_variable) \n\
-- for all valid i: bound1[i] != bound2[i] \n\
-- solver_epsilon >= 0 \n\
-- f() is a real valued multi-variate function. It must take scalar real \n\
-numbers as its arguments and the number of arguments must be len(bound1). \n\
-ensures \n\
-- This function performs global optimization on the given f() function. \n\
-The goal is to maximize the following objective function: \n\
-f(x) \n\
-subject to the constraints: \n\
-min(bound1[i],bound2[i]) <= x[i] <= max(bound1[i],bound2[i]) \n\
-if (is_integer_variable[i]) then x[i] is an integer. \n\
-- find_max_global() runs until it has called f() num_function_calls times. \n\
-Then it returns the best x it has found along with the corresponding output \n\
-of f(). That is, it returns (best_x_seen,f(best_x_seen)). Here best_x_seen \n\
-is a list containing the best arguments to f() this function has found. \n\
-- find_max_global() uses a global optimization method based on a combination of \n\
-non-parametric global function modeling and quadratic trust region modeling \n\
-to efficiently find a global maximizer. It usually does a good job with a \n\
-relatively small number of calls to f(). For more information on how it \n\
-works read the documentation for dlib's global_function_search object. \n\
-However, one notable element is the solver epsilon, which you can adjust. \n\
-\n\
-The search procedure will only attempt to find a global maximizer to at most \n\
-solver_epsilon accuracy. Once a local maximizer is found to that accuracy \n\
-the search will focus entirely on finding other maxima elsewhere rather than \n\
-on further improving the current local optima found so far. That is, once a \n\
-local maxima is identified to about solver_epsilon accuracy, the algorithm \n\
-will spend all its time exploring the function to find other local maxima to \n\
-investigate. An epsilon of 0 means it will keep solving until it reaches \n\
-full floating point precision. Larger values will cause it to switch to pure \n\
-global exploration sooner and therefore might be more effective if your \n\
-objective function has many local maxima and you don't care about a super \n\
-high precision solution. \n\
-- Any variables that satisfy the following conditions are optimized on a log-scale: \n\
-- The lower bound on the variable is > 0 \n\
-- The ratio of the upper bound to lower bound is > 1000 \n\
-- The variable is not an integer variable \n\
-We do this because it's common to optimize machine learning models that have \n\
-parameters with bounds in a range such as [1e-5 to 1e10] (e.g. the SVM C \n\
-parameter) and it's much more appropriate to optimize these kinds of \n\
-variables on a log scale. So we transform them by applying log() to \n\
-them and then undo the transform via exp() before invoking the function \n\
-being optimized. Therefore, this transformation is invisible to the user \n\
-supplied functions. In most cases, it improves the efficiency of the \n\
-optimizer."
-,
-py::arg("f"), py::arg("bound1"), py::arg("bound2"), py::arg("is_integer_variable"), py::arg("num_function_calls"), py::arg("solver_epsilon")=0
-);
-}
m.def("find_max_global", &py_find_max_global, docstring, py::arg("f"),
py::arg("bound1"), py::arg("bound2"), py::arg("is_integer_variable"),
py::arg("num_function_calls"), py::arg("solver_epsilon")=0);
{
m.def("find_max_global", &py_find_max_global2,
"This function simply calls the other version of find_max_global() with is_integer_variable set to False for all variables.",
py::arg("f"), py::arg("bound1"), py::arg("bound2"), py::arg("num_function_calls"), py::arg("solver_epsilon")=0
);
}
py::arg("f"), py::arg("bound1"), py::arg("bound2"), py::arg("num_function_calls"),
py::arg("solver_epsilon")=0);
{
m.def("find_min_global", &py_find_min_global,
"This function is just like find_max_global(), except it performs minimization rather than maximization."
,
py::arg("f"), py::arg("bound1"), py::arg("bound2"), py::arg("is_integer_variable"), py::arg("num_function_calls"), py::arg("solver_epsilon")=0
);
}
"This function is just like find_max_global(), except it performs minimization rather than maximization.",
py::arg("f"), py::arg("bound1"), py::arg("bound2"), py::arg("is_integer_variable"),
py::arg("num_function_calls"), py::arg("solver_epsilon")=0);
{
m.def("find_min_global", &py_find_min_global2,
"This function simply calls the other version of find_min_global() with is_integer_variable set to False for all variables.",
py::arg("f"), py::arg("bound1"), py::arg("bound2"), py::arg("num_function_calls"), py::arg("solver_epsilon")=0
);
}
py::arg("f"), py::arg("bound1"), py::arg("bound2"), py::arg("num_function_calls"),
py::arg("solver_epsilon")=0);
// -------------------------------------------------
// -------------------------------------------------
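
Aside: for completeness, a hedged sketch of the minimization counterpart bound above, also exercising the log-scale rule from the docstring. The objective g() and its bounds are invented for illustration.

import math
import dlib

# The first variable's bounds are [1e-5, 1e10]: the lower bound is > 0,
# the upper/lower ratio is far above 1000, and it is not an integer
# variable, so per the docstring the solver searches it on a log scale.
# That transform is invisible here: g() receives the raw value.
def g(c, gamma):
    return (math.log10(c) - 3.0)**2 + (gamma - 0.5)**2

best_x, best_y = dlib.find_min_global(
    g,
    [1e-5, 0.0],  # bound1 (lower bounds)
    [1e10, 1.0],  # bound2 (upper bounds)
    60)           # num_function_calls; this overload has no
                  # is_integer_variable, so all variables are continuous

print(best_x, best_y)  # best_x should approach [1000.0, 0.5]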