Commit Graph

6946 Commits

Author SHA1 Message Date
Davis King 3a0a63da39 A little bit of cleanup 2017-12-08 10:16:08 -05:00
Davis King 07febbc9de Made this file executable 2017-12-08 10:09:50 -05:00
Davis King 4ff442324b cleaned up cmake 2017-12-08 10:09:30 -05:00
visionworkz ac292309c1 Exposed jitter_image in Python and added an example (#980)
* Exposed jitter_image in Python and added an example

* Return Numpy array directly

* Require numpy during setup

* Added install of Numpy before builds

* Changed pip install for user only due to security issues.

* Removed malloc

* Made presence of Numpy during compile optional.

* Conflict

* Refactored get_face_chip/get_face_chips to use Numpy as well.
2017-12-08 09:59:27 -05:00
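The PR above returns jittered copies of an image as NumPy arrays. As a rough illustration of what image jittering does (the name `jitter_image_sketch` and the shift-only transform are assumptions for this sketch; dlib's real `jitter_image` also applies small random rotations, scalings, and optional color disturbance):

```python
import numpy as np

def jitter_image_sketch(img, num_jitters=4, max_shift=2, rng=None):
    # Produce randomly shifted copies of an image as NumPy arrays.
    # This is only a conceptual stand-in for dlib's jitter_image.
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(num_jitters):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        # Shift the image by (dy, dx), wrapping at the borders.
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        out.append(shifted)
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
jitters = jitter_image_sketch(img, num_jitters=3, rng=0)
```

Returning NumPy arrays directly (rather than copying through an intermediate buffer) is what let the PR drop its earlier `malloc` usage.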
Davis King a865e63552 Fixed spelling error that broke cmake just now. 2017-12-08 09:39:52 -05:00
Davis King c451ac11b7 merged 2017-12-08 09:09:47 -05:00
Davis King a2e90af5b6 Added USE_NEON_INSTRUCTIONS cmake option. 2017-12-08 09:08:09 -05:00
Davis King 52972a7380 Fixed incorrect printing of blas warning message. 2017-12-08 09:07:49 -05:00
Kino f9af9f8bfd don't look for OpenMP with Apple Clang (#1002)
See issue:
https://github.com/davisking/dlib/issues/674
2017-12-07 16:53:29 -05:00
Davis King 27b2d03432 Merge 2017-12-07 08:16:52 -05:00
Davis King 0ff862aeb7 Changed the windows signaler and mutex code to use the C++11 thread library instead of
the old win32 functions.  I did this to work around how windows unloads dlls.  In particular, during dll unload
windows will kill all threads, THEN it will destruct global objects.  So this leads to problems
where a global object that owns threads tries to tell them to shut down and everything goes wrong.
The specific problem this code change fixes is when signaler::broadcast() is called
on a signaler that was being waited on by one of these abruptly killed threads.  In that case, the old
code would deadlock inside signaler::broadcast().  This new code doesn't seem to have that problem,
thereby mitigating the windows dll unload behavior in some situations.
2017-12-07 08:11:55 -05:00
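The signaler described above is a condition-variable wrapper: `broadcast()` wakes every thread waiting on it. A minimal Python analogue of that interface (an illustration of the semantics, not dlib's C++ code; class and method names mirror the commit's terminology):

```python
import threading

class Signaler:
    # Minimal condition-variable wrapper with wait()/broadcast(),
    # analogous to dlib's signaler interface.
    def __init__(self):
        self._cond = threading.Condition()
        self._flag = False

    def wait(self, timeout=None):
        with self._cond:
            # Block until broadcast() has been called.
            self._cond.wait_for(lambda: self._flag, timeout=timeout)

    def broadcast(self):
        with self._cond:
            self._flag = True
            # Wake every waiting thread at once.
            self._cond.notify_all()

s = Signaler()
workers = [threading.Thread(target=s.wait) for _ in range(3)]
for t in workers:
    t.start()
s.broadcast()
for t in workers:
    t.join(timeout=5)
```

Building on `std::condition_variable` (here, Python's `threading.Condition`) avoids hand-rolled win32 wait lists, which is what deadlocked when Windows abruptly killed a waiting thread during DLL unload.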
Davis King 95d16fd05e Added a missing assert. 2017-12-05 15:10:41 -05:00
Davis King c83838c3e5 Filled out the documentation 2017-12-04 21:47:43 -05:00
Davis King 354542685d merged 2017-12-04 17:11:18 -05:00
Davis King 71fca2a164 Fixed shape_predictor_trainer padding so that it behaves as it used to. In
dlib 19.7 the padding code was changed and accidentally doubled the size of the
applied padding when in the older (and still default) landmark_relative padding
mode.  It's not a huge deal either way, but this change reverts to the
intended behavior.
2017-12-04 17:09:18 -05:00
Davis King 7d91dfccaa Changed test_regression_function() and cross_validate_regression_trainer() to
output correlation rather than squared correlation.
2017-12-04 12:57:38 -05:00
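The difference between the two outputs is just Pearson correlation r versus its square. A small self-contained sketch of both quantities (helper name and sample data are made up for illustration):

```python
import math

def correlation(xs, ys):
    # Pearson correlation between predictions and targets --
    # the quantity now reported directly instead of its square.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

truth = [1.0, 2.0, 3.0, 4.0]
preds = [1.1, 1.9, 3.2, 3.8]
r = correlation(preds, truth)
r_squared = r * r  # what the old code reported
```

Since |r| <= 1, the squared value is always at most the unsquared one, so scores reported after this change read slightly lower for the same fit.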
Davis King 155bf30da9 Fixed a very small bias in rand::get_random_double() 2017-12-04 12:29:51 -05:00
Davis King b4644120e6 Fixed grammar 2017-12-02 15:21:23 -05:00
Davis King 3cffafad8c Changed example to use minimization rather than maximization. 2017-12-02 08:55:24 -05:00
Davis King 5e8e997bf7 Added python interface to find_min_global() 2017-12-02 08:55:02 -05:00
Davis King e273f5159d Added find_min_global() overloads. 2017-12-02 07:53:31 -05:00
Davis King 3284f5759a Updated to use thread_local instead of old thread_specific_data class. 2017-12-01 20:15:16 -05:00
Davis King 00910ac1d5 Fixed stack trace macro 2017-12-01 19:53:57 -05:00
Davis King 4485543e73 Minor cleanup 2017-12-01 07:17:09 -05:00
Facundo Galán f3ec2abb1b disjoint_subsets: add get_number_of_sets and get_size_of_set functions (#880)
* disjoint_subsets: make clear and size functions noexcept

* disjoint_subsets: add get_number_of_sets function, documentation and tests

* disjoint_subsets: add get_size_of_set function, documentation and tests

* Split disjoint_subsets in a hierarchy.

Modify disjoint_subsets to make it a valid base class.
Add new class disjoint_subsets_sized, with information about number of
subsets, and size of each subset.
Add necessary unit test for the new class.

* Replace tabs by spaces.

* Remove virtuals from disjoint_subsets, and modify
disjoint_subsets_sized.

Now disjoint_subsets_sized is implemented in terms of disjoint_subsets,
instead of inheriting from it.
2017-12-01 07:08:37 -05:00
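The PR above adds a sized variant of the disjoint-sets (union-find) structure that also tracks how many subsets exist and how large each one is. A Python sketch of that idea, mirroring the `get_number_of_sets()`/`get_size_of_set()` names from the PR (this is an illustration of the data structure, not dlib's C++ implementation):

```python
class DisjointSubsetsSized:
    # Union-find with union by size and path halving, plus
    # bookkeeping for the number of sets and per-set sizes.
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.num_sets = n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def merge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra          # attach smaller tree under larger
        self.size[ra] += self.size[rb]
        self.num_sets -= 1

    def get_number_of_sets(self):
        return self.num_sets

    def get_size_of_set(self, x):
        return self.size[self.find(x)]

d = DisjointSubsetsSized(5)
d.merge(0, 1)
d.merge(1, 2)
```

Implementing the sized class in terms of the plain one (composition rather than inheritance, as the final PR revision does) keeps the base class free of virtual-call overhead.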
Davis King 15c04ab224 This example still uses a lot of visual studio ram. 2017-12-01 00:26:31 -05:00
Davis King 2b3d8609e5 These examples compile now in visual studio due to the recent pragma directive added to core.h. 2017-11-30 22:38:29 -05:00
Davis King c409c363d3 Added a pragma statement that tells visual studio to not recursively inline
functions very much when using the dnn tools, since otherwise it will sometimes
take hours to compile.
2017-11-30 22:14:04 -05:00
Davis King 4fa32903a6 clarified docs 2017-11-25 19:28:53 -05:00
Davis King 4d0b203541 Just moved the try block to reduce the indentation level. 2017-11-25 12:25:35 -05:00
Davis King 929870d3ad Updated example to use C++11 style code and also to show the new find_max_global() routine. 2017-11-25 12:23:43 -05:00
Davis King ed9beffa33 Added python example for find_max_global() 2017-11-25 10:23:36 -05:00
Davis King c3f2c1dfcb Added a python interface to find_max_global() 2017-11-25 10:07:00 -05:00
Davis King e302a61e30 Clarified spec 2017-11-25 09:57:33 -05:00
Davis King 89d3fe4ee5 Fixed find_max_global() overload that was ignoring one of its arguments. 2017-11-25 09:29:36 -05:00
Davis King 1aa6667481 Switched this example to use the svm C instead of nu trainer. 2017-11-25 08:26:16 -05:00
Davis King 0e7e433096 Minor changes to avoid compiler warnings. 2017-11-25 08:07:36 -05:00
Davis King f3ecb81f5e Changed default solver epsilon so that it will solve to full floating point
precision by default.  If the user is OK with less precision they can change
it.
2017-11-25 08:00:08 -05:00
Davis King 2ad9cd7843 Fixing code for visual studio 2017-11-25 07:42:48 -05:00
Davis King 04991b7da6 Made this example program use the new find_max_global() instead of grid search
and BOBYQA.  This greatly simplifies the example.
2017-11-24 22:04:25 -05:00
Davis King fc6cce9f89 Made find_max_global() automatically apply a log-scale transform to variables
that obviously need it.
2017-11-24 21:44:09 -05:00
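The idea behind the log-scale transform: when a variable's bounds span many orders of magnitude (e.g. an SVM C searched over [1e-5, 1e5]), optimizing in log space gives small values as much attention as large ones. A hypothetical sketch of such a heuristic (the function name and the ratio threshold are assumptions; the exact rule dlib uses may differ):

```python
import math

def maybe_log_scale(lower, upper, ratio_threshold=1000):
    # If the bounds are positive and span several orders of
    # magnitude, search in log space instead.  Returns the
    # (possibly transformed) bounds and whether scaling applied.
    if lower > 0 and upper / lower >= ratio_threshold:
        return math.log(lower), math.log(upper), True
    return lower, upper, False

lo, hi, scaled = maybe_log_scale(1e-5, 1e5)
```

With this transform the optimizer samples exponents uniformly, so candidate values like 1e-4 and 1e4 are equally likely rather than the search being dominated by the large end of the range.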
Davis King 99621934ff Added more docs 2017-11-24 17:22:37 -05:00
Davis King 1fbd1828ab Cleaned up the code a bit. 2017-11-24 09:56:26 -05:00
Davis King 0d9043bc09 Renamed find_global_maximum() to find_max_global() 2017-11-23 11:43:32 -05:00
Davis King 31875cfd45 More cleanup and added more tests 2017-11-23 11:30:01 -05:00
Davis King a6fd69298a Fixed spec 2017-11-23 10:57:00 -05:00
Davis King ec7a4af1d5 A bit of cleanup and documentation 2017-11-23 10:03:11 -05:00
Davis King e7e5d23802 Fixed and clarified spec 2017-11-23 07:59:10 -05:00
Davis King 66a5a9c407 merged 2017-11-22 13:13:57 -05:00
Davis King 9a8f312127 Made the loss dumping between learning rate changes a little more relaxed. In
particular, rather than just dumping exactly 400 of the last loss values, it
now dumps 400 + 10% of the loss buffer.  This way, the amount of the dump is
proportional to the steps-without-progress threshold.  This is better because
when the user sets that threshold to something larger, more loss values are
probably needed to determine that training should stop, so dumping more in
that case ought to be better.
2017-11-22 13:06:19 -05:00
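The arithmetic of the new dumping rule can be sketched in a few lines (function name and the example threshold values are made up for illustration; only the "400 plus 10% of the buffer" rule comes from the commit):

```python
def num_losses_to_dump(steps_without_progress_threshold,
                       fixed=400, fraction=0.10):
    # Dump 400 recent loss values plus 10% of the loss buffer,
    # whose length tracks the steps-without-progress threshold.
    return fixed + int(fraction * steps_without_progress_threshold)

small = num_losses_to_dump(2000)   # a modest threshold
large = num_losses_to_dump(10000)  # a more patient stopping criterion
```

So a user who raises the steps-without-progress threshold automatically gets a proportionally larger dump of recent losses, instead of the fixed 400 the old code kept.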