* Exposed jitter_image in Python and added an example (see the C++ sketch below)
* Return a NumPy array directly
* Require NumPy during setup
* Added install of NumPy before builds
* Changed pip install to a user-only install due to security issues
* Removed malloc
* Made the presence of NumPy during compilation optional
* Resolved merge conflict
* Refactored get_face_chip/get_face_chips to use NumPy as well
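These bindings wrap dlib's C++ image routines. Below is a minimal C++ sketch of what the new
Python dlib.jitter_image() is assumed to do under the hood (the helper name make_jitters and the
assumption that dlib::jitter_image(img, rnd) is the wrapped call are mine):

    #include <dlib/image_transforms.h>
    #include <dlib/matrix.h>
    #include <dlib/rand.h>
    #include <cstddef>
    #include <vector>

    // Returns num_jitters randomly perturbed copies of img; the Python binding is
    // assumed to do the same and hand the results back as NumPy arrays.
    std::vector<dlib::matrix<dlib::rgb_pixel>> make_jitters(
        const dlib::matrix<dlib::rgb_pixel>& img, std::size_t num_jitters)
    {
        dlib::rand rnd;
        std::vector<dlib::matrix<dlib::rgb_pixel>> out;
        for (std::size_t i = 0; i < num_jitters; ++i)
            out.push_back(dlib::jitter_image(img, rnd));  // random rotate/scale/translate/mirror
        return out;
    }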
Changed the Windows signaler and mutex code so it no longer uses the old win32 functions. I did this
to work around how Windows unloads DLLs. In particular, during DLL unload Windows will kill all
threads, THEN it will destruct global objects. This leads to problems where a global object that
owns threads tries to tell them to shut down and everything goes wrong.
The specific problem this code change fixes is when signaler::broadcast() is called on a signaler
that was being waited on by one of these abruptly killed threads. In that case, the old code would
deadlock inside signaler::broadcast(). The new code doesn't seem to have that problem, thereby
mitigating the Windows DLL unload behavior in some situations.
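As a rough illustration of the failure mode described above (global_worker and its members are
hypothetical names made up for this sketch; only dlib::mutex, dlib::signaler, and dlib::auto_mutex
are actual dlib types):

    #include <dlib/threads.h>
    #include <thread>

    struct global_worker
    {
        dlib::mutex m;
        dlib::signaler sig{m};      // a signaler is always tied to a mutex
        bool should_stop = false;
        std::thread worker{[this]{ run(); }};

        void run()
        {
            dlib::auto_mutex lock(m);
            while (!should_stop)
                sig.wait();         // a thread parked here is what Windows kills during DLL unload
        }

        ~global_worker()
        {
            // On Windows DLL unload this destructor runs AFTER the worker thread has already
            // been killed.  With the old code, broadcast() on a signaler the dead thread was
            // waiting on could deadlock; the rewritten code is meant to avoid that.
            { dlib::auto_mutex lock(m); should_stop = true; }
            sig.broadcast();
            worker.join();
        }
    };

    static global_worker the_worker;    // a global object that owns a thread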
In dlib 19.7 the padding code was changed and accidentally doubled the size of the applied padding
when in the older (and still default) landmark_relative padding mode. It's not a huge deal either
way, but this change reverts to the intended behavior.
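For reference, the padding value in question is the one passed through calls like the following
(a hedged sketch; img and shape stand in for an image and a full_object_detection produced
elsewhere, and the assumption that get_face_chip_details() is the affected code path is mine):

    #include <dlib/image_processing.h>
    #include <dlib/image_transforms.h>

    void make_chip(const dlib::matrix<dlib::rgb_pixel>& img,
                   const dlib::full_object_detection& shape)
    {
        // padding = 0.25 in the default landmark_relative mode; this fix restores the
        // pre-19.7 meaning of that value instead of effectively applying it twice.
        auto details = dlib::get_face_chip_details(shape, 150, 0.25);

        dlib::matrix<dlib::rgb_pixel> chip;
        dlib::extract_image_chip(img, details, chip);
    }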
* disjoint_subsets: make clear and size functions noexcept
* disjoint_subsets: add get_number_of_sets function, documentation, and tests
* disjoint_subsets: add get_size_of_set function, documentation, and tests
* Split disjoint_subsets into a hierarchy.
Modify disjoint_subsets to make it a valid base class.
Add the new class disjoint_subsets_sized, which also tracks the number of
subsets and the size of each subset.
Add the necessary unit tests for the new class.
* Replace tabs with spaces.
* Remove virtuals from disjoint_subsets, and modify
disjoint_subsets_sized.
Now disjoint_subsets_sized is implemented in terms of disjoint_subsets
instead of inheriting from it (see the usage sketch below).
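A minimal usage sketch of the new class, assuming <dlib/disjoint_subsets.h> pulls in
disjoint_subsets_sized (the item values are arbitrary):

    #include <dlib/disjoint_subsets.h>
    #include <iostream>

    int main()
    {
        // disjoint_subsets_sized behaves like disjoint_subsets but also tracks how many
        // sets exist and how many items each one contains.
        dlib::disjoint_subsets_sized s;
        s.set_size(5);                                   // items 0..4, each in its own set

        const auto a = s.find_set(1);
        const auto b = s.find_set(3);
        const auto merged = s.merge_sets(a, b);          // merge_sets takes set representatives

        std::cout << s.get_number_of_sets() << "\n";     // 4 sets remain
        std::cout << s.get_size_of_set(merged) << "\n";  // the merged set holds 2 items
    }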
In particular, rather than just dumping exactly 400 of the last loss values, it
now dumps 400 plus 10% of the loss buffer. This way, the size of the dump is
proportional to the steps without progress threshold. This is better because
when the user sets the steps without progress threshold to something larger it
probably means more loss values need to be examined to decide that training
should stop, so dumping more in that case ought to be better.
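Roughly, the new rule works out to something like this (a sketch only; the function name and the
clamping are illustrative, not the trainer's actual code):

    #include <algorithm>
    #include <cstddef>

    // 400 recent loss values plus 10% of the whole loss buffer, so a larger
    // "steps without progress" threshold yields a proportionally larger dump.
    std::size_t num_losses_to_dump(std::size_t loss_buffer_size)
    {
        return std::min(loss_buffer_size,
                        static_cast<std::size_t>(400) + loss_buffer_size / 10);
    }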