be smaller. Instead, they now behave like std::vector in that they just change
their nominal size but keep the same memory, only reallocating if they are
resized to something larger than their underlying memory block.
This change makes some uses of dlib faster. In particular, running networks on
a large set of images of differing sizes will now run faster since there won't
be any GPU reallocations, which are notoriously slow.
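For illustration, a minimal sketch of the new behavior using dlib's
resizable_tensor (the sizes here are arbitrary):

    #include <dlib/dnn.h>
    using namespace dlib;

    int main()
    {
        resizable_tensor t;
        t.set_size(1, 3, 256, 256);  // allocates an underlying memory block
        t.set_size(1, 3, 128, 128);  // shrinking: nominal size changes, but
                                     // the same block is kept; no reallocation
        t.set_size(1, 3, 512, 512);  // larger than the underlying block: only
                                     // now does a reallocation happen
    }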
when we switched everything to std::shared_ptr. It turns out std::shared_ptr
has some surprising limitations. This change fixes a bug where the program
would sometimes crash or hang during program shutdown.
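The notes don't say which limitation bit here, but a classic shutdown hazard
with std::shared_ptr looks like the following (an illustration only, not
dlib's actual code): a detached thread shares ownership of a static-lifetime
object, so whichever owner releases its reference last runs the destructor,
possibly after main() has returned.

    #include <chrono>
    #include <memory>
    #include <thread>

    struct global_state { /* ... */ };
    std::shared_ptr<global_state> g_state = std::make_shared<global_state>();

    int main()
    {
        std::thread([p = g_state]() mutable {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            p.reset();  // if this is the last owner, ~global_state runs here
        }).detach();
        g_state.reset();
        // After main() returns, static teardown can race with the detached
        // thread, which can crash or hang the process at shutdown.
    }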
included in the edge graph. If it isn't, then the output labels from
chinese_whispers would be missing faces in this degenerate case. In short,
this fixes a bug where chinese_whispers(), when called from Python, would
sometimes return a labels array that didn't include labels for all the
inputs.
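A small sketch of the invariant in the C++ API (the Python fix amounts to the
same thing: add a self-edge for every input so the graph mentions every node):

    #include <dlib/clustering.h>
    #include <dlib/graph_utils.h>
    #include <vector>
    using namespace dlib;

    int main()
    {
        // 4 faces: 0 and 1 match each other, 2 and 3 match nothing.
        std::vector<sample_pair> edges;
        edges.push_back(sample_pair(0, 1));
        // Without any edge mentioning nodes 2 and 3, chinese_whispers only
        // sees 2 nodes and labels.size() would be 2, not 4.  A self-edge
        // per node guarantees one label per input:
        for (unsigned long i = 0; i < 4; ++i)
            edges.push_back(sample_pair(i, i));

        std::vector<unsigned long> labels;
        chinese_whispers(edges, labels);
        // labels.size() == 4: one cluster label per face
    }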
dimensions in the same format as the mmod_options object (i.e. two lengths
measured in pixels). This should make defining random cropping strategies that
are consistent with MMOD settings much more straightforward, since you can
just take the mmod_options settings, give them to the random_cropper, and it
will do the right thing.
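For example, a sketch of the intended pairing (assuming the routine in
question is random_cropper::set_min_object_size; the 80/40 pixel sizes are
hypothetical and stand in for the target_size and min_target_size you would
pass to mmod_options):

    #include <dlib/dnn.h>
    #include <dlib/image_transforms.h>
    using namespace dlib;

    std::vector<std::vector<mmod_rect>> training_boxes;  // filled from your dataset
    mmod_options options(training_boxes, 80, 40);  // sizes in pixels

    random_cropper cropper;
    // The same two pixel lengths now go straight to the cropper, so it and
    // the MMOD loss agree on which objects are too small to train on.
    cropper.set_min_object_size(80, 40);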
* Use banded Cholesky factorization if possible
  Computation cost goes from n·n·n to n·n·b, where b is the band size (see the sketch after this list)
* Tidy up whitespace
* Fix typo
* Escape from banded matrix detection correctly
* Use LAPACK banded Cholesky factorisation where possible
* Add banded chol tests
* Add test for banded chol in column major layout
* Use row major layout for banded chol - more efficient as we will pass to LAPACK
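The practical effect, sketched on a tiny banded (tridiagonal) example; the
band detection and LAPACK dispatch happen inside chol() when dlib is built
against LAPACK:

    #include <dlib/matrix.h>
    using namespace dlib;

    int main()
    {
        // A symmetric positive definite tridiagonal matrix (band size 1).
        matrix<double> A(4,4);
        A = 4, 1, 0, 0,
            1, 4, 1, 0,
            0, 1, 4, 1,
            0, 0, 1, 4;

        // chol() detects the banded structure and, when LAPACK is available,
        // uses the banded factorization instead of the dense routine.
        matrix<double> L = chol(A);
        // A == L*trans(L)
    }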
early iterations, since the model might produce a huge number of false alarms
while the detector is still bad. Processing all these detections can make
training run slowly until the model is good enough to avoid excessive false
alarms. This change puts more of a limit on the number of false alarms
processed during those early iterations and avoids the slowdown.
*before* allocating new memory. It used to be the other way around, which
caused momentary spikes of increased memory usage. In some cases that could
put you over the total memory available, which is obviously less than ideal
behavior.
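An illustration of the general pattern (not dlib's actual internals):
releasing first keeps peak usage at max(old, new) rather than old + new.

    #include <memory>

    int main()
    {
        const std::size_t old_n = 1 << 20, new_n = 1 << 22;
        std::unique_ptr<float[]> buf(new float[old_n]);
        // Resizing: free the old block *before* allocating the new one.
        buf.reset();
        buf.reset(new float[new_n]);
    }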
* Add a get_net parameter that allows calling the function without a forced flush to disk (see the discussion in #869 and the sketch after this list)
* A blindfolded attempt to fix a compile error on the CI server
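A minimal sketch of the new parameter in use; the network type here is
arbitrary, and training is assumed to happen elsewhere:

    #include <dlib/dnn.h>
    using namespace dlib;

    using net_type = loss_multiclass_log<fc<10, input<matrix<float>>>>;

    int main()
    {
        net_type net;
        dnn_trainer<net_type> trainer(net);
        // ... training ...
        // Skip the synchronization-file write when just peeking at the net:
        const net_type& n = trainer.get_net(force_flush_to_disk::no);
    }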