Also added an operator<< for dnn_trainer that prints the parameters it's using.
These changes also break backwards compatibility with the previous
serialization format for dnn_trainer objects.
Added a check for truth boxes that can't be detected because the non-max
suppression settings would prevent them from being output at the same
time. If this happens, we print a warning message and set one of the
offending boxes to "ignore".
* Add per-pixel mean square loss
* Add documentation of loss_mean_squared_per_pixel_
* Add test case for per-pixel mean square loss: a simple autoencoder
* Review fix: reorder params of function tensor_index, so that the order corresponds to the convention used in the rest of the dlib code base
* Review fix: add breaks as intended, and change the rest of the test accordingly
* Again a case where the tests pass locally for me but not on AppVeyor/Travis; this commit is a blind attempt to fix the problem
(it also fixes a compiler warning)
* Remove linking to libpython on Linux
* Add libpython-free building on OSX
* Add back automatic discovery of the Python include dir
* Make the libs non-required when building on manylinux
Testing with a video opened like cv::VideoCapture cap("Sample.avi") may break when the video runs out of frames before the user closes the main window.
* Add new loss for weighted pixel inputs (may be useful e.g. to emphasize rare classes)
* Deduplicate method loss_multiclass_log_per_pixel_(weighted_)::to_label
* Add a simple test case for weighted inputs
(also, fix a typo in test_tensor_resize_bilienar's name)
* Add loss_multiclass_log_per_pixel_weighted_ to loss_abstract.h
* Decrease the amount of weighting
* There's no need to train for a very long time