Commit Graph

7862 Commits

Author SHA1 Message Date
Davis King c90cb0bc14 Remove unit tests for python 2.7 since that version of python is dead,
and the unit test servers don't even support it anymore.
2020-01-30 19:41:26 -05:00
Davis King f5c828291d Added note about vcpkg availability 2020-01-30 19:36:49 -05:00
Davis King b13840a86f Fixed code needing C++14 to use C++11 features instead. 2020-01-29 07:30:59 -05:00
Juha Reunanen 46bcd2059e
If nearest-neighbor interpolation is wanted, then don't use an image pyramid. (#1986) 2020-01-28 21:03:39 -05:00
Hye Sung Jung 443021882c
fix spelling errors (#1985) 2020-01-28 21:02:41 -05:00
Julien Schueller 870f49a636
Do not link to libnsl (#1987)
Dlib does not use nsl symbols, so why was this necessary?
It makes the conda-forge build fail.
2020-01-28 21:01:14 -05:00
Davis King 20d02b80e7 run tests for python 3.8 on travis ci 2020-01-27 07:53:19 -05:00
Davis King 0c415dbb4c Add a little test 2020-01-20 07:58:50 -05:00
Davis King f71e49f28e remove unused variables 2020-01-20 07:58:07 -05:00
Davis King d88b2575a1 Make copying non-const cuda_data_ptrs to const ones nicer. 2020-01-20 07:57:33 -05:00
Juha Reunanen bd6994cc66 Add new loss layer for binary loss per pixel (#1976)
* Add new loss layer for binary loss per pixel
2020-01-20 07:47:47 -05:00
Davis King 6bdd289f73 Added static_pointer_cast() for casting cuda_data_void_ptr to
cuda_data_ptr<T>.  Also moved some memcpy() functions to namespace scope
so that calling them like dlib::cuda::memcpy() can reference them.  It
was slightly annoying before.
2020-01-18 13:27:25 -05:00
Davis King 2326a72281 fix code not compiling with some versions of libjpeg as a result of the change I just made. 2020-01-18 13:07:59 -05:00
Davis King e1b667181b Fixed const correctness on the in-memory jpeg loading code. 2020-01-18 11:53:46 -05:00
Davis King a0af6b7afd tweaked docs 2020-01-17 20:33:47 -05:00
Adrià Arrufat 60dad52c12 add visitor to count net parameters (#1977) 2020-01-17 20:32:19 -05:00
Juha Reunanen 356bba38fe Minor fix: print to console only if the verbose flag is on (#1980) 2020-01-16 20:23:47 -05:00
Manjunath Bhat d766f5e82e Adding Mish activation function (#1938)
* Adding Mish activation function

* Bug fixed

* Added test for Mish

* Removed unwanted comments

* Simplified calculation and removed comments

* Kernel added and gradient computation simplified

* Gradient simplified

* Corrected gradient calculations

* Compute output when input greater than 8

* Minor correction

* Remove unnecessary pgrad for Mish

* Removed CUDNN calls

* Add standalone CUDA implementation of the Mish activation function

* Fix in-place gradient in the CUDA version; refactor a little

* Swap delta and omega

* Need to have src (=x) (and not dest) available for Mish

* Add test case that makes sure that cuda::mish and cpu::mish return the same results

* Minor tweaking to keep the previous behaviour

Co-authored-by: Juha Reunanen <juha.reunanen@tomaattinen.com>
2020-01-15 06:04:02 -05:00
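
For context, the Mish activation added above is mish(x) = x * tanh(softplus(x)), where softplus(x) = log(1 + exp(x)). Below is a minimal standalone C++ sketch, not dlib's actual CPU/CUDA kernel; the large-input shortcut mirrors the "input greater than 8" step mentioned in the commit list.

#include <cmath>

// Reference Mish: mish(x) = x * tanh(log(1 + exp(x))).
// For large x, softplus(x) ~= x and tanh saturates to 1, so the output is
// effectively x; returning x directly avoids overflow in exp().
inline float mish(float x)
{
    if (x > 8.0f)
        return x;
    const float sp = std::log1p(std::exp(x));   // softplus(x)
    return x * std::tanh(sp);
}
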
Davis King a82bf1536e omg, github, cleanup merge stuff 2020-01-14 21:26:25 -05:00
Davis King b70c0a6f80 Merge branch 'thebhatman-Mish' 2020-01-14 21:24:57 -05:00
thebhatman c454bdc182 Added test for Mish 2020-01-14 21:24:18 -05:00
Davis King 8a91a7c7c1 Merge branch 'thebhatman-Mish' 2020-01-14 21:19:15 -05:00
thebhatman edff12d2e1 Adding Mish activation function 2020-01-14 21:18:28 -05:00
Davis King cd5f0b0554 fixed failing tests due to recent default change in solver stopping criteria 2020-01-13 08:00:51 -05:00
Davis King 0c42dcca8d fixed test failure 2020-01-12 23:17:59 -05:00
Davis King adce342366 adjusted eps so tests still pass 2020-01-12 21:20:25 -05:00
Davis King 931fb52659 fixed test not building due to the commit I just made 2020-01-12 19:54:07 -05:00
Davis King 59d1b9d8c5 Added a relative epsilon termination option to svm_c_linear_trainer 2020-01-12 19:48:26 -05:00
Davis King 45731b863c Always check that the data given to cross_validate_trainer() is valid.
It's a cheap check, and easy for someone to forget about otherwise.
2020-01-12 19:36:53 -05:00
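
For reference, a hedged sketch of the call that this check protects, using the standard dlib API; the toy data and C value are made up for illustration only.

#include <dlib/svm.h>
#include <iostream>
#include <vector>

int main()
{
    using sample_type = dlib::matrix<double, 0, 1>;
    using kernel_type = dlib::linear_kernel<sample_type>;

    std::vector<sample_type> samples;
    std::vector<double> labels;
    // Toy 1-D data: positives near +v, negatives near -v.
    for (double v = 0.1; v <= 1.0; v += 0.1)
    {
        sample_type s(1);
        s(0) = v;  samples.push_back(s);  labels.push_back(+1);
        s(0) = -v; samples.push_back(s);  labels.push_back(-1);
    }

    dlib::svm_c_linear_trainer<kernel_type> trainer;
    trainer.set_c(10);

    // With the change above, cross_validate_trainer() always verifies that
    // samples/labels form a valid binary classification problem first.
    std::cout << dlib::cross_validate_trainer(trainer, samples, labels, 3);
}
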
Davis King eae3caf9f8 Fixed function_evaluation_request::set() invalidating function_evaluation_request::x() 2020-01-11 21:12:34 -05:00
Davis King a53354eb79 updated docs 2020-01-09 22:31:16 -05:00
Adrià Arrufat fcc7a75cda Remove pca comment from vector_normalizer documentation (#1965) 2020-01-09 21:45:40 -05:00
jeffeDurand 54a9a5bbf3 Fix error for opencv 3.4.9+ over IplImage (#1949) (#1963) 2020-01-07 21:12:30 -05:00
Juha Reunanen e4998c13b3 Add sanity check (#1964) 2020-01-07 21:10:45 -05:00
Davis King b817bc1ea9 fixed check range to match the comment 2020-01-05 08:19:08 -05:00
Davis King 5557577c95 Even the newest CUDA runtime has a buggy cudaStreamSynchronize. 2020-01-05 08:17:25 -05:00
Davis King 3d5a3d7b9a fixed spelling error 2020-01-05 08:09:32 -05:00
Davis King 471c3d30e1 fix formatting 2019-12-28 08:31:31 -05:00
Davis King a4bf6e1e6a cleanup cv_image code. This also fixes a build error with the very latest version of OpenCV. 2019-12-28 08:29:22 -05:00
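
For reference, cv_image is the usual non-copying bridge from an OpenCV cv::Mat to a dlib image; a minimal sketch of that interop (the image path is hypothetical):

#include <opencv2/imgcodecs.hpp>
#include <dlib/opencv.h>
#include <dlib/image_processing/frontal_face_detector.h>

int main()
{
    cv::Mat frame = cv::imread("some_image.jpg");    // BGR layout by default

    // Wrap the cv::Mat as a dlib image view without copying the pixel data.
    dlib::cv_image<dlib::bgr_pixel> img(frame);

    // The wrapped view can be passed anywhere dlib expects a generic image.
    auto detector = dlib::get_frontal_face_detector();
    auto faces = detector(img);
    return (int)faces.size();
}
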
Davis King 34dc730304 Fix opencv version check to work on all opencv versions 2019-12-22 07:52:08 -05:00
Davis King 131e459809 Update to work with latest version of OpenCV 2019-12-21 09:48:56 -05:00
Davis King 2c7e625a15 Record last changeset and set PATCH version to 99 2019-12-14 14:10:30 -05:00
Davis King d71497d466 Created release v19.19 2019-12-14 14:08:33 -05:00
Davis King 591331f941 updated docs 2019-12-14 14:07:20 -05:00
Davis King b0e3c36020 deleted old, wrong, and duplicative function docs 2019-12-04 21:50:01 -05:00
Davis King e88e166e98 slightly nicer default behavior 2019-11-29 07:45:50 -05:00
Davis King f2cd9e3b1d use a time-based execution limit in example 2019-11-28 10:48:02 -05:00
Davis King 501d17f693 Made find_max_global() and its overloads measure the execution speed of
the user provided objective function.  If it's faster than the LIPO
upper bounding Monte Carlo model then we skip or limit the Monte Carlo
stuff and just run the objective function more often.  Previously,
find_max_global() simply assumed the objective functions were really
expensive to invoke.

TLDR: this change makes find_max_global() run a lot faster on objective
functions that are themselves very fast to execute, since it will skip
the expensive Monte Carlo modeling and just call the objective function
a bunch instead.
2019-11-28 10:42:27 -05:00
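
A small sketch of the case this speeds up: calling find_max_global() on an objective that is itself cheap to evaluate. The quadratic below and the call budget are arbitrary examples, not from the commit.

#include <dlib/global_optimization.h>
#include <iostream>

int main()
{
    // A very cheap objective; with the change above, find_max_global()
    // notices the low evaluation cost and spends less time on its internal
    // Monte Carlo modeling, calling the objective more often instead.
    auto result = dlib::find_max_global(
        [](double x, double y) { return -(x - 0.3) * (x - 0.3) - (y + 0.2) * (y + 0.2); },
        {-1, -1},                        // lower bounds
        { 1,  1},                        // upper bounds
        dlib::max_function_calls(200));  // evaluation budget

    std::cout << "best point:\n" << result.x << "best value: " << result.y << "\n";
}
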
Davis E. King e82e2ceb2b
Fix build error in some versions of visual studio 2019-11-15 00:24:27 -05:00
Juha Reunanen d175c35074 Instance segmentation (#1918)
* Add instance segmentation example - first version of training code

* Add MMOD options; get rid of the cache approach, and instead load all MMOD rects upfront

* Improve console output

* Set filter count

* Minor tweaking

* Inference - first version, at least compiles!

* Ignore overlapped boxes

* Ignore even small instances

* Set overlaps_ignore

* Add TODO remarks

* Revert "Set overlaps_ignore"

This reverts commit 65adeff1f8.

* Set result size

* Set label image size

* Take ignore-color into account

* Fix the cropping rect's aspect ratio; also slightly expand the rect

* Draw the largest findings last

* Improve masking of the current instance

* Add some perturbation to the inputs

* Simplify ground-truth reading; fix random cropping

* Read even class labels

* Tweak default minibatch size

* Learn only one class

* Really train only instances of the selected class

* Remove outdated TODO remark

* Automatically skip images with no detections

* Print to console what was found

* Fix class index problem

* Fix indentation

* Allow choosing multiple classes

* Draw rect in the color of the corresponding class

* Write detector window classes to ostream; also group detection windows by class (when ostreaming)

* Train a separate instance segmentation network for each class label

* Use separate synchronization file for each seg net of each class

* Allow more overlap

* Fix sorting criterion

* Fix interpolating the predicted mask

* Improve bilinear interpolation: if output type is an integer, round instead of truncating

* Add helpful comments

* Ignore large aspect ratios; refactor the code; tweak some network parameters

* Simplify the segmentation network structure; make the object detection network more complex in turn

* Problem: CUDA errors not reported properly to console
Solution: stop and join data loader threads even in case of exceptions

* Minor parameters tweaking

* Loss may have increased, even if prob_loss_increasing_thresh > prob_loss_increasing_thresh_max_value

* Add previous_loss_values_dump_amount to previous_loss_values.size() when deciding if loss has been increasing

* Improve behaviour when loss actually increased after disk sync

* Revert some of the earlier change

* Disregard dumped loss values only when deciding if learning rate should be shrunk, but *not* when deciding if loss has been going up since last disk sync

* Revert "Revert some of the earlier change"

This reverts commit 6c852124ef.

* Keep enough previous loss values, until the disk sync

* Fix maintaining the dumped (now "effectively disregarded") loss values count

* Detect cats instead of aeroplanes

* Add helpful logging

* Clarify the intention and the code

* Review fixes

* Add operator== for the other pixel types as well; remove the inline

* If available, use constexpr if

* Revert "If available, use constexpr if"

This reverts commit 503d4dd335.

* Simplify code as per review comments

* Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh

* Clarify console output

* Revert "Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh"

This reverts commit 9191ebc776.

* To keep the changes to a bare minimum, revert the steps_since_last_learning_rate_shrink change after all (at least for now)

* Even empty out some of the previous test loss values

* Minor review fixes

* Can't use C++14 features here

* Do not use the struct name as a variable name
2019-11-14 22:53:16 -05:00