Commit Graph

7891 Commits

Author SHA1 Message Date
Davis King 0e923cff93 A little bit of cleanup 2020-04-18 09:30:59 -04:00
Adrià Arrufat 55e9c890fd
Add cuda implementation for loss_mean_squared_per_channel_and_pixel (#2053)
* wip: attempt to use cuda for loss mse channel

* wip: maybe this is a step in the right direction

* Try to fix dereferencing the truth data (#1)

* Try to fix dereferencing the truth data

* Fix memory layout

* fix loss scaling and update tests

* rename temp1 to temp

* re-add lambda captures for output_width and output_height

clangd complained about these captures and suggested removing them
in the first place:

```
Lambda capture 'output_height' is not required to be captured for this use (fix available)
Lambda capture 'output_width' is not required to be captured for this use (fix available)
```

* add a weighted_loss typedef to loss_multiclass_log_weighted_ for consistency

* update docs for weighted losses

* refactor multi channel loss and add cpu-cuda tests

* make operator() const

* make error relative to the loss value

Co-authored-by: Juha Reunanen <juha.reunanen@tomaattinen.com>
2020-04-18 09:29:46 -04:00
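For context, a minimal sketch of how the loss being ported here is used (the std::array label type follows the per-channel loss docs; the toy net shape is an assumption):

```
#include <dlib/dnn.h>
using namespace dlib;

// Toy net ending in the loss this PR ports to CUDA. Each sample's label is
// one float matrix per output channel, i.e. one target value per pixel.
constexpr long channels = 2;
using mse_net = loss_mean_squared_per_channel_and_pixel<channels,
                    con<channels,1,1,1,1, input<matrix<float>>>>;

int main()
{
    mse_net net;
    std::vector<matrix<float>> samples;                       // filled elsewhere
    std::vector<std::array<matrix<float>, channels>> labels;  // per-channel targets
    // train with dnn_trainer<mse_net> as usual
}
```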
Davis King b42722a75d Fix DLIB_ISO_CPP_ONLY not working 2020-04-14 07:49:28 -04:00
Davis King fbb2db2188 fix example cmake script 2020-04-04 09:55:08 -04:00
Adrià Arrufat b44d9465f6
Fix warning in dnn_trainer initialization list (#2049)
The thread pool was initialized after the network, which led to a
reorder warning in GCC 9.3.0
2020-04-03 07:58:05 -04:00
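The warning comes from a C++ rule worth knowing: members are initialized in declaration order, not in the order the initializer list writes them. A minimal illustration with hypothetical members:

```
struct trainer_like               // hypothetical names, not dlib's code
{
    int net;                      // declared first, so initialized first
    int pool;                     // declared second, so initialized second

    trainer_like()
        : pool(4), net(pool)      // -Wreorder: 'net' actually runs before
    {}                            // 'pool', so it reads an indeterminate value
};
```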
Adrià Arrufat 5a715fe24d
Remove outdated comment from DCGAN example (#2048)
* Remove outdated comment

That comment was there from when I was using a dnn_trainer to train
the discriminator network.

* Fix case
2020-04-02 07:14:42 -04:00
Davis King 237746fc13 disable in source builds 2020-03-31 19:41:38 -04:00
Adrià Arrufat e9c56fb21a
Fix warnings while running the tests (#2046)
* fix some warnings when running tests

* revert changes in CMakeLists.txt

* update example to make use of newly promoted method

* update tests to make use of newly promoted methods
2020-03-31 19:35:23 -04:00
Adrià Arrufat d1d96e380c
remove branch from cuda kernel (#2045)
* remove branch from cuda kernel

* promote lambda to a global function
2020-03-31 19:33:25 -04:00
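One standard way to drop such a branch from an activation kernel (an assumed illustration using the leaky_relu formula, not the actual diff): for 0 <= alpha <= 1 the conditional collapses to a max:

```
#include <algorithm>

// branchy:     y = x > 0 ? x : alpha * x;
// branchless:  y = max(x, alpha * x);   // equivalent whenever 0 <= alpha <= 1
float leaky(float x, float alpha) { return std::max(x, alpha * x); }
```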
Adrià Arrufat 57bb5eb58d
use running stats to track losses (#2041) 2020-03-30 20:20:50 -04:00
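dlib::running_stats is the utility behind this change; it accumulates mean and variance in O(1) memory, which makes it a natural fit for tracking a smoothed loss. A quick sketch:

```
#include <dlib/statistics.h>
#include <iostream>

int main()
{
    dlib::running_stats<double> loss_stats;
    for (double loss : {0.9, 0.7, 0.6, 0.55})
        loss_stats.add(loss);
    std::cout << "mean loss: " << loss_stats.mean()
              << ", stddev: " << loss_stats.stddev() << std::endl;
}
```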
Davis King 0057461a62 Promote some of the sub-network methods into the add_loss_layer interface so users don't have to write .subnet() so often. 2020-03-29 12:17:56 -04:00
Davis King c79f64f52d make update_parameters() a little more uniform 2020-03-29 11:19:37 -04:00
Davis King fd0145345e Fix Use of C++17 deprecated feature: std::iterator #2036 2020-03-29 11:08:07 -04:00
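The usual fix for this deprecation (a sketch of the standard pattern, not the exact dlib diff) is to declare the five iterator typedefs directly instead of inheriting from std::iterator:

```
#include <cstddef>
#include <iterator>

struct my_iter  // was: struct my_iter : std::iterator<std::forward_iterator_tag, int>
{
    using iterator_category = std::forward_iterator_tag;
    using value_type        = int;
    using difference_type   = std::ptrdiff_t;
    using pointer           = int*;
    using reference         = int&;
    // ... operator*, operator++, operator==, etc.
};
```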
Adrià Arrufat f42f100d0f
Add DCGAN example (#2035)
* wip: dcgan-example

* wip: dcgan-example

* update example to use leaky_relu and remove bias from net

* wip

* it works!

* add more comments

* add visualization code

* add example documentation

* rename example

* fix comment

* better comment format

* fix the noise generator seed

* add message to hit enter for image generation

* fix srand, too

* add std::vector overload to update_parameters

* improve training stability

* better naming of variables

make sure it is clear we update the generator with the discriminator's
gradient using fake samples and true labels

* fix comment: generator -> discriminator

* update leaky_relu docs to match the relu ones

* replace not with !

* add Davis' suggestions to make training more stable

* use tensor instead of resizable_tensor

* do not use dnn_trainer for discriminator
2020-03-29 11:07:38 -04:00
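A condensed sketch of the generator update these commits arrive at, written with the methods promoted in 0057461a62 (the function shape and names are assumptions, not the example's exact code): fake samples go through the discriminator with "real" labels, and the resulting data gradient is pushed back into the generator.

```
#include <dlib/dnn.h>
using namespace dlib;

template <typename gen_net, typename disc_net>
double generator_step(gen_net& generator, disc_net& discriminator,
                      const resizable_tensor& noise, size_t batch_size)
{
    generator.forward(noise);
    const tensor& fakes = generator.get_output();
    // Label the fakes as real (1): the discriminator's error then tells the
    // generator how to produce more convincing samples.
    const std::vector<float> real_labels(batch_size, 1.f);
    const double g_loss = discriminator.compute_loss(fakes, real_labels.begin());
    discriminator.back_propagate_error(fakes);
    generator.back_propagate_error(noise, discriminator.get_final_data_gradient());
    return g_loss;
}
```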
Adrià Arrufat d610e56c2a
add leaky_relu activation layer (#2033)
* add leaky_relu activation layer

* add inplace case for leaky_relu and test_layer

* make clear that alpha is not learned by leaky_relu

* remove branch from cuda kernel
2020-03-21 11:29:20 -04:00
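A toy use of the new layer (arbitrary shapes): leaky_relu computes f(x) = x for x > 0 and f(x) = alpha*x otherwise, with alpha fixed rather than learned (unlike prelu).

```
#include <dlib/dnn.h>
using namespace dlib;

// leaky_relu slots in like any other activation; the slope alpha is set at
// construction (its default value is documented in the layer's abstract).
using toy_net = loss_multiclass_log<fc<10, leaky_relu<fc<32, input<matrix<float>>>>>>;
```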
Juha Reunanen 74123841bb
To avoid a GPU memory leak, allow passing thread pools to dnn_trainer from outside (#2027)
* Problem: The CUDA runtime allocates resources for each thread, and apparently those resources are not freed when the corresponding threads terminate. Therefore, each instantiation of dnn_trainer leaks a bit of GPU memory.

Solution: Add possibility to pass thread pools from outside. This way, subsequent dnn_trainer instances can use the same threads, and there's no memory leak.

* Add helpful comments
2020-03-19 07:38:43 -04:00
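The fix's pattern in miniature, using dlib's own thread_pool (this shows the thread-reuse idea only, not the new dnn_trainer overload itself):

```
#include <dlib/threads.h>

int main()
{
    // One long-lived pool; successive "training runs" reuse its threads, so
    // per-thread CUDA state is allocated once instead of once per trainer.
    dlib::thread_pool pool(4);
    for (int run = 0; run < 3; ++run)
        pool.add_task_by_value([run]{ /* run training job #run */ });
    pool.wait_for_all_tasks();
}
```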
scott-vsi 6fc503d242
link against openblasp (#2028)
openblasp is a parallel (pthreads) build of openblas shipped on CentOS/Fedora
2020-03-18 21:45:51 -04:00
Adrià Arrufat 1380e6b95f
add loss multiclass log weighted (#2022)
* add loss_multiclass_log_weighted

* fix class name in loss_abstract

* add loss_multiclass_log_weighted test

* rename test function to match class name

* fix typo

* reuse the weighted label struct across weighted losses

* do not break compatibility with loss_multiclass_log_per_pixel_weighted

* actually test the loss and fix docs

* fix build with gcc 9
2020-03-18 08:33:54 -04:00
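A sketch of the new loss in use, assuming the shared weighted_label struct carries a class index plus a float weight (toy shapes):

```
#include <dlib/dnn.h>
using namespace dlib;

using wnet = loss_multiclass_log_weighted<fc<3, input<matrix<float>>>>;

int main()
{
    wnet net;
    std::vector<matrix<float>> samples;                // filled elsewhere
    std::vector<weighted_label<unsigned long>> labels;
    labels.push_back({0, 2.0f});  // class 0 counts double, e.g. a rare class
}
```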
hwiesmann 9185a925ce
Integer conversions generating compiler warnings (#2024)
* Prevention of compiler warning due to usage of int instead of a size type

* Conversion of status type to long to prevent compiler warnings

* The number of items read from a buffer is reported using the type "streamsize"

Co-authored-by: Hartwig <git@skywind.eu>
2020-03-14 19:12:04 -04:00
Facundo Galán 08aeada7d5
Replace result_of by invoke_result for C++17 and above (#2021)
Co-authored-by: Facundo Galan <fgalan@danaide.com.ar>
2020-03-13 07:53:40 -04:00
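The underlying pattern (a sketch of the standard technique, not dlib's exact change): std::result_of is deprecated in C++17 and removed in C++20, so the alias switches on the language version.

```
#include <type_traits>

template <typename F, typename... Args>
#if __cplusplus >= 201703L
using call_result_t = std::invoke_result_t<F, Args...>;
#else
using call_result_t = typename std::result_of<F(Args...)>::type;
#endif

static_assert(std::is_same<call_result_t<int(*)(double), double>, int>::value, "");
```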
scott-vsi c8a175f569
effect -> affect (#2019) 2020-03-11 22:56:07 -04:00
Davis King 7b35d7b234 removed inappropriate assert 2020-03-10 20:42:42 -04:00
hwiesmann e7087e5957
Prevention of compiler warning (#2015)
Co-authored-by: Hartwig <git@skywind.eu>
2020-03-10 20:02:02 -04:00
Adrià Arrufat c832d3b2fc
simplify resnet definition by reusing struct template parameter (#2010)
* simplify definition by reusing struct template parameter

* put resnet into its own namespace

* fix infer names

* rename struct impl to def
2020-03-09 21:21:04 -04:00
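The pattern in miniature, simplified from the resnet example (the block shown is an assumption, not the full definition): write the architecture once in a struct templated on the batch-norm layer, then instantiate it per use case.

```
#include <dlib/dnn.h>
using namespace dlib;

namespace resnet
{
    template <template <typename> class BN>
    struct def
    {
        template <long N, typename SUBNET>
        using block = relu<BN<con<N,3,3,1,1, SUBNET>>>;
        // ... the rest of the architecture reuses BN the same way
    };
    using train_def = def<bn_con>;  // batch norm while training
    using infer_def = def<affine>;  // fused affine layer for inference
}
```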
Davis King 3a53c78ad2 increment imglab version 2020-02-29 09:34:36 -05:00
Davis King 9a33669610 A little bit of cleanup and docs. Also added missing mutex lock. 2020-02-29 09:33:00 -05:00
martin 4ff365a530
imglab: chinese whispers ("automatic") clustering, keyboard shortcuts for zooming (#2007)
* imglab: add support for using chinese whispers for more automatic clustering

* widgets: refactor out zooming from wheel handling

* imglab: add keyboard shortcuts for zooming (tools/imglab/src/metadata_editor.cpp)
2020-02-29 09:31:28 -05:00
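The clustering primitive behind the new "automatic" mode is dlib's chinese_whispers(); how imglab derives the similarity edges from boxes is not shown here (that part is an assumption):

```
#include <dlib/clustering.h>
#include <vector>

int main()
{
    // Nodes joined by an edge tend to end up in the same cluster.
    std::vector<dlib::sample_pair> edges;
    edges.push_back(dlib::sample_pair(0, 1));  // items 0 and 1 are similar
    edges.push_back(dlib::sample_pair(1, 2));
    std::vector<unsigned long> labels;
    const unsigned long num_clusters = dlib::chinese_whispers(edges, labels);
    // labels[i] now holds the cluster id of item i
    (void)num_clusters;
}
```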
Davis King fc6992ac04 A little bit of cleanup 2020-02-07 08:12:18 -05:00
Adrià Arrufat 10d7f119ca
Add dnn_introduction3_ex (#1991)
* Add dnn_introduction3_ex
2020-02-07 07:59:36 -05:00
Davis King c90cb0bc14 Remove unit tests for python 2.7 since that version of python is dead,
and the unit test servers don't even support it anymore.
2020-01-30 19:41:26 -05:00
Davis King f5c828291d Added note about vcpkg availability 2020-01-30 19:36:49 -05:00
Davis King b13840a86f Fixed code needing C++14 to use C++11 features instead. 2020-01-29 07:30:59 -05:00
Juha Reunanen 46bcd2059e
If nearest-neighbor interpolation is wanted, then don't use an image pyramid. (#1986) 2020-01-28 21:03:39 -05:00
Hye Sung Jung 443021882c
fix spelling errors (#1985) 2020-01-28 21:02:41 -05:00
Julien Schueller 870f49a636
Do not link to libnsl (#1987)
Dlib does not use nsl symbols, so why was this necessary?
It makes the conda-forge build fail
2020-01-28 21:01:14 -05:00
Davis King 20d02b80e7 run tests for python 3.8 on travis ci 2020-01-27 07:53:19 -05:00
Davis King 0c415dbb4c Add little test 2020-01-20 07:58:50 -05:00
Davis King f71e49f28e remove unused variables 2020-01-20 07:58:07 -05:00
Davis King d88b2575a1 Make copying non-const cuda_data_ptrs to const ones nicer. 2020-01-20 07:57:33 -05:00
Juha Reunanen bd6994cc66 Add new loss layer for binary loss per pixel (#1976)
* Add new loss layer for binary loss per pixel
2020-01-20 07:47:47 -05:00
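A toy sketch of where the new loss sits (the one-plane conv head and the per-pixel binary label convention follow the per-pixel losses' docs; exact details are in loss_abstract.h):

```
#include <dlib/dnn.h>
using namespace dlib;

// Binary segmentation head: one output plane, one yes/no decision per pixel.
using seg_net = loss_binary_log_per_pixel<con<1,1,1,1,1, input<matrix<float>>>>;
```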
Davis King 6bdd289f73 Added static_pointer_cast() for casting cuda_data_void_ptr to
cuda_data_ptr<T>.  Also moved some memcpy() functions to namespace scope
so that they can be called like dlib::cuda::memcpy().  It was slightly
annoying before.
2020-01-18 13:27:25 -05:00
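By analogy with std::static_pointer_cast, the new helper recovers a typed device pointer from the type-erased one. A hedged sketch (the exact overload set lives in the cuda_data_ptr header, and this only makes sense in a DLIB_USE_CUDA build):

```
#include <dlib/dnn.h>

// Assuming a single-argument overload shaped like this exists.
dlib::cuda::cuda_data_ptr<float> as_floats(const dlib::cuda::cuda_data_void_ptr& raw)
{
    return dlib::cuda::static_pointer_cast<float>(raw);
}
```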
Davis King 2326a72281 fix code not compiling with some versions of libjpeg as a result of the change I just made. 2020-01-18 13:07:59 -05:00
Davis King e1b667181b Fixed const correctness on the in-memory jpeg loading code. 2020-01-18 11:53:46 -05:00
Davis King a0af6b7afd tweaked docs 2020-01-17 20:33:47 -05:00
Adrià Arrufat 60dad52c12 add visitor to count net parameters (#1977) 2020-01-17 20:32:19 -05:00
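Presumably usable like this (assuming the visitor is exposed as the count_parameters() helper that later examples print):

```
#include <dlib/dnn.h>
#include <iostream>
using namespace dlib;

int main()
{
    loss_multiclass_log<fc<10, input<matrix<float>>>> net;
    std::cout << "parameters: " << count_parameters(net) << '\n';
}
```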
Juha Reunanen 356bba38fe Minor fix: print to console only if the verbose flag is on (#1980) 2020-01-16 20:23:47 -05:00
Manjunath Bhat d766f5e82e Adding Mish activation function (#1938)
* Adding Mish activation function

* Bug fixed

* Added test for Mish

* Removed unwanted comments

* Simplified calculation and removed comments

* Kernel added and gradient computation simplified

* Gradient simplified

* Corrected gradient calculations

* Compute output when input greater than 8

* Minor correction

* Remove unnecessary pgrad for Mish

* Removed CUDNN calls

* Add standalone CUDA implementation of the Mish activation function

* Fix in-place gradient in the CUDA version; refactor a little

* Swap delta and omega

* Need to have src (=x) (and not dest) available for Mish

* Add test case that makes sure that cuda::mish and cpu::mish return the same results

* Minor tweaking to keep the previous behaviour

Co-authored-by: Juha Reunanen <juha.reunanen@tomaattinen.com>
2020-01-15 06:04:02 -05:00
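For reference, the function this series implements, including the large-input shortcut the "greater than 8" commit refers to (the threshold's exact semantics are assumed from the message): mish(x) = x * tanh(softplus(x)), where softplus(x) = ln(1 + e^x) is approximately x for large x.

```
#include <cmath>
#include <cstdio>

float mish(float x)
{
    // For large x, softplus(x) ~= x; using x directly avoids overflow in exp.
    const float softplus = x > 8.f ? x : std::log1p(std::exp(x));
    return x * std::tanh(softplus);
}

int main()
{
    for (float x : {-2.f, 0.f, 2.f, 10.f})
        std::printf("mish(%g) = %g\n", x, mish(x));
}
```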
Davis King a82bf1536e omg, github, cleanup merge stuff 2020-01-14 21:26:25 -05:00
Davis King b70c0a6f80 Merge branch 'thebhatman-Mish' 2020-01-14 21:24:57 -05:00
thebhatman c454bdc182 Added test for Mish 2020-01-14 21:24:18 -05:00