Commit Graph

5179 Commits

Author SHA1 Message Date
Davis King 59b44849bd fix typo, doesn't really matter, but still 2020-08-13 07:47:59 -04:00
Davis King 02e70ce3ca Record last changeset and set PATCH version to 99 2020-08-08 15:30:37 -04:00
Davis King 9117bd7843 Created release v19.21 2020-08-08 15:26:07 -04:00
Davis King 2c70aad12c Use a cache to avoid calls to the cuDNN algorithm selection routines. 2020-08-07 16:24:28 -04:00
Davis King 8910445a7a fix some spelling and grammar errors 2020-08-07 15:41:42 -04:00
Davis King 4721075314 More optimization unit tests 2020-08-07 09:57:12 -04:00
Davis King a9d554a4ac minor cleanup 2020-08-05 08:13:58 -04:00
yuriio ff3023f266
Added support for loading PNG images from a data buffer. (#2137)
* Added support for loading PNG images from a data buffer.

* Fixed code not compiling with some versions of libpng that don't have the const specifier.

* Used FileInfo struct as a single parameter for the read_image method.
2020-08-05 08:11:46 -04:00
Davis King 7b564927d6 Switching to what is hopefully a better fix for the following CUDA error
error: calling a constexpr host function("log1p") from a device function("cuda_log1pexp") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.

The error only happens with some versions of CUDA.
2020-08-01 13:48:30 -04:00
Davis King f8cfe63904 Avoid unnecessarily asking cuDNN which algorithms to use, since this is slow in cuDNN 8.0 2020-08-01 13:45:38 -04:00
Davis King 6c3243f766 Cleanup cuDNN conv algorithm selection code slightly by moving it into its own function. 2020-08-01 13:33:39 -04:00
Davis King 4d18e0d0c7 oops, fixing a weird typo 2020-07-26 15:13:20 -04:00
Davis King 3400e163e8 tweaked cca test thresholds to avoid false positives 2020-07-26 12:43:21 -04:00
Davis King 943408d2d2 Allow forwarding initial function evaluations into find_max_global() 2020-07-26 12:43:21 -04:00
Davis King 5a80ca9e5f Apply --expt-relaxed-constexpr to all older versions of cuda. 2020-07-24 23:50:22 -04:00
jbfove 5650ce45a1
Fix restoration of MSVC warnings in public headers (#2135)
Previously they were restored to default values, which had the effect of negating the current settings of the calling code (whether set in the compiler options or by an earlier pragma).
2020-07-22 06:07:49 -04:00
Davis King 23b9abd07a Switch cuda target architecture from sm_30 to sm_50. I.e. Maxwell instead of Kepler. 2020-07-11 21:07:36 -04:00
stoperro a2498dc47c
Additional documentation for failed dlib::layer<> use. (#2118) 2020-06-28 11:35:15 -04:00
Davis King b9f4da5522 Make cuDNN test project failure print a message saying exactly why it failed. 2020-06-21 08:48:02 -04:00
Davis King facefa0204 Fix random forest regression not doing quite the right thing. 2020-06-20 14:44:30 -04:00
Davis King fe803b566f add support for cudnn 8.0 2020-06-20 09:43:17 -04:00
Davis King 1515adc744 work around a bug in nvcc 2020-06-10 08:19:59 -04:00
Davis King 53b6ea3bf5 Record last changeset and set PATCH version to 99 2020-06-06 14:58:34 -04:00
Davis King 5612caa169 Created release v19.20 2020-06-06 14:53:52 -04:00
Davis King 883101477d minor cleanup 2020-06-01 08:36:16 -04:00
stoperro a83242014e
Corrected interpolate_bilinear for lab_pixel. (#2091)
* Corrected interpolate_bilinear for non-RGB images so they do not collapse into grayscale (#2089)

* interpolate_bilinear now uses pixel_to_vector for shorter code.

* pixels now have operator!=.

* Explicitly use float interpolation

* Use C++11 static_assert() in interpolation.

* Corrected documentation for interpolate_bilinear, interpolate_quadratic

* Corrected formatting near interpolate_bilinear
2020-06-01 08:35:44 -04:00
Davis King 693aa0a719 fix build errors in cuda 10.2 2020-05-14 22:06:18 -04:00
Juha Reunanen c7062aa363
Minor optimization: add shortcut to in-place image resize if size_scale is 1 (#2076) 2020-05-04 21:10:31 -04:00
Davis King 253745d29f fix typo in comment 2020-04-19 13:57:16 -04:00
Davis King a2e45f00b2 Reduce code duplication a bit and make equal_error_rate() give correct results when called on data where all detection scores are identical.
Previously it would say the EER was 0, but really it should have said 1 in this case.
2020-04-18 13:57:56 -04:00
Davis King 0e923cff93 A little bit of cleanup 2020-04-18 09:30:59 -04:00
Adrià Arrufat 55e9c890fd
Add cuda implementation for loss_mean_squared_per_channel_and_pixel (#2053)
* wip: attempt to use cuda for loss mse channel

* wip: maybe this is a step in the right direction

* Try to fix dereferencing the truth data (#1)

* Try to fix dereferencing the truth data

* Fix memory layout

* fix loss scaling and update tests

* rename temp1 to temp

* readd lambda captures for output_width and output_height

clangd was complaining about this, and suggested removing them
in the first place:

```
Lambda capture 'output_height' is not required to be captured for this use (fix available)
Lambda capture 'output_width' is not required to be captured for this use (fix available)
```

* add a weighted_loss typedef to loss_multiclass_log_weighted_ for consistency

* update docs for weighted losses

* refactor multi channel loss and add cpu-cuda tests

* make operator() const

* make error relative to the loss value

Co-authored-by: Juha Reunanen <juha.reunanen@tomaattinen.com>
2020-04-18 09:29:46 -04:00
Davis King b42722a75d Fix DLIB_ISO_CPP_ONLY not working 2020-04-14 07:49:28 -04:00
Adrià Arrufat b44d9465f6
Fix warning in dnn_trainer initialization list (#2049)
The thread pool was initialized after the network, so it led to a
reorder warning in GCC 9.3.0
2020-04-03 07:58:05 -04:00
Davis King 237746fc13 disable in source builds 2020-03-31 19:41:38 -04:00
Adrià Arrufat e9c56fb21a
Fix warnings while running the tests (#2046)
* fix some warnings when running tests

* revert changes in CMakeLists.txt

* update example to make use of newly promoted method

* update tests to make use of newly promoted methods
2020-03-31 19:35:23 -04:00
Adrià Arrufat d1d96e380c
remove branch from cuda kernel (#2045)
* remove branch from cuda kernel

* promote lambda to a global function
2020-03-31 19:33:25 -04:00
Davis King 0057461a62 Promote some of the sub-network methods into the add_loss_layer interface so users don't have to write .subnet() so often. 2020-03-29 12:17:56 -04:00
Davis King c79f64f52d make update_parameters() a little more uniform 2020-03-29 11:19:37 -04:00
Davis King fd0145345e Fix use of C++17 deprecated feature std::iterator (#2036) 2020-03-29 11:08:07 -04:00
Adrià Arrufat f42f100d0f
Add DCGAN example (#2035)
* wip: dcgan-example

* wip: dcgan-example

* update example to use leaky_relu and remove bias from net

* wip

* it works!

* add more comments

* add visualization code

* add example documentation

* rename example

* fix comment

* better comment format

* fix the noise generator seed

* add message to hit enter for image generation

* fix srand, too

* add std::vector overload to update_parameters

* improve training stability

* better naming of variables

make sure it is clear we update the generator with the discriminator's
gradient using fake samples and true labels

* fix comment: generator -> discriminator

* update leaky_relu docs to match the relu ones

* replace not with !

* add Davis' suggestions to make training more stable

* use tensor instead of resizable_tensor

* do not use dnn_trainer for discriminator
2020-03-29 11:07:38 -04:00
Adrià Arrufat d610e56c2a
add leaky_relu activation layer (#2033)
* add leaky_relu activation layer

* add inplace case for leaky_relu and test_layer

* make clear that alpha is not learned by leaky_relu

* remove branch from cuda kernel
2020-03-21 11:29:20 -04:00
Juha Reunanen 74123841bb
To avoid a GPU memory leak, allow passing thread pools to dnn_trainer from outside (#2027)
* Problem: The CUDA runtime allocates resources for each thread, and apparently those resources are not freed when the corresponding threads terminate. Therefore, each instantiation of dnn_trainer leaks a bit of GPU memory.

Solution: Add possibility to pass thread pools from outside. This way, subsequent dnn_trainer instances can use the same threads, and there's no memory leak.

* Add helpful comments
2020-03-19 07:38:43 -04:00
scott-vsi 6fc503d242
link against openblasp (#2028)
openblasp is a parallel (pthreads) build of OpenBLAS found on CentOS/Fedora
2020-03-18 21:45:51 -04:00
Adrià Arrufat 1380e6b95f
add loss multiclass log weighted (#2022)
* add loss_multiclass_log_weighted

* fix class name in loss_abstract

* add loss_multiclass_log_weighted test

* rename test function to match class name

* fix typo

* reuse the weighted label struct across weighted losses

* do not break compatibility with loss_multiclass_log_per_pixel_weighted

* actually test the loss and fix docs

* fix build with gcc 9
2020-03-18 08:33:54 -04:00
hwiesmann 9185a925ce
Integer conversions generating compiler warnings (#2024)
* Prevention of compiler warning due to usage of int instead of a size type

* Conversion of status type to long to prevent compiler warnings

* The returned number of read items from a buffer is specified in numbers of type "streamsize"

Co-authored-by: Hartwig <git@skywind.eu>
2020-03-14 19:12:04 -04:00
Facundo Galán 08aeada7d5
Replace result_of by invoke_result for C++17 and above (#2021)
Co-authored-by: Facundo Galan <fgalan@danaide.com.ar>
2020-03-13 07:53:40 -04:00
scott-vsi c8a175f569
effect -> affect (#2019) 2020-03-11 22:56:07 -04:00
Davis King 7b35d7b234 removed inappropriate assert 2020-03-10 20:42:42 -04:00
hwiesmann e7087e5957
Prevention of compiler warning (#2015)
Co-authored-by: Hartwig <git@skywind.eu>
2020-03-10 20:02:02 -04:00