Commit Graph

7862 Commits

Author SHA1 Message Date
Davis King 20a1477209 update docs 2020-09-19 07:21:52 -04:00
pfeatherstone ab346ddfa6
Extended proxy_(de)serialize objects to work with stringstream, ostringstream, istringstream and vector<char> (#2181)
* [DLIB] extended proxy objects to work with stringstream, istringstream, ostringstream and vector<char>

* [DLIB]  - use std::istream and std::ostream instead of std::istringstream, std::ostringstream and std::stringstream.
		- put back the filename member variable for better error messages

* [DLIB]  - addressed review requirement

Co-authored-by: pf <pf@pf-ubuntu-dev>
2020-09-19 07:16:21 -04:00
Adrià Arrufat fa818b9a96
use DLIB_CASSERT to avoid unused variable warning in release compilation (#2182) 2020-09-17 22:54:06 -04:00
pfeatherstone d4fe74b5a8
vectorstream updates: added seekoff and seekpos (#2179)
* [DLIB] added seekpos and seekoff functions. These are necessary for functions in the iostream base class, e.g. seekg(), to work properly. Note that in seekoff you do NOT want to check the validity of read_pos after it has been updated: dlib::vectorstream and std::iostream work together to set EOF and/or badbit. Something like seekg(10000) should not throw even if the underlying buffer has only 2 bytes; you should check whether EOF is set and possibly call clear(). We have removed seekg from dlib::vectorstream since it added confusion. Now std::iostream::seekg is called, which somewhere down the call stack calls seekpos and/or seekoff, so there is no diverging behavior between calling seekg on a dlib::vectorstream& and on a std::iostream& obtained by casting.

* [DLIB] vectorstream unit test is updated to run identical tests on dlib::vectorstream& and std::iostream&

* [DLIB] only support read pointers and delete copy and move semantics

* [DLIB] explicit tests for seekg() in different directions

* [DLIB]  - no need to delete the move constructor and move assign operator. This is implicitly done by deleting the copy constructor and copy assign operator.

* [DLIB]  - remove leftover comments. no need
		- use more idiomatic notation

Co-authored-by: pf <pf@pf-ubuntu-dev>
2020-09-16 20:37:36 -04:00
Davis King cdeb2e067c add some docs 2020-09-12 21:52:21 -04:00
pfeatherstone 12a82f6542
Macro for generating default serialisation functions (#2177)
* [DLIB] macro for generating default serialisation functions

* [DLIB]  refactoring

* [DLIB]  refactoring
2020-09-12 21:18:46 -04:00
Adrià Arrufat 9d60949a3a
Add scale_prev layer (#2171)
* Add scale_prev layer

* remove comment and fix gradient

* add test for scale_ and scale_prev_ layers
2020-09-12 07:55:24 -04:00
Adrià Arrufat 77e6255fdd
Add error message for mismatched tensor sizes in dnn_trainer (#2165) 2020-09-08 07:16:15 -04:00
Davis King 40c3e48818 Simplified more uses of layer visiting and fixed constness bug
The const bug was introduced yesterday and caused some layer visiting to
not work on const networks.
2020-09-06 10:42:56 -04:00
Adrià Arrufat 5ec60a91c4
Show how to use the new visitors with lambdas (#2162) 2020-09-06 09:27:50 -04:00
Davis King 393db2490b switch this to C++11 code 2020-09-06 08:57:44 -04:00
Davis King 5bcbe617eb make type_safe_union movable and also support holding movable types in a natural way. 2020-09-06 08:53:54 -04:00
Davis King afe19fcb8b Made the DNN layer visiting routines more convenient.
Now the user doesn't have to supply a visitor capable of visiting all
layers, but instead just the ones they are interested in.  Also added
visit_computational_layers() and visit_computational_layers_range()
since those capture a very common use case more concisely than
visit_layers().  That is, users generally want to mess with the
computational layers specifically as those are the stateful layers.
2020-09-05 18:33:04 -04:00
Davis King 7dcc7b4ebc Added call_if_valid() 2020-09-05 17:47:31 -04:00
Adrià Arrufat e7ec6b7777
Add visitor to remove bias from bn_ layer inputs (#closes 2155) (#2156)
* add visitor to remove bias from bn_ inputs (#closes 2155)

* remove unused parameter and make documentation more clear

* remove bias from bn_ layers too and use better name

* let the batch norm keep their bias, use even better name

* be more consistent with impl naming

* remove default constructor

* do not use method to prevent some errors

* add disable bias method to pertinent layers

* update dcgan example

- grammar
- print number of network parameters to be able to check bias is not allocated
- at the end, give feedback to the user about what the discriminator thinks about each generated sample

* fix fc_ logic

* add documentation

* add bias_is_disabled methods and update to_xml

* print use_bias=false when bias is disabled
2020-09-02 21:59:19 -04:00
Davis King ed22f0400a Make dnn_trainer use robust statistic to determine if the loss is exploding and if it should backtrack.
Previously we used only the non-robust version, and so would mistakenly
not catch sequences of loss increase that begin with an extremely large
value and then settle down to still-large but less extreme values.
2020-09-02 21:48:30 -04:00
Davis King 0bb6ce36d8 dnn_trainer prints the number of steps executed when printed to an ostream 2020-09-02 21:47:58 -04:00
Davis King 76cc8e3b6b Add probability_values_are_increasing() and probability_values_are_increasing_robust() 2020-09-02 21:42:44 -04:00
Davis King c14ba4847e Rename POSIX macro to DLIB_POSIX to avoid name clashes with some libraries. 2020-09-01 09:30:52 -04:00
Davis King 4b92804dc2 Use the box with bounding box regression applied to do NMS in the loss. 2020-09-01 06:58:35 -04:00
Davis King 0e721e5cae Fix bug in bounding box regression loss. 2020-08-29 09:09:54 -04:00
Adrià Arrufat c9809e067f
Add missing input/output mappings to mult_prev (#2154) 2020-08-28 23:04:24 -04:00
Davis King b401185aa5 Fix a warning and add some more error handling. 2020-08-23 22:22:40 -04:00
Adrià Arrufat dd06c1169b
loss multibinary log (#2141)
* add loss_multilabel_log

* add alias template for loss_multilabel_log

* add missing assert

* increment truth iterator

* rename loss to loss_multibinary_log

* rename loss to loss_multibinary_log

* explicitly capture dims in lambda
2020-08-23 22:15:16 -04:00
Juha Reunanen d7ca478b79
Problem: With certain batch size / device count combinations, batches were generated with size = 1, causing problems when using batch normalization. (#2152)
Solution: Divide the mini-batch more uniformly across the different devices.
2020-08-20 07:43:14 -04:00
Davis King bea99ceed0 switch to a name less likely to conflict with third party code 2020-08-19 19:48:14 -04:00
Juha Reunanen a9592b07fd
Minor typo fixes (#2150) 2020-08-19 19:38:35 -04:00
samaldana 2a870e329c
Fix warning for zero variadic macro arguments. (#2151)
When consuming dlib headers and building using gcc/clang with flags
'-Werror -Wpedantic', any inclusion involving DLIB_CASSERT triggers
a compilation error: ISO C++11 requires at least one argument for the
"..." in a variadic macro

Co-authored-by: Samuel Aldana <samuel.aldana@cognex.com>
2020-08-19 19:37:57 -04:00
pfeatherstone f3b4fc548d
Added "get_random_complex_gaussian" to dlib::rand (#2149)
* Added a function for computing a Gaussian-distributed complex number. The real version is adapted to use the complex version

* Missing header

* missed std::; I was too quick

Co-authored-by: pf <pf@pf-ubuntu-dev>
2020-08-17 19:15:53 -04:00
Davis King f55a1a51a0 fix python code index page.
The recent change to use a dlib/__init__.py file instead of the dlib.so file directly messed it up.
2020-08-13 09:00:27 -04:00
Davis King 59b44849bd fix typo, doesn't really matter, but still 2020-08-13 07:47:59 -04:00
Davis King 02e70ce3ca Record last changeset and set PATCH version to 99 2020-08-08 15:30:37 -04:00
Davis King 9117bd7843 Created release v19.21 2020-08-08 15:26:07 -04:00
Davis King 2e64bdd449 update docs 2020-08-08 15:25:53 -04:00
Davis King 2c70aad12c Use a cache to avoid calls to the cuDNN algorithm selection routines. 2020-08-07 16:24:28 -04:00
Davis King 8910445a7a fix some spelling and grammar errors 2020-08-07 15:41:42 -04:00
Davis King 4721075314 More optimization unit tests 2020-08-07 09:57:12 -04:00
Davis King a9d554a4ac minor cleanup 2020-08-05 08:13:58 -04:00
yuriio ff3023f266
Added possibility to load PNG images from a data buffer. (#2137)
* Added possibility to load PNG images from a data buffer.

* Fixed code not compiling with some versions of libpng that don't have the const specifier.

* Used FileInfo struct as a single parameter for the read_image method.
2020-08-05 08:11:46 -04:00
Davis King c90362d852 updated release notes 2020-08-02 08:20:44 -04:00
Davis King 7b564927d6 Switching to what is hopefully a better fix for the following CUDA error
error: calling a constexpr host function("log1p") from a device function("cuda_log1pexp") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.

The error only happens with some versions of CUDA.
2020-08-01 13:48:30 -04:00
Davis King f8cfe63904 Avoid unnecessarily asking cuDNN which algorithms to use, since this is slow in cuDNN 8.0 2020-08-01 13:45:38 -04:00
Davis King 6c3243f766 Cleanup cuDNN conv algorithm selection code slightly by moving it into its own function. 2020-08-01 13:33:39 -04:00
Davis King 4d18e0d0c7 oops, fixing a weird typo 2020-07-26 15:13:20 -04:00
Davis King 3400e163e8 tweaked cca test thresholds to avoid false positives 2020-07-26 12:43:21 -04:00
Davis King 943408d2d2 Allow forwarding initial function evaluations into find_max_global() 2020-07-26 12:43:21 -04:00
Davis King 5a80ca9e5f Apply --expt-relaxed-constexpr to all older versions of cuda. 2020-07-24 23:50:22 -04:00
jbfove 5650ce45a1
Fix restoration of MSVC warnings in public headers (#2135)
Previously they were restored to default values, which had the effect of negating the current settings of the calling code (whether set in the compiler options or by a previous pragma).
2020-07-22 06:07:49 -04:00
Davis King 23b9abd07a Switch cuda target architecture from sm_30 to sm_50. I.e. Maxwell instead of Kepler. 2020-07-11 21:07:36 -04:00
stoperro a2498dc47c
Additional documentation for failed dlib::layer<> use. (#2118) 2020-06-28 11:35:15 -04:00