* add cuda test for loss_binary_log_per_pixel and some needed refactoring
* add cuda test for loss_multiclass_log_per_pixel
* forgot to add cpu version in loss
* remove a line I added by mistake
* fix typos
* declare label_to_ignore as static
* use tensor_index function instead of index method
* test cuda and cpu gradients values
* use DLIB_TEST instead of DLIB_CASSERT
* add cuda implementation for loss_multiclass_log_per_pixel_weighted
* add test for cuda and cpu implementations
* fix comment
* move weighted label to its own file
* Update path in doc
Co-authored-by: Davis E. King <davis685@gmail.com>
* [DLIB] extended proxy objects to work with stringstream, istringstream, ostringstream and vector<char>
* [DLIB] - use std::istream and std::ostream instead of std::istringstream, std::ostringstream and std::stringstream.
- put back the filename member variable for better error messages
* [DLIB] - review requirement
Co-authored-by: pf <pf@pf-ubuntu-dev>
* [DLIB] added seekpos and seekoff functions. These are necessary for functions in the std::iostream base class, e.g. seekg, to work properly. Note that in seekoff you do NOT want to check the validity of read_pos after it has been updated: dlib::vectorstream and std::iostream work together to set EOF and/or badbit. Something like seekg(10000) should not throw even if the underlying buffer has only 2 bytes; you should check whether EOF is set and possibly call clear(). We have removed seekg from dlib::vectorstream as it added confusion. Now std::iostream::seekg is called, which somewhere down the call stack calls seekpos and/or seekoff. So there is no diverging behavior between calling seekg on a dlib::vectorstream& and on a std::iostream& obtained via a cast.
* [DLIB] vectorstream unit test is updated to run identical tests on dlib::vectorstream& and std::iostream&
* [DLIB] only support read pointers and delete copy and move semantics
* [DLIB] explicit tests for seekg() in different directions
* [DLIB] - no need to delete the move constructor and move assign operator. This is implicitly done by deleting the copy constructor and copy assign operator.
* [DLIB] - remove leftover comments. no need
- use more idiomatic notation
Co-authored-by: pf <pf@pf-ubuntu-dev>
Now the user doesn't have to supply a visitor capable of visiting all
layers, but instead just the ones they are interested in. Also added
visit_computational_layers() and visit_computational_layers_range()
since those capture a very common use case more concisely than
visit_layers(). That is, users generally want to mess with the
computational layers specifically as those are the stateful layers.
* add visitor to remove bias from bn_ inputs (closes #2155)
* remove unused parameter and make documentation more clear
* remove bias from bn_ layers too and use better name
* let the batch norm layers keep their bias, use an even better name
* be more consistent with impl naming
* remove default constructor
* do not use method to prevent some errors
* add disable bias method to pertinent layers
* update dcgan example
- grammar
- print number of network parameters to be able to check bias is not allocated
- at the end, give feedback to the user about what the discriminator thinks about each generated sample
* fix fc_ logic
* add documentation
* add bias_is_disabled methods and update to_xml
* print use_bias=false when bias is disabled
Previously we used only the non-robust version, and so would mistakenly
fail to catch sequences of loss increase that begin with an extremely large
value and then settle down to still large, but less extreme, values.
* add loss_multilabel_log
* add alias template for loss_multilabel_log
* add missing assert
* increment truth iterator
* rename loss to loss_multibinary_log
* rename loss to loss_multibinary_log
* explicitly capture dims in lambda
When consuming dlib headers and building using gcc/clang with flags
'-Werror -Wpedantic', any inclusion involving DLIB_CASSERT triggers
a compilation error: ISO C++11 requires at least one argument for the
"..." in a variadic macro
Co-authored-by: Samuel Aldana <samuel.aldana@cognex.com>
* Added a function for computing a Gaussian-distributed complex number. The real version is adapted to use the complex version.
* Missing header
* missed std:: I was too quick
Co-authored-by: pf <pf@pf-ubuntu-dev>
* Added possibility to load PNG images from a data buffer.
* Fixed code not compiling with some versions of libpng that don't have the const specifier.
* Used FileInfo struct as a single parameter for the read_image method.
error: calling a constexpr host function("log1p") from a device function("cuda_log1pexp") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.
The error only happens with some versions of CUDA.