Commit Graph

11 Commits

Author SHA1 Message Date
Adria Arrufat e5b2cedff8
Improve the data augmentation in the SSL example (#2684)
I was using the data augmentation recommended for the ImageNet dataset, which is not well suited
to CIFAR-10. After switching to augmentation better suited to CIFAR-10, the test accuracy
increased by 1 point.
2022-11-09 22:07:00 -05:00
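For context, CIFAR-10 images are 32x32, so the usual recipe is a small pad-and-crop plus a random horizontal flip rather than ImageNet-style large random resized crops. Below is a minimal sketch of that kind of augmentation in dlib; it is not the example's exact code, and the 4-pixel shift is an assumption:

```cpp
#include <dlib/image_transforms.h>
#include <dlib/rand.h>

// Pad-and-crop plus random horizontal flip, the usual CIFAR-10 recipe.
dlib::matrix<dlib::rgb_pixel> augment(
    const dlib::matrix<dlib::rgb_pixel>& image,
    dlib::rand& rnd
)
{
    // Shift the 32x32 crop window by up to 4 pixels in each direction,
    // which is equivalent to padding by 4 and cropping at random.
    const long dx = rnd.get_integer_in_range(-4, 5);
    const long dy = rnd.get_integer_in_range(-4, 5);
    const auto rect = dlib::translate_rect(dlib::get_rect(image), dx, dy);
    dlib::matrix<dlib::rgb_pixel> crop;
    dlib::extract_image_chip(image, dlib::chip_details(rect, dlib::chip_dims(32, 32)), crop);

    // Random horizontal flip with probability 0.5.
    if (rnd.get_random_double() < 0.5)
        dlib::flip_image_left_right(crop);
    return crop;
}
```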
Adria Arrufat bdb1089ae6
Fix computation of the Barlow Twins loss gradient (#2680) 2022-11-02 07:55:58 -04:00
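For reference, the Barlow Twins loss whose gradient this commit fixes is, following Zbontar et al. (2021):

```latex
\mathcal{L}_{BT} = \sum_i \left(1 - \mathcal{C}_{ii}\right)^2
\;+\; \lambda \sum_i \sum_{j \ne i} \mathcal{C}_{ij}^2,
\qquad
\mathcal{C}_{ij} = \frac{\sum_b z^A_{b,i}\, z^B_{b,j}}
{\sqrt{\sum_b \left(z^A_{b,i}\right)^2}\,\sqrt{\sum_b \left(z^B_{b,j}\right)^2}}
```

where z^A and z^B are the embeddings of the two augmented views of a batch, C is the empirical cross-correlation matrix, and lambda trades off the invariance term against the redundancy-reduction term.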
Adria Arrufat 7f06f6e185
Fix empirical cross-correlation computation in the SSL example (#2679)
I was using the normalized features za as both matrices, instead of za and zb.
I noticed this because the empirical cross-correlation matrix was symmetrical,
which it is not supposed to be. This does not affect training, as the matrix is
computed properly in the loss; only the example's displayed matrix was wrong.
2022-10-31 19:52:24 -04:00
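In dlib matrix notation the fix boils down to something like this sketch, assuming za and zb are the batch-normalized feature matrices (one row per sample) of the two augmented views:

```cpp
// Buggy: correlates za with itself, which is symmetric by construction.
// const dlib::matrix<float> ecc = dlib::trans(za) * za / za.nr();

// Fixed: the empirical cross-correlation between the two views.
const dlib::matrix<float> ecc = dlib::trans(za) * zb / za.nr();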
Adrià Arrufat bf273a8c2e
Add multiclass SVM trainer to svm/auto.h (#2642)
* Add multiclass SVM trainer to svm/auto.h

* Use a matrix<double> and add an overload for matrix<float>

* Replace typedef with using and use normalizer from normalized_function

* Remove extra ;

* use better names for decision function variables

* fix comments format and grammar

* remove unneeded includes

* Update dlib/svm/auto_abstract.h

* Update the assert to use 3 samples (as there is 3-fold CV)

* Remove unneeded captures in lambda

* Update dlib/svm/auto_abstract.h

* Update dlib/svm/auto_abstract.h

Co-authored-by: Davis E. King <davis685@gmail.com>
2022-08-17 19:29:04 -04:00
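The exact signature of the new trainer lives in dlib/svm/auto_abstract.h; the sketch below is hypothetical usage, with the function name, the chrono runtime argument, and the label type all assumed for illustration:

```cpp
#include <dlib/svm.h>
#include <chrono>
#include <iostream>
#include <vector>

int main()
{
    using sample_type = dlib::matrix<double, 0, 1>;
    std::vector<sample_type> samples;   // feature vectors
    std::vector<unsigned long> labels;  // one class label per sample
    // ... fill samples and labels; each class needs at least 3 samples,
    // since the trainer cross-validates with 3 folds (hence the assert
    // mentioned above).

    // Assumed name: the trainer picks its own parameters and returns a
    // decision function that carries its own sample normalizer.
    const auto df = dlib::auto_train_multiclass_svm_linear_classifier(
        samples, labels, std::chrono::minutes(5));

    std::cout << "predicted label: " << df(samples[0]) << '\n';
}
```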
Adrià Arrufat 83ec371f12
Use only a fraction of labels for the multiclass SVM in SSL example (#2641)
* Use only a fraction of labels for the multiclass SVM in SSL example

This change makes the self-supervised example closer to reality:
usually, only a fraction of the dataset is labeled, but we can harness
the whole dataset by using a self-supervised method and then train the
classifier using the fraction of labeled data.

Using 10% of the labels results in a test accuracy of 87%, compared to
the 89% we got when training the multiclass SVM with all labels.

I just added an option to change the fraction of labeled data, so that
users can experiment with it.

* Update examples/dnn_self_supervised_learning_ex.cpp

Co-authored-by: Davis E. King <davis685@gmail.com>
2022-08-14 08:27:49 -04:00
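A minimal sketch of what "using a fraction of the labels" amounts to, with illustrative names (not the example's exact code): the backbone still sees every image during self-supervised training; only the SVM stage gets the reduced labeled set.

```cpp
#include <dlib/rand.h>
#include <vector>

// Keep roughly `fraction` of the labeled samples for the supervised stage.
template <typename sample_type>
void subsample_labeled_data(
    std::vector<sample_type>& samples,
    std::vector<unsigned long>& labels,
    const double fraction,  // e.g. 0.1 to keep 10% of the labels
    dlib::rand& rnd
)
{
    std::vector<sample_type> kept_samples;
    std::vector<unsigned long> kept_labels;
    for (std::size_t i = 0; i < samples.size(); ++i)
    {
        if (rnd.get_random_double() < fraction)
        {
            kept_samples.push_back(samples[i]);
            kept_labels.push_back(labels[i]);
        }
    }
    samples.swap(kept_samples);
    labels.swap(kept_labels);
}
```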
Adrià Arrufat 69665eb0f7
Modernize rounding and cast statements (#2633)
* Use add_compile_definitions, enable -Wpedantic and use colors

* Use lround in rectangle and drectangle

* Use round in bigint

* Use round in canvas_drawing

* Modernize image_transforms

* Modernize image_pyramid

* Fix error in image_pyramid

* Modernize matrix

* Fix error in image_pyramid again

* Modernize fhog test

* Modernize image_keypoint/surf

* Remove extra ;
2022-08-04 18:36:12 -04:00
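The flavor of these modernizations, illustrated on the rounding case (a generic sketch, not a specific dlib diff):

```cpp
#include <cmath>

long to_long(const double x)
{
    // Before: C-style cast with manual rounding, which is wrong for
    // negative x: (long)(-2.3 + 0.5) truncates -1.8 to -1, not -2.
    // return (long)(x + 0.5);

    // After: correct for both signs, and states the intent.
    return std::lround(x);
}
```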
Adrià Arrufat ad06471a15
Fix typo in the self-supervised learning example (#2623) 2022-07-13 18:54:10 -04:00
Adrià Arrufat 50b78da53a
Fix Barlow Twins loss gradient (#2518)
* Fix Barlow Twins loss gradient

* Update reference test accuracy after fix

* Round the empirical cross-correlation matrix

Just a tiny modification that allows the values to actually reach 255 (perfect white).
2022-02-21 08:33:21 -05:00
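The "tiny modification" is the difference between truncating and rounding when mapping a correlation value to an 8-bit pixel; a sketch, assuming c is a matrix entry scaled to [0, 1]:

```cpp
#include <cmath>

// Before: truncation, so c = 0.999 maps to 254 and never reaches white.
// const unsigned char p = (unsigned char)(c * 255);

// After: rounding, so values near 1 actually reach 255 (perfect white).
const unsigned char p = (unsigned char)std::lround(c * 255);
```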
Adrià Arrufat e1ac0b43e4
normalize samples for SVM classifier (#2460) 2021-11-17 08:14:39 -05:00
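A minimal sketch of the change, using dlib's vector_normalizer to standardize the extracted features before training the SVM (variable names are illustrative):

```cpp
#include <dlib/svm.h>
#include <vector>

using sample_type = dlib::matrix<float, 0, 1>;
std::vector<sample_type> samples;   // features extracted by the backbone
// ... fill samples ...

dlib::vector_normalizer<sample_type> normalizer;
normalizer.train(samples);          // learn per-dimension mean and stddev
for (auto& sample : samples)
    sample = normalizer(sample);    // zero mean, unit variance
```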
Adrià Arrufat 5091e9c880
Replace sgd-based fc classifier with svm_multiclass_linear_trainer (#2452)
* Replace fc classifier with svm_multiclass_linear_trainer

* Mention find_max_global()

Co-authored-by: Davis E. King <davis@dlib.net>

* Use double instead of float for extracted features

Co-authored-by: Davis E. King <davis@dlib.net>

* fix compilation with double features

* Revert "fix compilation with double features"

This reverts commit 76ebab4b91.

* Revert "Use double instead of float for extracted features"

This reverts commit 9a50809ebf.

* Find best C using global optimization

Co-authored-by: Davis E. King <davis@dlib.net>
2021-11-06 18:33:31 -04:00
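A sketch of finding the best C with find_max_global(), assuming sample_type, samples, and labels are set up as in the sketches above; the search range and call budget are illustrative:

```cpp
#include <dlib/global_optimization.h>
#include <dlib/svm.h>

// Score a candidate C by 3-fold cross-validated mean accuracy.
const auto cross_validation_score = [&](const double c)
{
    dlib::svm_multiclass_linear_trainer<dlib::linear_kernel<sample_type>,
                                        unsigned long> trainer;
    trainer.set_c(c);
    const dlib::matrix<double> cm =
        dlib::cross_validate_multiclass_trainer(trainer, samples, labels, 3);
    // The confusion matrix's diagonal holds the correct predictions.
    return dlib::sum(dlib::diag(cm)) / dlib::sum(cm);
};

const auto result = dlib::find_max_global(cross_validation_score,
                                          1e-3,  // lower bound on C
                                          1e3,   // upper bound on C
                                          dlib::max_function_calls(30));
const double best_c = result.x(0);
```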
Adrià Arrufat 2e8bac1915
Add dnn self supervised learning example (#2434)
* wip: loss goes down when training without a dnn_trainer

if I use a dnn_trainer, it segfaults (also with bigger batch sizes...)

* remove commented code

* fix gradient computation (hopefully)

* fix loss computation

* fix crash in input_rgb_image_pair::to_tensor

* fix alias tensor offset

* refactor loss and input layers and complete the example

* add more data augmentation

* add documentation

* add documentation

* small fix in the gradient computation and reuse terms

* fix warning in comment

* use tensor_tools instead of matrix to compute the gradients

* complete the example program

* add support for multi-gpu

* Update dlib/dnn/input_abstract.h

* Update dlib/dnn/input_abstract.h

* Update dlib/dnn/loss_abstract.h

* Update examples/dnn_self_supervised_learning_ex.cpp

* Update examples/dnn_self_supervised_learning_ex.cpp

* Update examples/dnn_self_supervised_learning_ex.cpp

* Update examples/dnn_self_supervised_learning_ex.cpp

* [TYPE_SAFE_UNION] upgrade (#2443)

* [TYPE_SAFE_UNION] upgrade

* MSVC doesn't like keyword not

* MSVC doesn't like keyword and

* added tests for emplace(), copy semantics, move semantics, swap, and overloaded apply_to_contents with non-void return types

* - didn't need is_void anymore
- added result_of_t
- didn't really need ostream_helper or istream_helper
- split apply_to_contents into apply_to_contents (return void) and visit (return anything so long as visitor is publicly accessible)

* - updated abstract file

* - added get_type_t
- removed duplicate deserialize_helper
- don't use std::decay_t, that's C++14

* - removed stray whitespace
- don't need a return statement when calling apply_to_contents_impl()
- use unchecked_get() whenever possible to minimise explicit pointer casting; let's keep that to a minimum

* - added type_safe_union_size
- added type_safe_union_size_v if C++14 is available
- added tests for above

* - test type_safe_union_size_v

* testing nested unions with visitors.

* re-added comment

* added index() in abstract file

* - refactored reset() to clear()
- added comment about clear() in abstract file
- in deserialize(), only reset the object if necessary

* - removed unnecessary comment about exceptions
- removed unnecessary // -------------
- struct is_valid is not mentioned in the abstract. Instead of requiring T to be a valid type, it is ensured!
- get_type and get_type_t are private. Client code shouldn't need them.
- shuffled some functions around
- type_safe_union_size and type_safe_union_size_v are removed. not needed
- reset() -> clear()
- bug fix in deserialize(): index counts from 1, not 0
- improved the abstract file

* refactored index() to get_current_type_id() as per suggestion

* maybe slightly improved docs

* - HURRAY, we don't need std::result_of or std::invoke_result for visit() to work. Just privately define your own type trait, in this case called return_type and return_type_t. It works!
- apply_to_contents() now always calls visit()

* example with private visitor using friendship with non-void return types.

* Fix up contracts

It can't be a postcondition that T is a valid type, since the choice of T is up to the caller; it's not something these functions decide. Making it a precondition.

* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h

* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h

* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h

* - added more tests for copy constructors/assignments, move constructors/assignments, and converting constructors/assignments
- helper_copy -> helper_forward
- added validate_type<T> in a couple of places

* - helper_move only takes non-const lvalue references, so we are not using std::move with universal references!
- use enable_if<is_valid<T>> in favor of validate_type<T>()

* - use enable_if<is_valid<T>> in favor of validate_type<T>()

* - added is_valid_check<>. This wraps enable_if<is_valid<T>,bool> and makes the use of SFINAE more robust

Co-authored-by: pfeatherstone <peter@me>
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>

* Just minor cleanup of docs and renamed some stuff, tweaked formatting.

* fix spelling error

* fix most vexing parse error

Co-authored-by: Davis E. King <davis@dlib.net>
Co-authored-by: pfeatherstone <45853521+pfeatherstone@users.noreply.github.com>
Co-authored-by: pfeatherstone <peter@me>
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>
2021-10-29 22:26:38 -04:00
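A small usage sketch of the type_safe_union interface as refactored above: get_current_type_id() counting from 1, and a generic visitor via apply_to_contents(); visit() additionally supports non-void return types (see type_safe_union_kernel_abstract.h for its exact spelling).

```cpp
#include <dlib/type_safe_union.h>
#include <iostream>
#include <string>

int main()
{
    dlib::type_safe_union<int, std::string> u;
    u = std::string("hello");

    // Type ids count from 1 in declaration order: int -> 1, std::string -> 2.
    std::cout << u.get_current_type_id() << '\n';  // prints 2

    // Apply a generic visitor to whatever the union currently holds.
    u.apply_to_contents([](const auto& value) {
        std::cout << value << '\n';                // prints hello
    });
}
```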