Commit Graph

4 Commits

Author SHA1 Message Date
Adrià Arrufat 631c47c7fd
fix letterbox_image in yolo example (#2911) 2024-02-05 08:07:16 -05:00
Adrià Arrufat 19a952c3a4
extend letterbox behavior (#2899)
* extend letterbox behavior

* simplify scale logic and update docs

* oops, forgot one line in the yolo example

* make dpoint const
2023-12-07 22:22:05 -05:00
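
The "simplify scale logic" commit above refers to the core of letterboxing: one scale factor for both axes preserves the aspect ratio, and the leftover canvas is padded. A minimal sketch of that idea (the function name, return value, and gray padding color are assumptions, not dlib's actual letterbox_image interface):

```cpp
#include <algorithm>
#include <cmath>
#include <dlib/image_transforms.h>
#include <dlib/matrix.h>

// Scale img to fit inside a size x size square without distortion,
// then pad the unused area with a constant color.
double letterbox_sketch(const dlib::matrix<dlib::rgb_pixel>& img,
                        dlib::matrix<dlib::rgb_pixel>& out,
                        long size)
{
    // A single scale factor for both dimensions keeps the aspect ratio.
    const double scale = std::min(size / static_cast<double>(img.nc()),
                                  size / static_cast<double>(img.nr()));
    dlib::matrix<dlib::rgb_pixel> resized(std::lround(img.nr() * scale),
                                          std::lround(img.nc() * scale));
    dlib::resize_image(img, resized);  // dlib resizes img into resized's dimensions

    out.set_size(size, size);
    dlib::assign_all_pixels(out, dlib::rgb_pixel(114, 114, 114));  // padding color
    for (long r = 0; r < resized.nr(); ++r)
        for (long c = 0; c < resized.nc(); ++c)
            out(r, c) = resized(r, c);
    return scale;  // callers need this to map detections back to the original image
}
```
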
Adrià Arrufat adca7472df
Add support for fused convolutions (#2294)
* add helper methods to implement fused convolutions

* fix grammar

* add method to disable affine layer and update serialization

* add documentation for .disable()

* add fuse_convolutions visitor and documentation

* update docs: net is not constant

* fix xml formatting and use std::boolalpha

* fix warning and update net requirement for visitor

* fix segfault in fuse_convolutions visitor

* copy unconditionally

* make the visitor class a friend of the con_ class

* set up the biases alias tensor after enabling bias

* simplify visitor a bit

* fix comment

* set up the biases size; somehow this got lost

* copy the parameters before resizing

* remove enable_bias() method, since the visitor is now a friend

* Revert "remove enable_bias() method, since the visitor is now a friend"

This reverts commit 35b92b1631.

* update the visitor to remove the friend requirement

* improve behavior of enable_bias

* better describe the behavior of enable_bias
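
Read together, the enable_bias commits above ("copy the parameters before resizing", "set up the biases size") describe roughly the following procedure. This is a schematic sketch under assumed simplifications (a flat parameter tensor, a made-up helper name), not dlib's implementation:

```cpp
#include <algorithm>
#include <dlib/dnn.h>

// Keep the learned filter parameters, grow the parameter tensor, and
// append one zero-initialized bias per filter.
void enable_bias_sketch(dlib::resizable_tensor& params, long num_filters)
{
    const dlib::resizable_tensor filters_only = params;  // copy before resizing
    params.set_size(filters_only.size() + num_filters);  // make room for the biases
    const float* src = filters_only.host();
    float* dst = params.host();
    std::copy(src, src + filters_only.size(), dst);                   // restore filters
    std::fill(dst + filters_only.size(), dst + params.size(), 0.0f);  // zero biases
}
```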

* WIP: use cudnnConvolutionBiasActivationForward when the activation has a bias

* WIP: fix CPU compilation

* WIP: not working fused ReLU

* WIP: forgot to disable ReLU in visitor (does not change the fact that it does not work)

* WIP: more general setting of 4d tensors (still not working)

* fused convolutions seem to be working now, more testing needed

* move visitor to the bottom of the file

* fix CPU-side and code clean up

* Do not try to fuse the activation layers

Fusing the activation layers in one cuDNN call is only supported when using
the activations cuDNN itself provides (ReLU, Sigmoid, TanH, ...), which might
lead to surprising behavior. So, let's just fuse the batch norm and the
convolution into one cuDNN call using the IDENTITY activation function.
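
For reference, the fold works because a convolution z = w*x + b followed by dlib's affine layer y = gamma*z + beta (applied per output channel) is itself a convolution with rescaled parameters: w' = gamma*w and b' = gamma*b + beta. An illustration with plain per-channel vectors (hypothetical layout, not dlib's tensor code):

```cpp
#include <cstddef>
#include <vector>

// Fold a per-channel affine transform into the preceding convolution's
// filters and biases, so only one conv call (IDENTITY activation) remains.
void fold_affine_into_conv(std::vector<std::vector<float>>& filters,  // one filter per output channel
                           std::vector<float>& bias,                  // one bias per output channel
                           const std::vector<float>& gamma,
                           const std::vector<float>& beta)
{
    for (std::size_t k = 0; k < filters.size(); ++k)
    {
        for (float& w : filters[k])
            w *= gamma[k];                       // w' = gamma_k * w
        bias[k] = gamma[k] * bias[k] + beta[k];  // b' = gamma_k * b + beta_k
    }
}
```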

* Set the correct forward algorithm for the identity activation

Ref: https://docs.nvidia.com/deeplearning/cudnn/api/index.html#cudnnConvolutionBiasActivationForward
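
Concretely, the linked reference says that when the activation passed to cudnnConvolutionBiasActivationForward() is CUDNN_ACTIVATION_IDENTITY, the only supported forward algorithm is CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM, so the heuristic's choice must be overridden. A sketch (the helper name is made up):

```cpp
#include <cudnn.h>

// Force the one algorithm the identity activation supports; otherwise
// keep whatever the cuDNN heuristics picked.
cudnnConvolutionFwdAlgo_t pick_forward_algo(cudnnActivationMode_t activation,
                                            cudnnConvolutionFwdAlgo_t heuristic_choice)
{
    if (activation == CUDNN_ACTIVATION_IDENTITY)
        return CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM;
    return heuristic_choice;
}
```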

* move the affine alias template to its original position

* wip

* remove unused param in relu and simplify example (I will delete it before merge)

* simplify conv bias logic and fix deserialization issue

* fix enabling bias on convolutions

* remove test example

* fix typo

* update documentation

* update documentation

* remove ccache leftovers from CMakeLists.txt

* Re-add new line

* fix enable/disable bias on unallocated networks

* update comment to mention cudnnConvolutionBiasActivationForward

* fix typo

Co-authored-by: Davis E. King <davis@dlib.net>

* Apply documentation suggestions from code review

Co-authored-by: Davis E. King <davis@dlib.net>

* update affine docs to talk in terms of gamma and beta

* simplify tensor_conv interface

* fix tensor_conv operator() with biases

* add fuse_layers test

* add an example on how to use the fuse_layers function
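
For readers of this log, a minimal sketch of such a usage example; the toy network and input size are hypothetical, and the network's parameters must be allocated (e.g. by a forward pass or by deserializing trained weights) before fusing:

```cpp
#include <dlib/dnn.h>
#include <dlib/image_transforms.h>

// Toy network: a convolution followed by an affine layer, which is
// exactly the pattern fuse_layers collapses into a single convolution.
using net_type = dlib::loss_multiclass_log<dlib::fc<10,
    dlib::relu<dlib::affine<dlib::con<16, 3, 3, 1, 1,
    dlib::input<dlib::matrix<dlib::rgb_pixel>>>>>>>;

int main()
{
    net_type net;
    dlib::matrix<dlib::rgb_pixel> img(32, 32);
    dlib::assign_all_pixels(img, dlib::rgb_pixel(0, 0, 0));
    net(img);                // forward pass so all parameters get allocated
    dlib::fuse_layers(net);  // folds each affine into the preceding con
}
```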

* fix typo

Co-authored-by: Davis E. King <davis@dlib.net>
2021-10-11 10:48:56 -04:00
Adrià Arrufat 16500906b0
YOLO loss (#2376) 2021-07-29 20:05:54 -04:00