Commit Graph

8160 Commits

Author SHA1 Message Date
Davis King 7750a6348f produce cleaner python source distribution tarballs 2021-08-14 10:03:38 -04:00
Adrià Arrufat fe0957303f
Add progress information to console_progress_indicator (#2411)
* add progress information (current/total and percent)

* print a new line instead of overwriting with spaces

* check if target_val is an integer with std::trunc
2021-08-06 07:32:53 -04:00
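A minimal usage sketch (not part of the commit) of dlib's console_progress_indicator; the step count and loop body are illustrative, and the current/total and percent output is what #2411 adds.

```cpp
#include <dlib/console_progress_indicator.h>

int main()
{
    dlib::console_progress_indicator progress(1000);  // target value: 1000 steps
    for (int i = 0; i < 1000; ++i)
    {
        // ... do one unit of work ...
        progress.print_status(i+1);  // prints time remaining, and now current/total and percent
    }
}
```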
Adrià Arrufat 74653b4f26
Add function to compute string dimensions in pixels (#2408)
* add function to compute string dimensions in pixels

* use custom struct as a return value, remove first and last params

* Update dlib/image_transforms/draw_abstract.h

Co-authored-by: Davis E. King <davis@dlib.net>
2021-08-05 19:24:12 -04:00
Adrià Arrufat cd915b037d update pnglibconf.h 2021-08-05 19:21:13 -04:00
Adrià Arrufat f019b7adcf Minor changes to avoid conflicts and warnings in visual studio. 2021-08-05 19:21:13 -04:00
Davis King ca3a0fdd5e normalized line endings so visual studio won't complain. 2021-08-05 19:21:13 -04:00
Davis King fdf6902ade Another minor thing to avoid warnings from visual studio. 2021-08-05 19:21:13 -04:00
Davis King 04816ec0fb Added missing #include (needed only to avoid gcc warnings) 2021-08-05 19:21:13 -04:00
Adrià Arrufat 11101b6f4b update libpng to version 1.6.37 2021-08-05 19:21:13 -04:00
Adrià Arrufat 23e506323a update zlib to version 1.2.11 2021-08-05 19:21:13 -04:00
Adrià Arrufat bec25d8247
Fix running gradient crashing sometimes (#2401) 2021-08-04 06:59:42 -04:00
Adrià Arrufat 16500906b0
YOLO loss (#2376) 2021-07-29 20:05:54 -04:00
Adrià Arrufat 951fdd0092
return the projective transform in extract_image_4points (#2395) 2021-07-26 20:46:36 -04:00
Adrià Arrufat b850f0e524
Add LayerNorm documentation (#2393) 2021-07-22 08:00:55 -04:00
Davis King e64ea42f6f remove dead code 2021-07-15 22:29:27 -04:00
frostbane 7d8c6a1141
Fix cannot compile ISO-only code (#579) (#2384)
* Fix cannot compile ISO-only code (#579)

also fixes (#1742)

* Remove GUI dependency from fonts (#2273)
2021-06-30 06:43:43 -04:00
Adrià Arrufat 973de8ac73
Fix disable_duplicative_biases when the input is a skip layer (#2367)
* Fix disable_duplicative_biases when the input is a skip layer

* fix template parameters
2021-05-12 07:05:44 -04:00
Adrià Arrufat 4a51017c2e
Make Travis read the CXXFLAGS environment variable (#2366)
* try to make sure Travis uses C++17

* fix unbound variable

* Update dlib/travis/build-and-test.sh

Co-authored-by: Davis E. King <davis@dlib.net>
2021-05-11 20:08:49 -04:00
Adrià Arrufat b99bec580b
Fix serialize variant with C++17 (#2365)
* Fix serialize variant with C++17

* fix order of parameters
2021-05-11 08:00:02 -04:00
pfeatherstone 9697fa5de2
[SERIALIZATION] support for std::optional (#2364)
* added support for std::optional if using C++17

* oops, bug fix + check if item already holds a type

* oops, another bug fix

* remove warnings about unused parameters

Co-authored-by: pf <pf@me>
2021-05-11 07:56:34 -04:00
pfeatherstone 11212a94b4
[SERIALIZATION] added support for std::variant (#2362)
* [SERIALIZATION] added support for std::variant

* [SERIALIZATION] bug fix + added tests

* support immutable types

* put an immutable type in std::variant

Co-authored-by: pf <pf@me>
2021-05-10 09:04:29 -04:00
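A hedged sketch of the serialization support added by #2362 and #2364 (requires C++17); the concrete types and values below are illustrative only.

```cpp
#include <dlib/serialize.h>
#include <optional>
#include <sstream>
#include <string>
#include <variant>

int main()
{
    std::variant<int, std::string> v = std::string("hello");
    std::optional<double> o = 3.14;

    // Write both objects to a stream...
    std::ostringstream out;
    dlib::serialize(v, out);
    dlib::serialize(o, out);

    // ...and read them back, recovering the held alternative / engaged state.
    std::istringstream in(out.str());
    std::variant<int, std::string> v2;
    std::optional<double> o2;
    dlib::deserialize(v2, in);
    dlib::deserialize(o2, in);
}
```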
Davis King a54507d81b suppress spurious warning 2021-05-01 17:32:53 -04:00
Davis King 273d59435f fix comment formatting 2021-05-01 17:04:59 -04:00
Davis King cd17f324eb fix warnings about possible use of uninitialized values 2021-05-01 17:04:36 -04:00
Davis King 1de47514bd Make input_layer() work with networks that contain repeat layers.
Do this by just making all layers have a .input_layer() method, which in
that context can be implemented in a simple manner.
2021-05-01 14:46:47 -04:00
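A hedged sketch of what this change enables: calling input_layer() on a network whose stack includes a repeat layer. The layer stack below is illustrative only.

```cpp
#include <dlib/dnn.h>

template <typename SUBNET> using block = dlib::relu<dlib::fc<32, SUBNET>>;

using net_type = dlib::loss_multiclass_log<
                     dlib::fc<10,
                     dlib::repeat<3, block,          // repeat the block three times
                     dlib::input<dlib::matrix<float>>>>>;

int main()
{
    net_type net;
    auto& in = dlib::input_layer(net);  // reference to the network's input layer
    (void)in;
}
```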
Davis King ded68b9af7 Clean up gcc version checking code a little.
Also fix this error from cmake 3.5.1:

```
CMake Error at CMakeLists.txt:62 (if):
  if given arguments:

    "CMAKE_COMPILER_IS_GNUCXX" "AND" "CMAKE_CXX_COMPILER_VERSION" "VERSION_LESS_EQUAL" "4.8.5"

  Unknown arguments specified
```
2021-04-28 08:05:22 -04:00
Adrià Arrufat 8e9755ab0f
do not attempt to build with gcc 4.8.5 or older (#2357)
* do not attempt to build with gcc 4.8.5 or older

* add comment
2021-04-27 07:03:32 -04:00
Davis King 9b502d29a4 Merge branch 'pfeatherstone-allocator_traits' 2021-04-24 10:16:44 -04:00
pfeatherstone a051a65420 std_vector.h : making traits compatible with C++20 2021-04-24 10:16:26 -04:00
pfeatherstone d6d1a9e879
[TYPE_SAFE_UNION] use std::aligned_union instead of stack_based_memory_block (#2349)
* [TYPE_SAFE_UNION] use std::aligned_union instead of stack_based_memory_block. std::aligned_union was designed specifically for this kind of thing, and we are better off trusting the standard library to decide what the correct storage type and alignment should be

* [TYPE_SAFE_UNION] as per Davis' suggestion, std::aligned_union can take a Len parameter of 0. Also, the contents of validate_type() have been bugging me for ages, so I created is_any, which is based on std::is_same. I've also replaced is_same_type with std::is_same

Co-authored-by: Peter Featherstone <peter@grampus-server.com>
2021-04-21 21:57:35 -04:00
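An illustrative sketch of the two ideas in this commit (the names mirror the commit message but may not match dlib's exact code):

```cpp
#include <string>
#include <type_traits>

// is_any<T, Types...> is true when T is one of Types..., built on std::is_same.
template <typename T, typename... Types>
struct is_any : std::false_type {};

template <typename T, typename First, typename... Rest>
struct is_any<T, First, Rest...>
    : std::integral_constant<bool, std::is_same<T, First>::value || is_any<T, Rest...>::value> {};

static_assert(is_any<int, float, int, char>::value, "int is one of the listed types");

// std::aligned_union<0, Types...> yields storage large and aligned enough for any of Types...,
// which is what the type_safe_union uses instead of a hand-rolled stack_based_memory_block.
using storage_type = std::aligned_union<0, int, double, std::string>::type;
```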
chokomancarr 8d4df7c0b3
remove a unicode character (#2347) 2021-04-14 06:51:49 -04:00
Davis King 269a3ed1e6 fix incorrect docs about what gradient is computed 2021-04-13 23:00:44 -04:00
Adrià Arrufat 92106718bf
Make ELU an inplace layer and fix Clipped ReLU implementation (#2346)
* Make ELU an inplace layer

* Fix CUDA implementation of clipped_relu and update tests
2021-04-13 22:58:30 -04:00
Adrià Arrufat 7f53d7feb6
Make clipped-relu inplace and fix docs for elu (#2345) 2021-04-12 21:49:49 -04:00
Adrià Arrufat 1b7c7a6411
Add Clipped ReLU and ELU activations (#2285)
* wip: add apis for clipped_relu and elu, and layer implementation for clipped_relu

* add tensor_tools documentation

* add cpu implementations for new activations

* add elu layer

* use upperbound and lowerbound for clipped_relu

* fix clipped_relu gradient due to wrong variable naming

* fix elu_gradient due to wrong variable naming

* fix elu_gradient documentation

* add documentation

* WIP: add test_layer cases for clipped_relu and elu

For some reason that I can't see, ELU is failing...

* add clipped_relu and elu tests... cuda elu layer still does not work

* fix spacing

* add custom cuda implementation for elu_gradient (this one works)

* Revert "add custom cuda implementation for elu_gradient (this one works)"

This reverts commit 446dd80396.

* Revert "Revert "add custom cuda implementation for elu_gradient (this one works)""

This reverts commit 0b615f5008.

* add comment about custom elu gradient implementation

* add gradient tests, restore cudnn elu gradient

* re add custom elu gradient implementation

* update docs

* use own cuda implementation for clipped_relu and elu

Co-authored-by: Davis E. King <davis@dlib.net>
2021-04-12 07:59:06 -04:00
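Plain scalar sketches of the two activations added here (not dlib's actual CPU/CUDA kernels); the default bounds and alpha are illustrative.

```cpp
#include <algorithm>
#include <cmath>

// Clipped ReLU: clamp the input to [lower, upper]; the layer exposes these as
// lowerbound/upperbound parameters per the commit notes.
inline float clipped_relu(float x, float lower = 0.f, float upper = 6.f)
{
    return std::min(std::max(x, lower), upper);
}

// ELU: identity for positive inputs, alpha*(exp(x)-1) otherwise.
inline float elu(float x, float alpha = 1.f)
{
    return x > 0 ? x : alpha*std::expm1(x);
}
```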
Adrià Arrufat 0ffe9c4c40
Fix input/output mappings with repeat layers (#2337)
* Fix input/output mappings with repeat layers

* add test for input/output tensor mappers

* fix output to input order
2021-04-04 13:31:08 -04:00
Adrià Arrufat a4713b591f
Add letterbox image (#2335)
* Add letterbox image

* use && instead of and

* make function adhere to the generic image interface

* avoid extra copy

* add some overloads and a simple test

* add documentation

* use zero_border_pixels and remove superfluous temporary image

* allow different input and output images and update docs

* remove empty line

* be more explicit about output image size
2021-04-04 13:27:32 -04:00
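A sketch of the letterboxing arithmetic only (the struct and function names here are hypothetical; see dlib's image_transforms for the real letterbox_image interface): scale the source to fit a square while preserving aspect ratio, center it, and pad the border with zeros.

```cpp
#include <algorithm>

struct letterbox_geometry { long new_width, new_height, offset_x, offset_y; };

inline letterbox_geometry compute_letterbox(long src_width, long src_height, long dest_size)
{
    // Pick the scale that makes the image fit inside dest_size x dest_size without cropping.
    const double scale = std::min(dest_size/(double)src_width, dest_size/(double)src_height);
    const long new_w = static_cast<long>(src_width*scale);
    const long new_h = static_cast<long>(src_height*scale);
    // Center the resized image; the remaining border is zero-padded (zero_border_pixels in dlib).
    return { new_w, new_h, (dest_size - new_w)/2, (dest_size - new_h)/2 };
}
```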
Davis King 01e6fd51f9 Record last changeset and set PATCH version to 99 2021-03-28 09:22:07 -04:00
Davis King 70ea028f12 Created release v19.22 2021-03-28 09:17:49 -04:00
Davis King f152a78a56 updated docs 2021-03-28 09:17:30 -04:00
Adrià Arrufat a44ddd7452
Add matrix pointwise_pow (#2329) 2021-03-22 07:37:54 -04:00
Adrià Arrufat 092fa3037f
Add softmax function for matrix type (#2320)
* Add softmax function for matrix type

* make softmax inherit from basic_op_m

* fix comment

* add test for matrix softmax

* remove include

* take inspiration from op_normalize

* use multiplication instead of division

* fix typo in documentation
2021-03-07 22:59:53 -05:00
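A plain sketch of the computation this commit adds for dlib::matrix (not dlib's actual expression-template code): exponentiate and normalize so the entries sum to 1, using one division followed by multiplications, as the last bullet describes.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<double> softmax(const std::vector<double>& x)
{
    std::vector<double> out(x.size());
    double sum = 0;
    for (std::size_t i = 0; i < x.size(); ++i)
        sum += out[i] = std::exp(x[i]);
    const double inv_sum = 1.0/sum;  // one division...
    for (auto& v : out)
        v *= inv_sum;                // ...then a multiplication per element
    return out;
}
```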
Davis King 3162f93c5d Revert "Add cmake back as a pip dependency."
This reverts commit 8b9d04390c.

Reverting this because the cmake pip package is still busted.  I've been
getting messages from many people about how it's breaking their
systems/installs.
2021-02-20 07:22:58 -05:00
Davis King 0697527acc fix invalid assert 2021-02-16 20:31:55 -05:00
Davis King 8b9d04390c Add cmake back as a pip dependency.
This dependency was explicitly removed two years ago because pip was
installing a broken cmake on some systems.  I'm adding the dependency
back in the hope that the pip copy of cmake has been fixed on all
systems by this point.
2021-02-15 19:57:57 -05:00
pfeatherstone 1b58bdc205
[SERIALIZATION] updated _abstract file (#2306)
Co-authored-by: pf <pf@pf-ubuntu-dev>
2021-02-12 23:42:47 -05:00
pfeatherstone 479b69e688
Serialization to and from vector<int8_t> and vector<uint8_t> (#2301)
* [SERIALIZATION]	- vectorstream can now be used with vector<int8_t> and vector<uint8_t>

* [SERIALIZATION]	- update proxy_serialize and proxy_deserialize to work with vector<int8_t> and vector<uint8_t>

* [SERIALIZATION]	- updated vectorstream tests

* [SERIALIZATION]	- updated serialize tests. check you can go to and from any of vector<char>, vector<int8_t> and vector<uint8_t>

* [SERIALIZATION]	- updated matrix tests. check you can go to and from any of vector<char>, vector<int8_t> and vector<uint8_t>

* [SERIALIZATION]	- updated dnn tests. check you can go to and from any of vector<char>, vector<int8_t> and vector<uint8_t>

* [SERIALIZATION] improved and possibly safer

* [SERIALIZATION] use placement new. Best of all worlds, I think: we have the least object overhead, but the code looks a tad uglier. Oh well, the user doesn't have to care

* [SERIALIZATION] I hope this is easier on the eyes.

Co-authored-by: pf <pf@pf-ubuntu-dev>
2021-02-11 22:13:05 -05:00
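A hedged sketch of the feature added by #2301, assuming the post-commit overloads: vectorstream (and the serialize proxies) accept std::vector<int8_t> and std::vector<uint8_t> in addition to std::vector<char>.

```cpp
#include <dlib/serialize.h>
#include <dlib/vectorstream.h>
#include <cstdint>
#include <string>
#include <vector>

int main()
{
    std::vector<uint8_t> buffer;
    dlib::vectorstream stream(buffer);   // stream backed by the byte vector

    std::string message = "hello";
    dlib::serialize(message, stream);    // bytes land in buffer

    std::string restored;
    dlib::deserialize(restored, stream); // read them back from the start of the buffer
}
```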
Adrià Arrufat 04a3534af1
fix set_learning_rate_multipliers_range not working (#2304)
* fix set_learning_rate_multipliers not working

* add tests for set_learning_rate_multipliers
2021-02-10 21:55:54 -05:00
Adrià Arrufat 42e6ace845
Minor fix in the network size format (#2303)
Since we are dividing by 1024, the unit should be MiB instead of MB.
I also added a space between the number and the unit.
2021-02-09 08:07:24 -05:00
Davis King 9f6aefc0db Add missing .get_final_data_gradient() for repeat layer 2021-01-28 08:35:17 -05:00