Commit Graph

7049 Commits

Author SHA1 Message Date
Davis King e7e5d23802 Fixed and clarified spec 2017-11-23 07:59:10 -05:00
Davis King 66a5a9c407 merged 2017-11-22 13:13:57 -05:00
Davis King 9a8f312127 Made the loss dumping between learning rate changes a little more relaxed. In
particular, rather than dumping exactly the last 400 loss values, it now dumps
400 + 10% of the loss buffer.  This way, the size of the dump is proportional to
the steps-without-progress threshold.  This is better because when the user sets
the steps-without-progress threshold to something larger, they probably need to
look at more loss values to determine that training should stop, so dumping
more in that case ought to be better.
2017-11-22 13:06:19 -05:00
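A minimal sketch of the dump-size rule described in the commit above. The 400 and 10% figures come from the commit message; the function and parameter names are illustrative, not dlib's actual members:

```cpp
#include <cstddef>

// Illustrative only: how many of the most recent loss values to dump when the
// learning rate changes. The loss buffer grows with the steps-without-progress
// threshold, so the dump size scales with that threshold too.
std::size_t loss_dump_amount(std::size_t loss_buffer_size)
{
    const std::size_t base = 400;          // fixed minimum from the commit message
    return base + loss_buffer_size / 10;   // plus 10% of the loss buffer
}
```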
hannometer 2f531f1175 imglab: select next/previous image with 's' and 'w' (#964)
* imglab: select next/previous image with 's' and 'w'

* imglab: make 'w' and 's' keys behave like UP and DOWN keys; add 'about' text
2017-11-22 05:42:15 -05:00
Davis King 2b8becae97 Added global_function_search unit test 2017-11-20 21:36:04 -05:00
Davis King c91a047327 more cleanup 2017-11-20 21:13:49 -05:00
Davis King 1eee5ccdc0 Cleanup 2017-11-20 19:55:07 -05:00
Davis King f924b44831 updated docs 2017-11-20 19:37:50 -05:00
Davis King af99cde18a Cleaned up the compile_time_integer_list and make_compile_time_integer_range
types and put them into their own file.
2017-11-20 19:31:20 -05:00
Davis King 4d5c77cacb merged 2017-11-20 18:48:45 -05:00
Davis King 29b5b286d7 Changed python build script to append rather than overwrite CMAKE_PREFIX_PATH. 2017-11-20 18:48:14 -05:00
Kino 3d62b85c50 generic image all the way - colormaps.h (#971)
Still hunting down pre-generic_image implementations
2017-11-20 07:52:24 -05:00
Davis E. King 9b51351bbf Update ISSUE_TEMPLATE.md 2017-11-20 05:40:24 -05:00
Davis King a930fe8096 merged 2017-11-19 13:15:15 -05:00
Juha Reunanen c0b7bf9e6c Problem: log loss may become infinite if g[idx] goes to zero (#938)
* Problem: log loss may become infinite if g[idx] goes to zero
Solution: limit the input of the log function to 1e-6 (or more)

* Parameterize the safe_log epsilon limit, and make the default value 1e-10
2017-11-19 10:53:50 -05:00
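A minimal sketch of the clamped log described above. The 1e-10 default comes from the commit message; the function shown here is an illustration rather than dlib's exact safe_log implementation:

```cpp
#include <algorithm>
#include <cmath>

// Illustrative clamped log: keeps the loss finite even when the input
// underflows to zero, by never passing anything smaller than epsilon to log().
inline double safe_log(double x, double epsilon = 1e-10)
{
    return std::log(std::max(x, epsilon));
}
```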
Davis King 5d259cd7c1 merged 2017-11-18 21:42:12 -05:00
Davis King 532e2a3e61 Made the global_function_search object use the faster incremental
upper_bound_function::add().  Also fixed some issues in the solver.
2017-11-18 21:40:44 -05:00
Davis E. King 24c0628532 Update ISSUE_TEMPLATE.md 2017-11-18 20:29:29 -05:00
Davis E. King 02e989d8f7 Update ISSUE_TEMPLATE.md 2017-11-18 20:28:10 -05:00
Davis King f9f69185af Testing to see if github updates 2017-11-18 20:27:27 -05:00
Davis E. King a6f9bd620b Update ISSUE_TEMPLATE.md 2017-11-18 20:25:08 -05:00
Davis E. King c8c138c11c Update ISSUE_TEMPLATE.md 2017-11-18 20:22:38 -05:00
Davis E. King b93070826b Update ISSUE_TEMPLATE.md 2017-11-18 20:22:02 -05:00
Davis E. King c4a172fb75 Create ISSUE_TEMPLATE.md 2017-11-18 20:21:18 -05:00
Davis King 5529ddfb82 Added a .add() to upper_bound_function so that the upper bound can be quickly
updated without needing to re-solve the whole QP.
2017-11-17 21:42:29 -05:00
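A rough usage sketch of the incremental update, not taken verbatim from dlib's documentation. The include path, constructor arguments, and function_evaluation interface are assumptions based on upper_bound_function_abstract.h; treat this as illustrative:

```cpp
#include <dlib/global_optimization.h>
#include <iostream>
#include <vector>

int main()
{
    using namespace dlib;

    // A toy objective sampled on a small grid.
    auto f = [](const matrix<double,0,1>& x) { return -sum(squared(x)); };

    std::vector<function_evaluation> samples;
    for (double v = -2; v <= 2; v += 0.5)
    {
        matrix<double,0,1> x = {v};
        samples.emplace_back(x, f(x));
    }

    // Build the upper bound model from the initial samples.
    upper_bound_function ub(samples);

    // The new incremental update: fold in one more evaluation with add()
    // instead of re-solving the whole QP from scratch.
    matrix<double,0,1> xnew = {0.25};
    ub.add(function_evaluation(xnew, f(xnew)));

    matrix<double,0,1> query = {0.1};
    std::cout << "upper bound on f at query point: " << ub(query) << std::endl;
}
```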
Davis King 7e39a52758 merged 2017-11-17 20:56:51 -05:00
Davis King f1fe908a9d Added loss_dot layer 2017-11-17 19:07:41 -05:00
Davis King a02208018b Upgraded the con_ layer so that you can set nr or nc to 0 in the layer
specification, which means "make the filter cover that entire input image
dimension".  It's an easy way to size a filter so that it produces exactly
one output along that dimension.
2017-11-17 16:55:24 -05:00
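A sketch of the kind of layer specification this enables. The surrounding network is made up for illustration, but the con<> template parameters (num_filters, nr, nc, stride_y, stride_x) follow dlib's DNN layer spec:

```cpp
#include <dlib/dnn.h>

using namespace dlib;

// Illustrative network, not from a dlib example: the outer con layer sets
// nr to 0, meaning "make the filter rows cover the entire input height",
// so it produces exactly one output along that dimension.
using net_type = loss_multiclass_log<
                 fc<10,
                 con<32,0,3,1,1,            // nr == 0: filter spans the whole input height
                 relu<con<16,5,5,2,2,
                 input<matrix<unsigned char>>
                 >>>>>;

int main()
{
    net_type net;   // just checks that the layer specification compiles
}
```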
Davis King 692ddb8c18 merged 2017-11-17 11:46:13 -05:00
Davis King e7d713cfee Added softmax_all layer. 2017-11-17 11:40:44 -05:00
Kino 364e97c159 generic_image all the way (#921)
* generic_image all the way

Tried to hunt down and correct the functions that were using a
non-generic_image approach to dlib’s generic images.

* generic image fix fix

Had to change a couple of const_image_view instances to non-const versions so that array access is possible in the rest of the code.

* same

same

* back to sanity
2017-11-17 06:41:55 -05:00
Amin Cheloh 1798e8877c Update dnn_mmod_find_cars2_ex.cpp (#966) 2017-11-17 06:38:48 -05:00
Davis King 11145541ea Work around Visual Studio's lack of C++11 support. 2017-11-16 21:45:43 -05:00
Davis King f093d2cc5a Made normalization code more robust and a bit cleaner. 2017-11-15 17:20:36 -05:00
Davis King 9f6ad63b06 Removed unneeded assert 2017-11-15 09:24:50 -05:00
Davis King b84e2123d1 Changed network filename to something more descriptive. 2017-11-15 07:10:50 -05:00
Davis King 36392bb2c3 Minor tweaks to spec 2017-11-15 07:06:33 -05:00
Davis King 483e6ab4cc merged 2017-11-15 07:04:16 -05:00
Juha Reunanen e48125c2a2 Add semantic segmentation example (#943)
* Add example of semantic segmentation using the PASCAL VOC2012 dataset

* Add note about Debug Information Format when using MSVC

* Make the upsampling layers residual as well

* Fix declaration order

* Use a wider net

* trainer.set_iterations_without_progress_threshold(5000); // (was 20000)

* Add residual_up

* Process entire directories of images (just easier to use)

* Simplify network structure so that builds finish even on Visual Studio (faster, or at all)

* Remove the training example from CMakeLists, because it's too much for the 32-bit MSVC++ compiler to handle

* Remove the probably-now-unnecessary set_dnn_prefer_smallest_algorithms call

* Review fix: remove the batch normalization layer from right before the loss

* Review fix: point out that only the Visual C++ compiler has problems.
Also expand the instructions on how to run MSBuild.exe to circumvent the problems.

* Review fix: use dlib::match_endings

* Review fix: use dlib::join_rows. Also add some comments, and instructions on where to download the pre-trained net.

* Review fix: make formatting comply with dlib style conventions.

* Review fix: output training parameters.

* Review fix: remove #ifndef __INTELLISENSE__

* Review fix: use std::string instead of char*

* Review fix: update interpolation_abstract.h to say that extract_image_chips can now take the interpolation method as a parameter

* Fix whitespace formatting

* Add more comments

* Fix finding image files for inference

* Resize inference test output to the size of the input; add clarifying remarks

* Resize net output even in calculate_accuracy

* After all, crop the net output instead of resizing it by interpolation

* For clarity, add an empty line in the console output
2017-11-15 07:01:52 -05:00
Sebastian Höffner 1e809b18df Adding missing implementation of tabbed_display::selected_tab (#957) 2017-11-14 15:24:39 -05:00
Davis King 391c11ed3b merged 2017-11-14 06:17:28 -05:00
Sean Warren acda88a5e3 Determine lapack fortran linking convention in CMake (#934)
* Determine lapack fortran linking convention in CMake

Looks for a LAPACK function with and without a trailing underscore. This allows use of CLAPACK on Windows, where functions are decorated but fortran_id.h otherwise assumes they are not.

* Use enable_preprocessor_switch for LAPACK decoration detection

* Add lapack decoration defines to config.h.in

* Use correct variable for lapack_libraries
2017-11-14 05:57:00 -05:00
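A sketch of what the decoration switch means at the C++ level. The LAPACK_DECORATED macro name is hypothetical (the real define added to config.h.in may differ); it simply stands in for the result of the CMake check:

```cpp
// Hypothetical illustration of Fortran name decoration. Depending on how the
// LAPACK library was built, the symbol for dgesv is exported either with or
// without a trailing underscore; the CMake check picks the right spelling.
#ifdef LAPACK_DECORATED                      // hypothetical macro, not dlib's actual define
#define EXAMPLE_LAPACK_NAME(name) name##_    // decorated: dgesv_
#else
#define EXAMPLE_LAPACK_NAME(name) name       // undecorated: dgesv
#endif

// Declaration-only sketch of binding to the chosen symbol spelling.
extern "C" void EXAMPLE_LAPACK_NAME(dgesv)(
    int* n, int* nrhs, double* a, int* lda,
    int* ipiv, double* b, int* ldb, int* info);
```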
Davis King 9e290f65ea merged 2017-11-13 22:42:37 -05:00
Davis King c6171cbf26 Added tools for doing global optimization. The main new tools here are
find_global_maximum() and global_function_search.
2017-11-13 22:41:07 -05:00
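A rough usage sketch of the optimizer named above. The call style shown here (lower bounds, upper bounds, and a max_function_calls budget, returning an evaluation with .x and .y) is an assumption modeled on dlib's documented global optimization interface, so check the abstract file for the real signature:

```cpp
#include <dlib/global_optimization.h>
#include <iostream>

int main()
{
    using namespace dlib;

    // Maximize a toy 2D function over a box.
    auto f = [](double x, double y)
    {
        return -(x - 0.3)*(x - 0.3) - (y + 0.2)*(y + 0.2);
    };

    // Assumed call style: per-dimension bounds plus an evaluation budget.
    auto result = find_global_maximum(f, {-1, -1}, {1, 1}, max_function_calls(50));

    std::cout << "best x = " << trans(result.x)
              << "best f(x) = " << result.y << std::endl;
}
```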
Davis King c5c3518ac0 Made bigint use explicit relational operator functions rather than the overly
general templates in dlib::relational_operators.  I did this because the
templates in dlib::relational_operators sometimes cause clashes with other
code in irritating ways.
2017-11-13 22:38:38 -05:00
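A minimal illustration (not dlib's actual code; all names here are made up) of why the change above matters: a namespace-wide relational operator template can match far more types than intended, whereas an explicit operator only applies to the class it was written for:

```cpp
// Illustration only. An overly general template like this one matches any
// pair of types that happen to support operator<, which is how clashes with
// unrelated code arise once the namespace is pulled in.
namespace relational_operators_demo
{
    template <typename T, typename U>
    bool operator>(const T& a, const U& b) { return b < a; }
}

// An explicit operator is scoped to the one type it was written for.
struct bigint_demo
{
    long value = 0;
    friend bool operator<(const bigint_demo& a, const bigint_demo& b) { return a.value < b.value; }
    friend bool operator>(const bigint_demo& a, const bigint_demo& b) { return b < a; }
};

int main()
{
    bigint_demo a{1}, b{2};
    return (a < b && b > a) ? 0 : 1;
}
```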
Davis King 02fbcede93 Added missing #include 2017-11-13 05:56:57 -05:00
Davis King 61b6c1ff78 Added solve_trust_region_subproblem_bounded() 2017-11-12 15:16:25 -05:00
Davis King 1e90fc6dbd Added upper_bound_function object. 2017-11-12 14:18:34 -05:00
Davis King 1bfc31dec9 Reduce parallelism in build to avoid Travis running out of RAM. 2017-11-12 07:14:48 -05:00
Davis King 6acebf5ec4 Fixed deserialize() 2017-11-11 08:42:16 -05:00