Davis King
c3fa856c9f
Fixed grammar
2017-08-22 22:02:19 -04:00
Davis King
ebb0f85fac
Fixed incorrect size() for simd8i.
2017-08-21 21:32:36 -04:00
Davis King
d92728b334
Made the Windows testing .bat file use rmdir rather than rm, since rm isn't always available on Windows.
2017-08-21 21:27:38 -04:00
Davis King
f566b05a2e
Fixed linker errors when building Python on Windows. This fixes a bug that was introduced in a recent PR. Also fixed compiler errors that occurred in Visual Studio.
2017-08-21 21:27:12 -04:00
Davis King
05c8391cb2
Fixed warning in Visual Studio.
2017-08-21 19:35:42 -04:00
Davis King
1db46949ee
Made the test loss in the verbose output messages from the dnn_trainer not jump in variance when the learning rate resets.
2017-08-20 20:47:00 -04:00
Davis King
d009916e76
Added serialization support for the running_stats_decayed object.
2017-08-20 20:39:44 -04:00
Davis King
dc45871a31
Made the loss value management a little more conservative.
2017-08-20 20:08:03 -04:00
Davis King
dc071ceae1
Made the input_tensor_to_output_tensor() and output_tensor_to_input_tensor() coordinate mappings work on networks that contain skip layers.
2017-08-20 19:42:12 -04:00
Davis King
620178db05
Changed the default get_test_iterations_without_progress_threshold() from 200 to 500. Now that we have better history management of loss values in the trainer it's much more sensible to have a larger value here.
2017-08-20 19:30:11 -04:00
Davis King
dd62b0e2ff
Made the dnn_trainer not forget all the previous loss values it knows about when it determines that there have been a lot of steps without progress and shrinks the learning rate. Instead, it removes only the oldest 100. The problem with the old way of removing all the loss values in the history was that if you set the steps-without-progress threshold to a really high number you would often observe that the last few learning rate values were obviously not making progress; however, since all the previous loss values were forgotten, the trainer needed to fully populate its loss history from scratch before it would figure this out. This new style makes the trainer not waste time running this excessive optimization of obviously useless mini-batches.
2017-08-20 19:28:08 -04:00
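The trimming behavior described in the commit above can be sketched as follows. This is an illustrative sketch only, not dlib's actual internals: the function name, the std::deque container, and the drop count parameter are assumptions for the example (the commit only states that the oldest 100 values are removed).

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Hypothetical sketch of the change: when the trainer decides to shrink the
// learning rate, drop only the oldest `drop` loss values rather than clearing
// the whole history, so evidence about recent lack of progress is retained.
void trim_loss_history(std::deque<double>& losses, std::size_t drop = 100)
{
    // Remove at most `drop` of the oldest entries; if the history is shorter
    // than `drop`, the whole history is removed (matching a full reset).
    const std::size_t n = losses.size() < drop ? losses.size() : drop;
    losses.erase(losses.begin(), losses.begin() + n);
}
```

With a history of 250 values, a trim leaves the newest 150, so the trainer can detect a stalled learning rate without repopulating the history from scratch.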
Davis King
618f1084d2
The input_rgb_image_sized is supposed to be convertible to input_rgb_image, which it was in all ways except that you couldn't deserialize it directly as you would expect. This has now been fixed.
2017-08-20 07:14:40 -04:00
Davis King
ba430be591
Made DLIB_ASSERT statements not abort the Python interpreter, but instead trigger an exception.
2017-08-19 08:48:48 -04:00
Davis King
33513abdeb
Suppress compiler warning
2017-08-19 08:42:17 -04:00
Davis King
3156a5f440
merged
2017-08-18 21:24:49 -04:00
Adam Geitgey
b6d2329c5e
Added a Python wrapper for using the mmod face detector (#753)
2017-08-18 16:30:33 -04:00
Davis King
2241e2c210
merged
2017-08-18 05:31:14 -04:00
Davis King
98cf44e8e3
Changed the random_cropper so that it samples background patches uniformly across scales regardless of the input image size. Previously, if you gave really large images or really small images, it had a bias towards giving only large patches or small patches, respectively.
2017-08-18 05:30:11 -04:00
Davis King
0863386849
merged
2017-08-15 15:52:16 -04:00
Davis King
48a56f36ab
Fixed spelling error
2017-08-15 15:52:00 -04:00
Davis King
96619c859f
merged
2017-08-15 09:32:26 -04:00
Davis King
4541a1b90a
Fixed grammar
2017-08-14 21:36:18 -04:00
Davis King
d6a1e273c0
Updated solvers to correctly pull in cont_'s bias parameter multipliers.
2017-08-14 14:04:16 -04:00
Davis King
9043c7415e
Added extract_ layer
2017-08-14 13:50:47 -04:00
Davis King
525cfc71af
Added more tests for copy_tensor()
2017-08-14 12:48:37 -04:00
Davis King
7078cfaff5
Added an "add_to" option to tt::copy_tensor(). There was also a bug in the concat layer's backward() method. It was assigning the gradient to previous layers instead of adding the gradient, as required by the layer interface specification. This change also noticeably speeds up concat layers since only one CUDA kernel launch now happens per concat operation, rather than one kernel launch for each sample in a tensor.
2017-08-14 12:28:26 -04:00
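The assign-versus-add distinction in the commit above can be sketched as follows. This is an illustrative sketch, not dlib's real tt::copy_tensor() implementation: the function name, the std::vector buffers, and the flag name are assumptions made for the example.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of why backward() must accumulate: several source
// layers of a concat write gradients into the same destination buffer, so
// each copy must add into the buffer when the add_to flag is set.
void copy_gradient(std::vector<float>& dest,
                   const std::vector<float>& src,
                   bool add_to)
{
    for (std::size_t i = 0; i < dest.size(); ++i)
    {
        if (add_to)
            dest[i] += src[i];  // accumulate, as the layer interface requires
        else
            dest[i] = src[i];   // plain assignment (the old, buggy behavior)
    }
}
```

With assignment, the last concat input to run would silently overwrite the gradients contributed by the others; accumulation preserves all of them.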
Davis King
89c9267e46
Made copy_tensor() use cudaMemcpyAsync() rather than cudaMemcpy().
2017-08-14 09:52:53 -04:00
Davis King
aafa411672
Added mult_prev layer.
2017-08-11 17:47:19 -04:00
Davis King
f7310f4bbc
Added multiply_zero_padded()
2017-08-11 16:39:00 -04:00
Davis King
46a02d9447
Made SWIG always run when you rebuild, to avoid stale SWIG outputs.
2017-08-10 17:05:35 -04:00
Davis King
4ab360e439
Added DLIB_NO_ABORT_ON_2ND_FATAL_ERROR for dlib::fatal_error as a generic switch for use in plugin environments.
2017-08-10 16:07:42 -04:00
Davis King
2630029cb4
Fixed missing java:: qualifiers.
2017-08-10 16:07:12 -04:00
Davis King
e99d47cc7c
Removed exit call on load library failure.
2017-08-10 15:31:02 -04:00
Davis King
44a62b19d0
A bit of path cleanup
2017-08-10 15:16:21 -04:00
Davis King
8f3249a438
merged
2017-08-10 14:38:59 -04:00
Davis King
ae13ad800e
Added more options for controlling the install folder paths.
2017-08-10 14:38:41 -04:00
Davis King
af88b0d56f
merged
2017-08-09 21:42:29 -04:00
Davis King
d9f93548dd
Fixed a bug in the warning message about NMS overlap where it would sometimes give a false alarm.
2017-08-09 21:42:14 -04:00
Davis King
d62c5a7c8e
merged
2017-08-09 12:20:06 -04:00
Davis King
cf24f02507
Added an object that lets you hold a copyable reference to a Java array. Also renamed the objects and generally improved documentation.
2017-08-09 12:19:41 -04:00
Davis King
fa5c666b6e
Added an overload of mat() that takes a row stride value.
2017-08-08 15:10:17 -04:00
Davis King
594df9a66d
Relaxed a test to avoid false alarms.
2017-08-08 15:00:38 -04:00
Davis King
420eba0e6a
Added note about logging training parameters.
2017-08-06 11:34:07 -04:00
Davis King
0f2753b754
Changed how we print the network hash.
2017-08-06 08:54:10 -04:00
Davis King
7b26b2d1ff
Made dnn_trainer print the network size when logged to an iostream.
2017-08-05 15:24:47 -04:00
Davis King
8b21c89efd
Improved how the relaxed mmod overlap settings are determined.
2017-08-04 22:46:46 -04:00
Davis King
ed683785ce
Added get_synchronization_file() and get_test_one_step_calls() to dnn_trainer. Also added an operator<< for dnn_trainer that prints the parameters it's using. These changes also break backwards compatibility with the previous serialization format for dnn_trainer objects.
2017-08-03 15:55:57 -04:00
Davis King
9540ca23ec
Added operator<< for the DNN solvers.
2017-08-03 15:47:08 -04:00
Davis King
aafde20607
Added operator<< for random_cropper.
2017-08-03 15:40:51 -04:00
Davis King
a7d1dc474b
Filled out the options for loss_mmod's operator<<.
2017-08-03 15:28:26 -04:00