Davis King
ba59ddc6b5
Added subprocess_stream so that complex things can be isolated from MATLAB's shenanigans in a separate process.
2016-05-31 12:37:25 -04:00
Davis King
0d2bce15ff
Made the mex wrapper trap all std::exception derived exceptions rather than just dlib exceptions.
2016-05-31 12:27:59 -04:00
Davis King
623fba97fe
updated ignore list
2016-05-31 12:27:31 -04:00
Davis King
738b4d36af
Made imglab show the name of the current image in the title bar.
2016-05-31 06:45:02 -04:00
Davis King
6e0f13ba06
minor cleanup
2016-05-30 13:14:04 -04:00
Davis King
b4b9376aab
updated docs
2016-05-30 13:04:23 -04:00
Davis King
f698b85d68
clarified spec
2016-05-30 11:39:16 -04:00
Davis King
f1eae955ac
fixed typo
2016-05-30 09:24:19 -04:00
Davis King
20d10efc65
A little more cleanup in the spec
2016-05-30 09:17:46 -04:00
Davis King
abd0019df0
fixed typo
2016-05-30 08:54:02 -04:00
Davis King
771ca2e0f3
clarified spec
2016-05-30 08:50:49 -04:00
Davis King
53e9c15811
Clarified some parts of the example.
2016-05-30 08:50:28 -04:00
Davis E. King
8c550d4c85
Merge pull request #114 from e-fominov/dnn_group_layer
Concat layer
2016-05-30 08:16:53 -04:00
Davis King
cbd37d56a6
Cleaned up the contracts a little.
2016-05-30 07:35:25 -04:00
Davis E. King
7a31806baa
Merge pull request #125 from e-fominov/dnn_trainer_get_step
Added getter for trainer::train_one_step_calls
2016-05-30 07:31:17 -04:00
Fm
f06b265b34
Added getter for trainer::train_one_step_calls
2016-05-30 09:25:23 +03:00
Fm
01b3b08be6
Replaced sizeof... with variadic templates
2016-05-29 17:21:42 +03:00
Fm
1974e68d31
Removed friend declaration of dnn_tester from core.h
2016-05-27 14:49:11 +03:00
Fm
d32bcdfa3d
Changed concat syntax into concat1, concat2, ..., made dtest more readable.
2016-05-27 09:56:00 +03:00
Fm
2f7d3578d2
Added layer access and printing examples to inception sample
2016-05-26 19:40:10 +03:00
Evgeniy Fominov
290b1cb15b
Fixed dnn_tester in GPU mode for cpu_tensor test
2016-05-26 18:26:08 +03:00
Fm
a06e533271
fixed cuda::copy_tensor
2016-05-26 17:51:44 +03:00
Fm
1f0318e222
depth_group replaced with concat layer
2016-05-26 17:43:54 +03:00
Fm
93e786db6c
Merge branch 'master' of https://github.com/davisking/dlib into dnn_group_layer
2016-05-26 17:15:56 +03:00
Davis King
911638638d
Made add_prev output a tensor with dimensions that are the max of each of the dimensions of its inputs rather than always outputting a tensor that has the dimensions of its immediate predecessors.
2016-05-25 19:12:36 -04:00
Davis King
b9332698fe
updated example
2016-05-23 22:01:47 -04:00
Davis King
e5ad959085
Added bias learning rate and weight decay multipliers to bn_ layers
2016-05-23 22:01:37 -04:00
Davis King
b6b8379819
Relaxed the requirements for calling find_min_box_constrained() and find_max_box_constrained(). Now the bounds can be empty for some variables.
2016-05-23 20:25:43 -04:00
Davis King
974743767f
Changed code to avoid recreating thread_local cuda context objects.
2016-05-23 19:57:53 -04:00
Davis King
e55afabd1a
fixed broken tests
2016-05-23 06:54:55 -04:00
Davis King
1cbf940eb3
Fixed a bug I introduced a minute ago.
2016-05-22 16:30:09 -04:00
Davis King
f189612876
Fixed a bug in visit_layer_parameter_gradients() and visit_layer_parameters() caused by num_computational_layers being wrong when tag layers were placed as the first layer. These visit functions being wrong also caused multi-GPU support to not work on such networks.
2016-05-22 16:14:10 -04:00
Davis King
d019e9cd08
Changed the trainer threading code to use dlib::thread_pool instead of std::async() since std::async creates new threads with each invocation, which in turn causes objects with thread_local storage duration to be reconstructed each time. This is problematic because CUDA context objects for cublas and cudnn get reconstructed over and over, slowing things down and generally using more resources than should be used.
2016-05-22 15:49:40 -04:00
Davis King
5e70b7a2c6
Cleaned up code a little and made the example use a better version of the architecture.
2016-05-22 13:17:10 -04:00
Davis King
b73dacc163
Fixing tests
2016-05-22 10:30:15 -04:00
Davis King
7f77ec6559
Made the batch normalization epsilon user settable rather than being hard coded.
2016-05-22 10:26:23 -04:00
Davis King
b92b226c6a
Added learning rate and weight decay multipliers to the con_, fc_, and bn_ layers. Updated the solvers to support this.
2016-05-22 09:59:34 -04:00
Davis King
40f04bebae
Added more tests for the new affine_transform_range()
2016-05-21 23:23:06 -04:00
Davis King
c3a74c7c1c
Added affine_transform_range() and another overload of affine_transform()
2016-05-21 23:22:26 -04:00
Davis King
15b2d7b5d8
Added get_learning_rate_multiplier() and get_weight_decay_multiplier() global functions.
2016-05-21 23:16:49 -04:00
Davis King
58496f9f8a
Added Johannes Huber's natvis file for visual studio.
2016-05-20 08:29:39 -04:00
Davis King
0cd76f899b
Added an error message if a camera isn't available.
2016-05-18 22:22:56 -04:00
Fm
598924098a
depth layer: cuda concat/split moved to cpu/cuda files
2016-05-17 13:36:48 +03:00
Fm
28c4a48281
Grouping layer added
2016-05-17 13:07:04 +03:00
Davis King
617ffba652
Made LIB_INSTALL_DIR only appear when building dlib as an installable library, not when using dlib in another cmake project.
2016-05-15 19:56:55 -04:00
Davis King
b0cf7dc0f9
Now when you print a network to cout it will include the output tensor sizes for each layer if you have passed a tensor through the net.
2016-05-15 16:28:44 -04:00
Davis King
2092e3030b
Renamed compute_loss() to compute_loss_value_and_gradient() in the loss interface.
2016-05-15 15:07:04 -04:00
Davis King
ee2a0070db
Added comment to show how to deserialize a network.
2016-05-15 14:52:33 -04:00
Davis King
ba0f7c5c53
Added a function to dnn_trainer that lets you query the "steps without progress" estimate. I also renamed the get/set functions for the shrink amount to have a consistent name and use the word "factor" instead of "amount".
2016-05-15 14:48:06 -04:00
Davis King
b974a57513
Added set_learning_rate_schedule() to dnn_trainer.
2016-05-15 14:36:02 -04:00