Commit Graph

5441 Commits

Author SHA1 Message Date
Davis King 504dff2d63 removing cruft 2016-01-24 07:12:32 -05:00
Davis King cc5a62cd0b Made affine_transform() routines a little faster. 2016-01-24 07:10:43 -05:00
Davis King 919cbd1103 Added a multiply_ layer and set it up so you can use it instead of dropout_ after training has finished. 2016-01-24 07:03:06 -05:00
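The dropout_-to-multiply_ substitution rests on a simple identity: dropout's expected output is a constant multiple of its input, so after training the random mask can be replaced by a fixed scale. A minimal sketch of the general idea (standalone Python, not dlib's code; dlib's exact scaling convention may differ):

```python
import random

# Training-time dropout with rate p zeroes each activation with
# probability p, so the expected output is (1 - p) * x.  After training,
# the random mask can therefore be replaced by a deterministic
# elementwise multiply by (1 - p).
def dropout(xs, p, rng):
    """Zero each value with probability p (training-time behaviour)."""
    return [0.0 if rng.random() < p else x for x in xs]

def multiply(xs, scale):
    """Fixed elementwise multiply (inference-time replacement)."""
    return [scale * x for x in xs]

rng = random.Random(0)
xs = [1.0] * 100_000
p = 0.5
# Averaged over many activations, dropout's output matches the multiply.
mean_dropout = sum(dropout(xs, p, rng)) / len(xs)
mean_multiply = sum(multiply(xs, 1 - p)) / len(xs)
```

The deterministic replacement also removes the per-element random number generation, which is why it is only swapped in once training has finished.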
Davis King 565bed38f7 Made it so you can deserialize bn_ objects into affine_ objects. 2016-01-23 20:06:11 -05:00
Davis King c781316343 merged 2016-01-23 19:52:48 -05:00
Davis King a9e1c9e457 Made add_loss_layer's batch operator() more general. 2016-01-23 19:48:47 -05:00
Davis King 27274c17d9 updated docs 2016-01-23 19:46:31 -05:00
Davis King 3ca91aae81 Added unserialize. 2016-01-23 19:42:48 -05:00
Davis King 846a570479 Added an overload of operator() that lets you easily run a network on an entire std::vector of objects. 2016-01-23 12:06:51 -05:00
Davis King 93ab80c758 Made the affine_ layer support being constructed from bn_ layers. Also added unit tests for the routines supporting this feature. 2016-01-23 11:29:21 -05:00
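The bn_-to-affine_ conversion works because a trained batch-norm layer is, at inference time, just an affine map of its input. A sketch of the algebra (standalone Python, not dlib's implementation; parameter names are illustrative):

```python
import math

def fold_bn_to_affine(gamma, beta, running_mean, running_var, eps=1e-5):
    """Collapse a trained batch norm into an equivalent affine transform.

    bn(x)     = gamma * (x - mean) / sqrt(var + eps) + beta
    affine(x) = a * x + b   with the coefficients returned here.
    """
    a = gamma / math.sqrt(running_var + eps)
    b = beta - a * running_mean
    return a, b

# The folded affine layer reproduces batch norm's inference-time output.
a, b = fold_bn_to_affine(gamma=2.0, beta=0.5, running_mean=1.0, running_var=4.0)
x = 3.0
bn_out = 2.0 * (x - 1.0) / math.sqrt(4.0 + 1e-5) + 0.5
affine_out = a * x + b
```

Folding avoids recomputing the normalization on every forward pass, which is presumably why being able to deserialize bn_ objects into affine_ objects (the commit below) is useful for deployment.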
Davis King 669a1e171d Added affine_transform_conv() and multiply_conv() as well as a CPU implementation of assign_conv_bias_gradient(). 2016-01-23 10:40:15 -05:00
Davis King e44b2aa266 Added missing requirements check. 2016-01-23 10:37:21 -05:00
Davis King 21e3e81fa0 Changed dot() so it doesn't call cublasSdot anymore since cublasSdot gives the wrong outputs sometimes. 2016-01-23 09:40:04 -05:00
Davis King 316099a93f Made the search for the installed matlab a little more robust. 2016-01-22 12:25:47 -05:00
Davis King 24687a9eaa Added grid_stride_range_y cuda tool 2016-01-22 09:16:20 -05:00
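grid_stride_range (and the new _y variant above) follow CUDA's grid-stride loop pattern, where a fixed-size grid of threads covers an arbitrarily large index range. The indexing idea can be sketched on the CPU (hypothetical Python stand-in, not the CUDA device-side tool itself):

```python
def grid_stride_range(thread_idx, num_threads, begin, end):
    """Grid-stride indexing: thread `thread_idx` out of `num_threads`
    visits every num_threads-th index in [begin, end), so a fixed number
    of threads covers any number of items with no index left out."""
    return range(begin + thread_idx, end, num_threads)

# Every index in [0, 10) is visited exactly once across 4 "threads".
covered = sorted(i for t in range(4) for i in grid_stride_range(t, 4, 0, 10))
```

A _y variant of the same idea would stride along the second grid dimension, which is convenient for kernels iterating over 2-D data.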
Davis E. King e0995a6740 Merge pull request #71 from paulinus/handle-no-encoding
Decode message only if encoding is known
2016-01-20 06:15:44 -05:00
Pau Gargallo 05e2471555 Decode message only if encoding is known
When python does not know the encoding of stdout, sys.stdout.encoding
is None.  Then calling decode(None) raises an exception.  We just
skip decoding when the encoding is unknown.
2016-01-20 12:06:17 +01:00
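The fix described in this commit amounts to guarding the decode call; a sketch (hypothetical helper, not the actual setup.py diff):

```python
def decode_if_possible(raw, encoding):
    """Decode bytes only when the encoding is known.

    sys.stdout.encoding can be None when Python cannot determine the
    console encoding (e.g. when output is redirected on some platforms);
    calling raw.decode(None) raises an exception, so in that case the
    bytes are passed through undecoded instead.
    """
    if encoding is None:
        return raw
    return raw.decode(encoding)
```

This keeps the common case (a known encoding) unchanged while making the unknown-encoding case a no-op rather than a crash.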
Davis King 32dd3f2fe3 Fixed typo in comment 2016-01-17 16:20:50 -05:00
Davis King c8c55a8961 updated release notes 2016-01-17 12:50:21 -05:00
Davis King 373bc75f45 Added cpp11 tag to the docs and also updated them to include the new running_gradient object. 2016-01-17 12:45:00 -05:00
Davis King 732ddefdd2 Removed link to dlib/all/source.cpp since it seems to confuse some users. 2016-01-17 12:17:58 -05:00
Davis King eee0d295c3 Improved error message you get about linking to libjpeg and libpng if you try to open a jpeg or png file. 2016-01-17 12:06:53 -05:00
Davis King da6e48071c Added some preprocessor stuff to check if the user is #including dlib/all/source.cpp into their own .cpp files. If so they will get a useful error message. 2016-01-17 11:54:31 -05:00
Davis King 12d9d257f2 Put guards around some GCC specific #pragma statements to avoid warnings in visual studio. 2016-01-16 09:01:22 -05:00
Davis E. King e1ff23fdb5 Merge pull request #68 from yukoba/patch-1
sys.stdout.encoding instead of latin-1 in setup.py
2016-01-14 18:56:27 -05:00
Davis E. King 80e6443d83 Merge pull request #69 from severin-lemaignan/auto-ptr-guards
Add pragma guards around deprecated auto_ptr to prevent GCC warnings
2016-01-14 18:47:23 -05:00
Séverin Lemaignan d873810ee4 Add pragma guards around deprecated auto_ptr to prevent GCC warnings
Fixes #67
2016-01-14 13:18:15 +00:00
Yu Kobayashi d35104ed3c sys.stdout.encoding instead of latin-1 in setup.py
Please use sys.stdout.encoding instead of latin-1 in setup.py.
This is necessary for non English OS.
2016-01-14 11:07:18 +09:00
Davis King 55748d93c9 Made train_one_step() print stuff in verbose mode. 2016-01-11 20:38:04 -05:00
Davis King 6bd5c2e395 Made cmake use the built in find X11 scripts by default on OS X. 2016-01-10 18:31:49 -05:00
Davis King 4b2178c6e6 Made trainer disk synchronization more reliable and efficient. 2016-01-09 11:57:04 -05:00
Davis King 08f965a32b Clarified spec 2016-01-09 11:56:37 -05:00
Davis King 6841222120 Improved the dnn_trainer. In particular, it no longer makes a copy of the
network (which would needlessly double VRAM usage).  I also added a
set_synchronization_file() method so you can tell it to automatically
synchronize itself to disk every so often during training.  This makes resuming
an interrupted training session trivially easy.
2016-01-09 11:50:12 -05:00
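The synchronization scheme this commit describes (periodic snapshots to disk, automatic resume) can be sketched as follows. All names here are hypothetical; dlib's dnn_trainer does this in C++ via set_synchronization_file():

```python
import os
import pickle
import time

class CheckpointingTrainer:
    """Sketch of a trainer that periodically snapshots its state to disk
    so an interrupted run can be resumed where it left off."""

    def __init__(self, sync_file=None, interval_seconds=0.0):
        self.state = {"step": 0}
        self.sync_file = sync_file
        self.interval = interval_seconds
        self._last_sync = time.monotonic()
        # Resume from an earlier snapshot if one exists.
        if sync_file and os.path.exists(sync_file):
            with open(sync_file, "rb") as f:
                self.state = pickle.load(f)

    def train_one_step(self):
        self.state["step"] += 1  # stand-in for a real optimization step
        if self.sync_file and time.monotonic() - self._last_sync >= self.interval:
            # Write to a temp file, then atomically rename, so a crash
            # mid-write never corrupts the previous snapshot.
            tmp = self.sync_file + ".tmp"
            with open(tmp, "wb") as f:
                pickle.dump(self.state, f)
            os.replace(tmp, self.sync_file)
            self._last_sync = time.monotonic()
```

Constructing a new trainer pointed at the same file picks up the last snapshot, which is what makes resuming an interrupted training session trivial.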
Davis King 4189386ddb Increased the default sgd learning rate. 2016-01-09 09:39:07 -05:00
Davis King 9f92b082a3 Now training will automatically reduce the learning rate when it is clear that
the loss isn't being reduced.  Also, there is a stopping condition now based on
how large the current learning rate is.  That is, training stops when the learning
rate gets small enough and it is clear that no progress is being made.
2016-01-09 09:37:00 -05:00
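The policy described above can be sketched as a simple loop (hypothetical simplified stand-in: a patience counter replaces dlib's actual statistical test for whether the loss is still falling):

```python
def train_with_lr_schedule(losses, initial_lr=0.1, shrink=0.1,
                           min_lr=1e-5, patience=3):
    """Shrink the learning rate whenever the loss has not improved for
    `patience` consecutive steps, and stop training once the learning
    rate falls below `min_lr`.  Returns (final_lr, steps_taken)."""
    lr = initial_lr
    best = float("inf")
    since_improvement = 0
    steps = 0
    for loss in losses:
        if lr < min_lr:
            break              # stopping condition: learning rate too small
        steps += 1
        if loss < best:
            best = loss
            since_improvement = 0
        else:
            since_improvement += 1
        if since_improvement >= patience:
            lr *= shrink       # no recent progress: reduce the learning rate
            since_improvement = 0
    return lr, steps
```

On a flat loss curve the learning rate decays geometrically until it crosses the threshold and training stops, matching the behavior the commit message describes.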
Davis King 6f63bc6279 saving comments 2016-01-09 08:16:33 -05:00
Davis King 537da11f38 merged 2016-01-08 18:25:13 -05:00
Davis King 63734971a0 Fixed compile time error I just introduced 2016-01-08 18:23:58 -05:00
Davis King f47620c11f merged 2016-01-08 18:16:34 -05:00
Davis King ab2cd12915 Made running_gradient serializable. 2016-01-08 18:15:15 -05:00
Davis King f4f8e4db72 merged 2016-01-08 07:48:41 -05:00
Davis King 5435c56d8c merged 2016-01-07 18:25:45 -05:00
Davis King d73f58ae1c Added running_gradient 2016-01-07 18:24:49 -05:00
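dlib's running_gradient estimates the slope of a noisy stream of values, which is what lets the trainer decide whether the loss is still going down. A simplified sketch of the idea (ordinary least squares over the sample index; the real object additionally tracks the slope's uncertainty):

```python
class RunningGradient:
    """Running least-squares slope estimate over a stream of values.
    Needs at least two samples before gradient() is meaningful."""

    def __init__(self):
        self.n = 0
        self.sum_x = 0.0
        self.sum_y = 0.0
        self.sum_xx = 0.0
        self.sum_xy = 0.0

    def add(self, y):
        x = float(self.n)      # the sample index is the x coordinate
        self.n += 1
        self.sum_x += x
        self.sum_y += y
        self.sum_xx += x * x
        self.sum_xy += x * y

    def gradient(self):
        # Ordinary least-squares slope: cov(x, y) / var(x).
        n = self.n
        denom = n * self.sum_xx - self.sum_x ** 2
        return (n * self.sum_xy - self.sum_x * self.sum_y) / denom
```

Because only running sums are stored, the estimate updates in O(1) per sample regardless of how long the stream gets.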
Davis King 368d6d19ca Added CPU version of pooling layer code. 2016-01-04 17:58:00 -05:00
Davis King 2639a5233e Improved outputs from test_layer(). 2016-01-04 17:55:59 -05:00
Davis King fb49f0ceab Fixed a bug where the trainer didn't initialize the solvers unless you explicitly gave it a solver. 2016-01-03 12:03:00 -05:00
Davis King cbdeb1608f Made add() faster by calling my own version for the simple pointwise add case. 2016-01-03 11:44:54 -05:00
Davis King 30005b7ee3 Wrapped new dot() function into the tt namespace and gave it a CPU version. 2016-01-03 11:21:40 -05:00
Davis King d248a22571 Added the launch_kernel() function that launches a kernel by smartly picking
the number of threads and blocks rather than using the hard coded numbers I had
in there.  This makes some functions noticeably faster.

Also added a dot() function that is fully asynchronous.
2016-01-03 11:20:49 -05:00
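The launch_kernel() change above derives the launch configuration from the problem size instead of hard coding it. The heuristic can be sketched like this (illustrative constants, not dlib's actual numbers):

```python
def pick_launch_config(num_items, max_blocks=1024, threads_per_block=512):
    """Pick a (blocks, threads_per_block) pair for a kernel launch:
    enough blocks to give every item a thread, capped at max_blocks so
    very large inputs fall back to grid-stride loops inside the kernel.
    Returns (0, threads_per_block) for an empty problem."""
    if num_items == 0:
        return 0, threads_per_block
    blocks = (num_items + threads_per_block - 1) // threads_per_block
    return min(blocks, max_blocks), threads_per_block
```

Sizing the grid this way keeps small launches from wasting idle blocks and large launches from exceeding sensible grid sizes, which is consistent with the speedup the commit message reports.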
Davis King 6a64180200 Minor cleanup 2016-01-02 17:16:42 -05:00