Davis King
c48a6af814
Added a way to get the final gradient with respect to the inputs. Also added a
method to more efficiently give the input gradient in some instances.
2015-12-30 20:39:12 -05:00
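A rough usage sketch; back_propagate_error() and get_final_data_gradient() are the names these facilities carry in later dlib releases and are assumptions here:

    // Run a forward pass, push a gradient back through the network, then
    // read off the gradient of the loss with respect to the inputs.
    resizable_tensor x, grad_wrt_output;
    // ... fill x, run forward, compute grad_wrt_output ...
    net.forward(x);
    net.back_propagate_error(x, grad_wrt_output);                  // assumed name
    const tensor& grad_wrt_input = net.get_final_data_gradient();  // assumed name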
Davis King
3597df5eb6
Made add_layer hold subnetworks through a pointer so that most of a
network is allocated on the heap rather than resulting in really large
stack usage for large networks.
2015-12-30 20:32:26 -05:00
2015-12-30 20:32:26 -05:00
Davis King
72b250bb16
Clarified spec
2015-12-30 20:30:32 -05:00
Davis King
667b60dbb1
Added the add_prev_ layer
2015-12-24 11:30:16 -05:00
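A minimal sketch of what add_prev_ enables, assuming the tag1/add_prev1 aliases found in later dlib releases; add_prev1 adds the output of the layer marked by tag1 to its own subnetwork's output, elementwise:

    // A residual-style fully connected block: the input of the block
    // (marked by tag1) is added back onto the transformed signal.
    template <typename SUBNET>
    using residual_fc = add_prev1<fc<50, relu<fc<50, tag1<SUBNET>>>>>;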
Davis King
fb2fa0f7ca
Added another add() function for adding tensors. This one lets you add
tensors with different sizes and will zero-pad them as needed.
2015-12-24 10:44:37 -05:00
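Conceptually, the zero padding behaves like this 1-D sketch (not dlib's actual implementation):

    #include <algorithm>
    #include <vector>

    // Adds two vectors of possibly different lengths; the shorter one is
    // treated as if it were zero-padded out to the longer length.
    std::vector<float> padded_add(const std::vector<float>& a,
                                  const std::vector<float>& b)
    {
        std::vector<float> out(std::max(a.size(), b.size()), 0.0f);
        for (size_t i = 0; i < a.size(); ++i) out[i] += a[i];
        for (size_t i = 0; i < b.size(); ++i) out[i] += b[i];
        return out;
    }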
Davis King
ca77640492
Added pack_idx() and unpack_idx().
2015-12-24 10:40:53 -05:00
Davis King
b66c5254ba
Made the tuple-based layer constructors work with nested tuples so you can
define combination layers made out of other combination layers without being
hassled by the compiler.
2015-12-24 09:23:22 -05:00
Davis King
d2516bc2f7
Just renamed two functions to way better names.
2015-12-23 22:29:31 -05:00
Davis King
1f5aa6c1fa
Added an option to fc_ to enable or disable a bias term.
2015-12-23 22:25:17 -05:00
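In later dlib releases this choice is exposed through the fc/fc_no_bias aliases; assuming that interface, usage looks like:

    // fc keeps the learned bias term; fc_no_bias drops it, which is handy
    // when a following normalization layer would absorb the bias anyway.
    template <typename SUBNET> using dense    = fc<128, SUBNET>;
    template <typename SUBNET> using dense_nb = fc_no_bias<128, SUBNET>;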
Davis King
8837698043
Added an avg_pool_ layer. Also fixed some errors in the layer specs.
2015-12-23 21:44:21 -05:00
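Assuming the pooling template interface later dlib releases use (window rows, window cols, row stride, column stride), a sketch:

    // 2x2 average pooling with stride 2, halving each spatial dimension.
    template <typename SUBNET>
    using downsample = avg_pool<2,2,2,2, SUBNET>;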
Davis King
5875fa75ca
Change to suppress compiler warning.
2015-12-23 21:31:35 -05:00
Davis King
c627898eee
Fixed the tag and skip layers so they compile now that we have the
in-place/out-of-place logic present.
2015-12-23 20:58:31 -05:00
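A sketch of how the two combine, assuming the tag1/skip1 aliases: tag1 marks an intermediate output and skip1 routes a later branch back to that point.

    // fc<10,...> receives the output of the tag1 layer directly,
    // bypassing ("skipping") the fc<20,...> layer in between.
    using branched = fc<10, skip1<fc<20, tag1<fc<30, input<matrix<float>>>>>>>;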
Davis King
7bb7f8a288
Clarified spec
2015-12-23 20:18:04 -05:00
Davis King
18695b7b4b
Made the default input layer automatically normalize unsigned char pixel values
to the range [0,1].
2015-12-23 08:23:46 -05:00
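Conceptually, each unsigned char pixel is mapped to a float like this (a sketch of the semantics; whether the divisor is 255 or 256 is an implementation detail):

    // Map [0,255] into [0,1] so inputs arrive at a training-friendly scale.
    inline float normalize_pixel(unsigned char p)
    {
        return p / 255.0f;
    }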
Davis King
09564840a1
Reverted the CMake file back to its proper state. Oops.
2015-12-23 08:05:08 -05:00
Davis King
28475b8d0d
Made computed_output an optional argument to backward_inplace() so there is
symmetry with the non-inplace version. This also enables additional
optimizations in the resulting network.
2015-12-23 08:03:31 -05:00
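Assuming a layer-interface signature along these lines (a sketch; the exact spec may differ), the point is that a layer which never reads computed_output lets the network avoid storing that tensor at all:

    // In-place backward pass: computed_output is what forward_inplace()
    // produced. Layers that can compute their gradients without it may
    // omit it, enabling the memory optimization mentioned above.
    void backward_inplace(
        const tensor& computed_output,
        const tensor& gradient_input,
        tensor& data_grad,
        tensor& params_grad
    );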
Davis King
122f2fa6b5
Upgraded to cuDNN v4.0. This means changing the max_pool binding a little
since it works slightly differently in cuDNN v4. I also removed my CUDA code
for doing batch normalization and replaced it with cuDNN's new batch
normalization methods.
Finally, I forgot to add a convolutional option to the bn_ object. Now it has
one so you can set the mode however you like, either BATCH_NORM_FC or
BATCH_NORM_CONV.
2015-12-22 22:01:00 -05:00
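A sketch of selecting the mode, assuming it is passed when constructing the bn_ object as the message describes:

    bn_ fc_bn(BATCH_NORM_FC);     // normalize each activation over the batch
    bn_ conv_bn(BATCH_NORM_CONV); // normalize each channel over batch, rows, and columns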
Davis King
ea2947d0e7
merged
2015-12-16 23:12:14 -05:00
Davis King
15fb25277c
Gave the batch normalization layer an automatic testing mode that causes
it to use the saved average mean and invstd to scale the data instead of
normalizing the batch.
2015-12-16 23:11:03 -05:00
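Conceptually, testing mode swaps the per-batch statistics for the saved running ones (a sketch of the arithmetic, not dlib's code):

    // Training: y = gamma*(x - batch_mean)*batch_invstd + beta
    // Testing:  y = gamma*(x - saved_mean)*saved_invstd + beta
    inline float bn_testing_mode(float x, float saved_mean, float saved_invstd,
                                 float gamma, float beta)
    {
        return gamma*(x - saved_mean)*saved_invstd + beta;
    }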
Davis King
59ae6a6061
Added specs for bn, affine, and max_pool layers.
2015-12-15 19:43:09 -05:00
Davis King
2e01920fc2
Changed the type used to represent labels so it's more consistent
with other parts of the library.
2015-12-14 20:12:06 -05:00
Davis King
8dbb42e6e6
Added spec for loss_multiclass_log_ and fixed some typos.
2015-12-13 12:46:18 -05:00
Davis King
351a6331e9
Added loss_multiclass_log_
2015-12-13 12:21:54 -05:00
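A minimal classifier sketch using the new loss in dlib's usual network-composition style (layer sizes here are arbitrary):

    // The loss layer treats the final fc outputs as unnormalized log class
    // probabilities; the labels are simply the class indices.
    using classifier = loss_multiclass_log<
                           fc<10, relu<fc<64, input<matrix<float>>>>>>;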
Davis King
045266261f
Fixed errant const.
2015-12-13 12:21:25 -05:00
Davis King
b7e127f212
Fixed old tests. Also added more max pooling tests.
2015-12-12 12:59:29 -05:00
Davis King
18d0f0f4d3
Added test for max pool layer.
2015-12-12 12:53:43 -05:00
Davis King
df7d7f0347
Added max_pool_ layer.
2015-12-12 12:53:27 -05:00
Davis King
7ae43ae2d5
Fixed some resource leaks. Also fixed max_pool so it does exactly what the
spec says it should.
2015-12-12 12:52:32 -05:00
Davis King
cbd57be677
Made test_layers() a little more robust.
2015-12-12 12:51:29 -05:00
Davis King
9065f08c35
removed cruft
2015-12-12 12:05:09 -05:00
Davis King
a4c5983a82
Improved softmax tests
2015-12-12 12:01:47 -05:00
Davis King
0a832e42d0
Fixed bug in softmax gradient computation.
2015-12-12 12:01:27 -05:00
Davis King
4aa0e3bec3
updated tests
2015-12-12 10:33:27 -05:00
Davis King
ee6e54b4e1
Figured out the *undocumented* requirements for calling cuDNN's
cudnnAddTensor() function and updated the specs and asserts accordingly.
2015-12-12 10:33:13 -05:00
Davis King
0babe27aac
Fixed a race condition that could happen if set_size() was called while a CUDA
kernel was still running.
2015-12-12 10:31:50 -05:00
Davis King
0f840db31e
saving tests for add()
2015-12-08 22:53:51 -05:00
Davis King
3559f78528
Improved cudnn error messages.
2015-12-08 22:53:13 -05:00
Davis King
d179693475
Made test_layer() more robust.
2015-12-08 22:53:02 -05:00
Davis King
2ba29f6537
Updated multiply()'s CUDA implementation to reflect its new features. Also added a
CUDA version of add_bias_gradient().
2015-12-08 22:25:00 -05:00
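Conceptually, a bias gradient just sums the incoming gradient over every element that shares a given bias (a CPU sketch of the semantics, not dlib's code):

    #include <vector>

    // grad holds num_samples blocks of bias_size elements each; every bias
    // element accumulates the gradient contributed by every sample.
    std::vector<float> bias_gradient(const std::vector<float>& grad,
                                     size_t num_samples, size_t bias_size)
    {
        std::vector<float> bg(bias_size, 0.0f);
        for (size_t s = 0; s < num_samples; ++s)
            for (size_t i = 0; i < bias_size; ++i)
                bg[i] += grad[s*bias_size + i];
        return bg;
    }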
Davis King
2f34414e49
Added affine_ layer.
2015-12-08 21:32:48 -05:00
Davis King
8062663c78
Added cpu version of add() and also added new add_bias_gradient() function.
2015-12-08 21:32:32 -05:00
Davis King
20d46fc550
Added missing assert check
2015-12-08 21:31:44 -05:00
Davis King
a29086bf7b
Made multiply() more flexible and also fixed a bug in the CPU implementation of
batch_normalize_conv.
2015-12-08 18:49:12 -05:00
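For reference, the convolutional variant computes its statistics per channel, pooling over samples and spatial positions (a sketch of the mean computation only):

    // For channel k of an N x K x R x C tensor, the mean pools over all
    // N samples and all R*C spatial positions belonging to that channel.
    float channel_mean(const float* t, long N, long K, long R, long C, long k)
    {
        double sum = 0;
        for (long n = 0; n < N; ++n)
            for (long r = 0; r < R; ++r)
                for (long c = 0; c < C; ++c)
                    sum += t[((n*K + k)*R + r)*C + c];
        return static_cast<float>(sum / (N*R*C));
    }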
Davis King
29f56b12c4
Made the affine_transform functions consistent.
2015-12-08 17:44:48 -05:00
Davis King
7678643079
Added the dropout layer
2015-12-08 17:18:34 -05:00
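Conceptually, dropout zeroes each activation with probability p during training and rescales the survivors so their expected value is unchanged (a generic inverted-dropout sketch; dlib's implementation details may differ):

    #include <random>
    #include <vector>

    // Assumes 0 <= p < 1. Each element survives with probability 1-p and
    // is scaled by 1/(1-p) so the expectation matches the no-dropout case.
    void dropout_forward(std::vector<float>& x, float p, std::mt19937& rng)
    {
        std::bernoulli_distribution drop(p);
        const float scale = 1.0f / (1.0f - p);
        for (auto& v : x)
            v = drop(rng) ? 0.0f : v*scale;
    }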
Davis King
3222a3af6b
Clarified spec
2015-12-08 08:25:29 -05:00
Davis King
5f8e41a889
Added another version of multiply()
2015-12-08 08:25:11 -05:00
Davis King
79344339df
Added bn layer tests.
2015-12-07 21:43:39 -05:00
Davis King
7550681b3c
Implemented the bn layer.
2015-12-07 21:43:24 -05:00
Davis King
363b6b2f6d
Increased default mini-batch size to 32.
2015-12-07 21:42:45 -05:00