Commit Graph

101 Commits

Author SHA1 Message Date
Brandon Amos 1267e1abb3 Fix the data type in image.load
Thanks Daniil Belkov!

`byte` was defaulting to `"float"`
2016-10-07 14:27:21 -04:00
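As a rough illustration of the fix above (not the commit itself): in the Torch `image` package, the optional third argument to `image.load` selects the returned tensor type, and omitting it yields floats scaled to [0, 1]. The file path below is a placeholder.

```lua
local image = require 'image'

-- Without an explicit type, image.load returns a FloatTensor in [0, 1].
local imgFloat = image.load('face.png', 3)
print(imgFloat:type())   --> torch.FloatTensor

-- Passing 'byte' keeps the raw 0-255 pixel values in a ByteTensor.
local imgByte = image.load('face.png', 3, 'byte')
print(imgByte:type())    --> torch.ByteTensor
```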
melgor 4fad635aa4 Fix testing 2016-07-13 19:28:11 +02:00
Brandon Amos 2582f43797 dataset.lua: Comment out unused torch.linspace. 2016-07-12 12:06:12 -04:00
Brandon Amos 6534788a41
Use torch.range instead of torch.linspace because of rounding issues.
For #160
2016-07-12 11:45:15 -04:00
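A hedged sketch of the rounding issue referenced above: `torch.linspace` derives each entry from the interval endpoints, so values meant to be integer indices can come out slightly off for some ranges, while `torch.range` steps by an explicit increment and stays exact.

```lua
require 'torch'

-- linspace computes each entry as a fraction of the interval, so an entry
-- intended to be an integer index can land at e.g. 4.9999999 for some ranges.
local idxLinspace = torch.linspace(1, 1000, 1000)

-- range adds an explicit step of 1, so every entry is an exact integer
-- and is safe to use as a tensor index.
local idxRange = torch.range(1, 1000)
```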
melgor 4c0e55dfd5 Rebase TripletEmbedding to get a speedup in the forward pass 2016-07-12 08:02:53 +02:00
melgor 0995e86b7e Added Multi-GPU support #106 2016-07-04 22:23:37 +02:00
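The patch for #106 is not reproduced here; as a rough sketch, multi-GPU data parallelism in Torch is commonly done with `nn.DataParallelTable` from the cunn package, which splits each mini-batch across devices. The helper name and GPU count below are placeholders, not the code from the commit.

```lua
require 'cunn'

-- Hypothetical helper: replicate the model across nGPU devices and split
-- each mini-batch along dimension 1.
local function makeDataParallel(model, nGPU)
   if nGPU <= 1 then return model:cuda() end
   local dpt = nn.DataParallelTable(1)
   dpt:add(model:cuda(), torch.range(1, nGPU):totable())
   return dpt
end
```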
Brandon Amos e489b7a7b8 Training: Correct space after device param when testing. 2016-06-19 23:42:15 -04:00
Brandon Amos b82f6c31d6 Training: Pass device option to batch-represent. 2016-06-19 21:44:00 -04:00
Brandon Amos 0574a72641 Initial resnet model definition. 2016-06-19 17:13:21 -04:00
Brandon Amos 2c6ddfba2d Training: Model: Use opt.imgDim. 2016-06-16 20:12:21 -04:00
Brandon Amos 1fb8abc247 Training: Improve -data description. 2016-06-14 15:56:37 -04:00
Brandon Amos 92684ab771 Training: Add cuda device option. 2016-06-14 15:55:46 -04:00
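A minimal sketch of what a device option typically does in a Torch training script; the option table below is a stand-in for the parsed command line.

```lua
require 'cutorch'

local opt = { device = 1 }      -- stand-in for the parsed device option

-- Select which GPU subsequent :cuda() calls and allocations land on.
cutorch.setDevice(opt.device)
print('Using GPU ' .. cutorch.getDevice())
```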
Brandon Amos a1b3251f96 Actually fix nans from #127. Error if they appear. 2016-06-14 13:51:18 -04:00
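A hedged illustration of the "error if they appear" part: NaN is the only value that compares unequal to itself, which gives a cheap check for both plain numbers and tensors. The helper below is hypothetical, not the actual fix for #127.

```lua
-- Hypothetical check, not the code from #127.
local function assertNoNaN(loss, gradParams)
   if loss ~= loss then
      error('loss is NaN')
   end
   -- x:ne(x) marks elements that differ from themselves, i.e. NaNs.
   assert(gradParams:ne(gradParams):sum() == 0, 'NaN in gradients')
end
```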
Brandon Amos c04386c48c Training: Only convert to cuda in cuda mode. 2016-06-13 14:12:36 -04:00
Brandon Amos f4567e882e Fixes for travis. 2016-06-13 14:06:12 -04:00
Brandon Amos fea2fb41d8 Resolve nan issues for #127 (hopefully) 2016-06-13 13:50:38 -04:00
Brandon Amos aaab8fe119 Training: Update plot-loss to new directory structure. 2016-06-13 13:50:38 -04:00
Brandon Amos c6f0962b6d Training: Add -save option. Default to a timestamp instead of numbers. 2016-06-13 13:50:38 -04:00
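A small sketch of the behaviour described above, assuming the option is empty when unset; the `work/` prefix is a placeholder.

```lua
local paths = require 'paths'

local opt = { save = '' }   -- stand-in for the parsed -save option

-- Fall back to a timestamped work directory instead of a run number.
local saveDir = (opt.save ~= '') and opt.save
   or paths.concat('work', os.date('%Y-%m-%d_%H-%M-%S'))
paths.mkdir(saveDir)
```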
Brandon Amos bced2bb445 Disable cudnn by default for #127. Add warning. 2016-06-07 18:20:46 -04:00
Brandon Amos 21f1bcc5c4 Temporarily remove optimize_net for #127. 2016-06-07 18:14:13 -04:00
Brandon Amos 130edcecc9 Training: Use ADAM instead of adadelta. 2016-06-03 17:48:38 -04:00
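Since `optim.adam` and `optim.adadelta` share the same calling convention, the switch is localized to the optimizer call and its state table. A self-contained toy example; the model, data, and learning rate are placeholders, not the project's settings.

```lua
require 'nn'
local optim = require 'optim'

local model = nn.Linear(10, 1)
local criterion = nn.MSECriterion()
local params, gradParams = model:getParameters()
local x, y = torch.randn(8, 10), torch.randn(8, 1)

local optimState = { learningRate = 1e-3 }   -- assumed hyperparameter

-- feval returns the loss and the gradient w.r.t. the flattened parameters.
local function feval()
   gradParams:zero()
   local out = model:forward(x)
   local loss = criterion:forward(out, y)
   model:backward(x, criterion:backward(out, y))
   return loss, gradParams
end

-- Previously: optim.adadelta(feval, params, optimState)
optim.adam(feval, params, optimState)
```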
Brandon Amos b4df1f9644 Revert dataset.lua for #132. 2016-05-08 12:11:59 -04:00
Brandon Amos 0707db5718 Fix maxPathLength assertion/increment.
Thanks for the correction @aaronnech, I read your earlier suggestion too fast :-)

For #132
2016-05-07 20:43:53 -04:00
Brandon Amos 85d7375fd9 Remove unused variables/functions to fix luacheck warnings.
\cc @aaronnech

For #132
2016-05-07 20:33:52 -04:00
Brandon Amos b11910b1da Fix assert bound.
\cc @aaronnech

For #132
2016-05-07 20:30:17 -04:00
Brandon Amos 1d5491a1e8 Add @aaronnech's dataset.lua improvements for #132. 2016-05-07 20:25:27 -04:00
Brandon Amos eba58604c9 Bump matplotlib dependency for 'style'.
https://groups.google.com/forum/#openface/vJFioAQaJo8
2016-04-06 10:44:51 -04:00
Brandon Amos 66efcb706b Model: Use opt.imgDim for retraining
\cc @melgor

Ideally I should have made imgDim part of the model.
This works for now, but users who are retraining will need to be
careful about this size.
2016-04-05 18:02:45 -04:00
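A rough sketch of the caveat in the message above: the input side length comes from a command-line option rather than from the model, so anyone retraining has to keep the two consistent by hand. The path and option table are placeholders; 96 is the dimension the stock OpenFace models expect.

```lua
local image = require 'image'

local opt = { imgDim = 96 }   -- must match what the model was built for

local img = image.load('face.png', 3, 'float')         -- placeholder path
local input = image.scale(img, opt.imgDim, opt.imgDim)
-- `input` is what gets fed to the network; a mismatched size either
-- errors inside the model or degrades results.
```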
Brandon Amos 1b3b48f588 Load util before model for optimizeNet
\cc @melgor
2016-04-03 02:48:50 -04:00
melgor 505c63f828 Added check if 'optnet' package is installed 2016-04-01 13:04:25 +02:00
melgor 46798c97cc Change variable to local 2016-03-31 12:00:17 +02:00
melgor bf6a466484 Added optnet for reducing memory consumption 2016-03-30 23:21:13 +02:00
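Together with the "Added check" commit above, the usage these changes describe looks roughly like the sketch below: guard on the package being installed, then hand optnet a sample input so it can plan buffer reuse. The stand-in model and input size are assumptions.

```lua
require 'nn'

local model = nn.Sequential():add(nn.SpatialConvolution(3, 16, 3, 3))  -- stand-in model
local imgDim = 96                                                      -- assumed input size

-- pcall(require, ...) succeeds only if the package is installed.
local ok, optnet = pcall(require, 'optnet')
if ok then
   local sampleInput = torch.zeros(1, 3, imgDim, imgDim)
   optnet.optimizeMemory(model, sampleInput, {inplace = true, mode = 'training'})
else
   print('optnet not installed; skipping memory optimization')
end
```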
Brandon Amos 6c43f6ba38 Training: Pass imgDim to batchRepresent. Add lfwDir option.
For #112.
2016-03-19 15:44:47 -04:00
Brandon Amos b30d6d9461 Training Plotting code: Add minor gridlines. 2016-03-18 14:10:53 -04:00
melgor bba911c802 Replace sanitize by clearState() 2016-03-16 18:20:11 +01:00
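`clearState()` is the stock nn replacement for the older hand-rolled sanitize helpers: it drops the intermediate `output`/`gradInput` buffers so saved checkpoints stay small. A minimal sketch; the model and checkpoint path are placeholders.

```lua
require 'nn'

local model = nn.Sequential():add(nn.Linear(128, 128)):add(nn.ReLU())
model:forward(torch.randn(256, 128))   -- fills intermediate buffers

-- Drop output/gradInput buffers before serializing the checkpoint.
model:clearState()
torch.save('model.t7', model)          -- placeholder path
```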
Brandon Amos e7c574829d Merge pull request #108 from melgor/cudnn_convert
Replace nn_to_cudnn by cudnn.convert #106
2016-03-16 11:54:20 -04:00
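`cudnn.convert` swaps nn modules for their cudnn counterparts in place, which is what the hand-rolled `nn_to_cudnn` helper previously did. A minimal sketch with a stand-in model, assuming cunn and cudnn are installed:

```lua
require 'cunn'
require 'cudnn'

local model = nn.Sequential()
   :add(nn.SpatialConvolution(3, 16, 3, 3))
   :add(nn.ReLU())

-- Replace nn modules with cudnn equivalents, then move to the GPU.
cudnn.convert(model, cudnn)
model:cuda()
```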
melgor 7d21caf004 Merge branch 'cudnn_convert' of https://github.com/melgor/openface into cudnn_convert
Conflicts:
	training/model.lua
	training/train.lua
2016-03-16 16:15:05 +01:00
melgor f63acb637e Replace nn_to_cudnn by cudnn.convert #106 2016-03-16 15:38:06 +01:00
Brandon Amos 6fd4ac8604 Training: Sample from all of a person's images.
This fixes a bug in the training code that caused
only the first (same) `nSamplesPerClass` images in each
class to be sampled every time rather than sampling
from all of the person's images.
2016-03-15 16:27:09 -04:00
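A hedged illustration of the fix (the counts are made up): take `nSamplesPerClass` entries from a fresh random permutation of all of the person's images, rather than always reusing the first few.

```lua
require 'torch'

local nImages, nSamplesPerClass = 40, 15   -- placeholder counts

-- Buggy behaviour: the same leading images every epoch.
local firstOnly = torch.range(1, nSamplesPerClass)

-- Fixed behaviour: a random subset drawn from all of the person's images.
local sampled = torch.randperm(nImages):narrow(1, 1, nSamplesPerClass)
```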
melgor e2d7513ff5 Replace nn_to_cudnn by cudnn.convert #106 2016-03-11 11:09:23 +01:00
Brandon Amos 71b6bbf765 Fix flake warning. 2016-03-08 20:39:03 -05:00
Brandon Amos 665cb4c9d7 Training plot-loss.py: Improve triplet loss plot bounds. 2016-03-08 18:36:58 -05:00
Brandon Amos 04d23481fb Fix typo. 2016-03-06 21:15:10 -05:00
Brandon Amos 4616f61ff8 DNN Training: Halve default epochSize. 2016-03-06 20:03:54 -05:00
Brandon Amos f17a36c2fd plot-loss: Import sys. 2016-03-06 19:56:12 -05:00
Brandon Amos f112398178 DNN Training: Plot LFW accuracies for #100. 2016-03-06 19:46:33 -05:00
Brandon Amos bbff0a01dd Fix tests for training the DNN. 2016-03-06 19:45:37 -05:00
Brandon Amos f5766412d2 NN Training: Add LFW validation for #100. 2016-03-04 18:30:39 -05:00
Brandon Amos e4c4aef3e4 Training: Zero pad experiment work directory numbers. 2016-03-04 18:09:55 -05:00
Brandon Amos 540b7d03c6 Fix luacheck warning. 2016-02-28 15:08:50 -05:00