Commit Graph

2 Commits

Author SHA1 Message Date
Juha Reunanen bd6994cc66 Add new loss layer for binary loss per pixel (#1976)
* Add new loss layer for binary loss per pixel
2020-01-20 07:47:47 -05:00
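
The #1976 layer targets tasks where every pixel gets a two-class decision. The sketch below is a hedged illustration only: it assumes the layer is exposed as loss_binary_log_per_pixel, that its training labels are matrix<float> with one value per pixel whose sign selects the class (as with loss_binary_log), and the tiny backbone and trainer settings are invented for the example rather than taken from the pull request.

```cpp
#include <dlib/dnn.h>

using namespace dlib;

// Illustrative only: one 1x1 filter on top of a small feature extractor gives
// a single score per pixel, which is what a per-pixel binary loss consumes.
using net_type = loss_binary_log_per_pixel<
                     con<1,1,1,1,1,
                     relu<bn_con<con<16,3,3,1,1,
                     input<matrix<rgb_pixel>>>>>>>;

int main()
{
    net_type net;
    dnn_trainer<net_type> trainer(net);
    trainer.set_learning_rate(0.1);
    trainer.set_mini_batch_size(32);

    std::vector<matrix<rgb_pixel>> images;  // training images
    std::vector<matrix<float>>     labels;  // one float per pixel, e.g. +1 foreground, -1 background
    // ... fill images and labels (each label matrix matching its image in size) ...

    trainer.train(images, labels);
}
```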
Juha Reunanen d175c35074 Instance segmentation (#1918)
* Add instance segmentation example - first version of training code

* Add MMOD options; get rid of the cache approach, and instead load all MMOD rects upfront

* Improve console output

* Set filter count

* Minor tweaking

* Inference - first version, at least compiles!

* Ignore overlapped boxes

* Also ignore small instances

* Set overlaps_ignore

* Add TODO remarks

* Revert "Set overlaps_ignore"

This reverts commit 65adeff1f8.

* Set result size

* Set label image size

* Take ignore-color into account

* Fix the cropping rect's aspect ratio; also slightly expand the rect

* Draw the largest findings last

* Improve masking of the current instance

* Add some perturbation to the inputs

* Simplify ground-truth reading; fix random cropping

* Also read the class labels

* Tweak default minibatch size

* Learn only one class

* Really train only instances of the selected class

* Remove outdated TODO remark

* Automatically skip images with no detections

* Print to console what was found

* Fix class index problem

* Fix indentation

* Allow choosing multiple classes

* Draw rect in the color of the corresponding class

* Write detector window classes to ostream; also group detection windows by class (when ostreaming)

* Train a separate instance segmentation network for each class label

* Use separate synchronization file for each seg net of each class

* Allow more overlap

* Fix sorting criterion

* Fix interpolating the predicted mask

* Improve bilinear interpolation: if the output type is an integer, round instead of truncating (sketched below this commit list)

* Add helpful comments

* Ignore large aspect ratios; refactor the code; tweak some network parameters

* Simplify the segmentation network structure; make the object detection network more complex in turn

* Problem: CUDA errors not reported properly to console
Solution: stop and join data loader threads even in case of exceptions (sketched below this commit list)

* Minor parameters tweaking

* Loss may have increased, even if prob_loss_increasing_thresh > prob_loss_increasing_thresh_max_value

* Add previous_loss_values_dump_amount to previous_loss_values.size() when deciding if loss has been increasing

* Improve behaviour when loss actually increased after disk sync

* Revert some of the earlier change

* Disregard dumped loss values only when deciding if learning rate should be shrunk, but *not* when deciding if loss has been going up since last disk sync

* Revert "Revert some of the earlier change"

This reverts commit 6c852124ef.

* Keep enough previous loss values, until the disk sync

* Fix maintaining the dumped (now "effectively disregarded") loss values count

* Detect cats instead of aeroplanes

* Add helpful logging

* Clarify the intention and the code

* Review fixes

* Add operator== for the other pixel types as well; remove the inline

* If available, use constexpr if

* Revert "If available, use constexpr if"

This reverts commit 503d4dd335.

* Simplify code as per review comments

* Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh

* Clarify console output

* Revert "Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh"

This reverts commit 9191ebc776.

* To keep the changes to a bare minimum, revert the steps_since_last_learning_rate_shrink change after all (at least for now)

* Also empty out some of the previous test loss values

* Minor review fixes

* Can't use C++14 features here

* Do not use the struct name as a variable name
2019-11-14 22:53:16 -05:00
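
Two of the #1918 items above describe behaviour that is easier to see in code. First, the "Problem: CUDA errors not reported properly to console / Solution: stop and join data loader threads even in case of exceptions" item. The sketch below is a hedged illustration of that pattern using dlib::pipe; training_sample and the omitted loader/training bodies are hypothetical stand-ins, not the example's actual code.

```cpp
#include <dlib/pipe.h>
#include <iostream>
#include <thread>
#include <vector>

struct training_sample { /* image + per-pixel labels (hypothetical) */ };

int main()
{
    dlib::pipe<training_sample> data(200);   // queue feeding the trainer
    std::vector<std::thread> loaders;        // threads that enqueue into `data`
    // ... spawn loader threads here (omitted) ...

    const auto stop_loaders = [&]()
    {
        data.disable();                      // unblocks loaders waiting in enqueue()
        for (auto& t : loaders)
            t.join();
    };

    try
    {
        // ... training loop pulling from `data` (omitted); a CUDA failure
        //     would surface here as an exception ...
        stop_loaders();
    }
    catch (const std::exception& e)
    {
        stop_loaders();                      // join even on failure so the process
                                             // shuts down cleanly...
        std::cout << e.what() << std::endl;  // ...and the error actually reaches the console
        return 1;
    }
    return 0;
}
```

Second, the bilinear-interpolation item: when the destination pixel type is integral, the interpolated value is rounded rather than truncated by the cast. A minimal, generic version of that rule, written without constexpr if (which one of the commits above added and then reverted), might look like this; the helper name is made up:

```cpp
#include <cmath>
#include <type_traits>

// Hypothetical helper: convert an interpolated value to the output pixel type,
// rounding for integral types instead of letting static_cast truncate.
template <typename T>
T to_output_pixel(double interpolated)
{
    if (std::is_integral<T>::value)
        return static_cast<T>(std::lround(interpolated));
    return static_cast<T>(interpolated);
}
```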