1. Separate FCs and triplet losses for HPM and PartNet
2. Remove FC-equivalent 1x1 conv layers in HPM
3. Support adjustable learning rate schedulers
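Item 3's adjustable schedulers could be wired up as in the minimal sketch below, assuming a PyTorch setup; the `scheduler_config` keys, the placeholder model, and the dummy training step are illustrative assumptions rather than this repository's actual code.

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(256, 128)                              # placeholder, stands in for HPM/PartNet
optimizer = optim.Adam(model.parameters(), lr=1e-4)

scheduler_config = {'step_size': 500, 'gamma': 0.9}      # exposed as tunable configuration
scheduler = optim.lr_scheduler.StepLR(optimizer, **scheduler_config)

for step in range(1000):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 256)).pow(2).mean()      # dummy objective
    loss.backward()
    optimizer.step()
    scheduler.step()                                     # advance the learning rate schedule
```

Keeping the scheduler arguments in a plain mapping like `scheduler_config` is one simple way to make the schedule adjustable from a configuration file.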
[1] A. Hermans, L. Beyer, and B. Leibe, “In defense of the triplet loss for person re-identification,” arXiv preprint arXiv:1703.07737, 2017.
1. Change ReLU to Leaky ReLU in decoder
2. Add 8-scale-pyramid in HPM
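A rough sketch of what an 8-scale horizontal pyramid might look like, assuming the common HPM scheme of per-strip max plus average pooling over power-of-two scales; the scale list and tensor shapes are assumptions, not values taken from this repository.

```python
import torch

def horizontal_pyramid_pool(x, scales=(1, 2, 4, 8, 16, 32, 64, 128)):
    """Pool (N, C, H, W) features into (N, C, sum(scales)) part features."""
    n, c, h, w = x.size()
    parts = []
    for s in scales:
        strip = x.view(n, c, s, h // s, w)                    # split H into s horizontal strips
        pooled = strip.max(-1)[0].max(-1)[0] + strip.mean(dim=(-1, -2))
        parts.append(pooled)                                  # (N, C, s) per scale
    return torch.cat(parts, dim=-1)

features = horizontal_pyramid_pool(torch.randn(4, 256, 128, 88))  # (4, 256, 255)
```

Note that the feature map height has to be divisible by the largest scale, which is why the dummy input above uses a height of 128.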
[1] C. Fan et al., “GaitPart: Temporal Part-Based Model for Gait Recognition,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 14225–14233.
1. Decode features outside of auto-encoder
2. Turn off HPM 1x1 conv by default
3. Change canonical feature map size from `feature_channels * 8 x 4 x 2` to `feature_channels * 2 x 16 x 8`
4. Use mean of canonical embeddings instead of mean of static features
5. Calculate static and dynamic loss separately
6. Calculate mean of parts in triplet loss instead of sum of parts (see the sketch after this list)
7. Add switch to log disentangled images
8. Change default configuration
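Item 6 amounts to averaging the per-part triplet losses rather than summing them. The sketch below illustrates this under an assumed batch-hard triplet formulation and an assumed `(parts, batch, dim)` embedding layout; it is not the repository's exact loss code.

```python
import torch
import torch.nn.functional as F

def parts_triplet_loss(embeddings, labels, margin=0.2):
    """embeddings: (P, N, D) per-part embeddings; labels: (N,) identity labels."""
    dist = torch.cdist(embeddings, embeddings)                     # (P, N, N) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)              # (N, N) positive-pair mask
    hardest_pos = dist.masked_fill(~same, 0).max(-1)[0]            # hardest positive per anchor
    hardest_neg = dist.masked_fill(same, float('inf')).min(-1)[0]  # hardest negative per anchor
    loss_per_part = F.relu(hardest_pos - hardest_neg + margin).mean(-1)  # (P,) per-part losses
    return loss_per_part.mean()                                    # mean over parts, not sum

loss = parts_triplet_loss(torch.randn(31, 32, 128), torch.randint(0, 4, (32,)))
```

Averaging over parts keeps the loss magnitude independent of the number of parts, so the margin and learning rate do not have to be retuned when the part count changes.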
1. Turn off autograd while decoding canonical and pose features (see the sketch after this list)
2. Change default batch size to (4, 8)
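Item 1 is essentially a `torch.no_grad()` guard around the decoding passes, as in the sketch below; the decoder module and feature shapes are placeholders for illustration. The `(4, 8)` batch size in item 2 presumably denotes a (subjects, samples-per-subject) sampler setting, though that reading is an assumption.

```python
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(128, 64 * 32), nn.LeakyReLU(inplace=True))  # placeholder decoder
canonical_feature = torch.randn(8, 128)
pose_feature = torch.randn(8, 128)

with torch.no_grad():                          # no autograd graph built while decoding
    canonical_image = decoder(canonical_feature)
    pose_image = decoder(pose_feature)
```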
1. Separate hyperparameter configs in model, optimizer and scheduler
2. Add more tunable hyperparameters in optimizer and scheduler
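One plausible shape for the split configuration is a nested mapping with separate `model`, `optimizer`, and `scheduler` sections, as sketched below; every key and value here is an illustrative assumption rather than the repository's actual defaults.

```python
import torch.nn as nn
import torch.optim as optim

hyperparameters = {
    'model': {'feature_channels': 64},
    'optimizer': {'lr': 1e-4, 'betas': (0.9, 0.999), 'weight_decay': 0.0},
    'scheduler': {'step_size': 500, 'gamma': 0.9},
}

model = nn.Linear(256, hyperparameters['model']['feature_channels'])        # placeholder model
optimizer = optim.Adam(model.parameters(), **hyperparameters['optimizer'])
scheduler = optim.lr_scheduler.StepLR(optimizer, **hyperparameters['scheduler'])
```

Keeping the three sections separate means new optimizer or scheduler hyperparameters can be exposed without touching the model configuration.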
1. Add `disable_acc` switch for disabling the accelerator. When it is off, the system automatically chooses an accelerator.
2. Enable multi-GPU training using `torch.nn.DataParallel`
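Taken together, the two items might look like the sketch below: the `disable_acc` flag forces the CPU, CUDA is picked automatically otherwise, and the model is wrapped in `torch.nn.DataParallel` when several GPUs are visible. The placeholder model and the surrounding control flow are assumptions for illustration.

```python
import torch
import torch.nn as nn

disable_acc = False                            # the switch described in item 1
if disable_acc:
    device = torch.device('cpu')               # accelerator explicitly disabled
else:
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(256, 128).to(device)         # placeholder model
if device.type == 'cuda' and torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)             # replicate across all visible GPUs
```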
Use `dir` instead of `path`