# Conflicts:
# config.py
# models/hpm.py
# models/layers.py
# models/model.py
# models/part_net.py
# models/rgb_part_net.py
# test/part_net.py
# utils/configuration.py
# utils/triplet_loss.py
1. Separate FCs and triplet losses for HPM and PartNet
2. Remove FC-equivalent 1x1 conv layers in HPM
3. Support adjustable learning rate schedulers
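The adjustable learning rate scheduler mentioned above could be wired up as in the following sketch. The config shape and the helper name `make_scheduler` are assumptions for illustration, not names from this repository:

```python
import torch.nn as nn
import torch.optim as optim

def make_scheduler(optimizer, sched_config):
    """Look up a torch.optim.lr_scheduler class by name and build it
    with the keyword arguments given in the config (hypothetical format)."""
    cls = getattr(optim.lr_scheduler, sched_config['name'])
    return cls(optimizer, **sched_config['kwargs'])

model = nn.Linear(8, 4)  # stand-in for the real network
optimizer = optim.Adam(model.parameters(), lr=1e-3)
scheduler = make_scheduler(
    optimizer,
    {'name': 'StepLR', 'kwargs': {'step_size': 500, 'gamma': 0.9}},
)
```

Driving the scheduler choice from configuration keeps the training loop unchanged when experimenting with different decay policies.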
1. Change ReLU to Leaky ReLU in decoder
2. Add 8-scale-pyramid in HPM
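The ReLU-to-Leaky-ReLU change in the decoder can be sketched as below; channel counts and the negative slope are illustrative assumptions, and Leaky ReLU keeps gradients flowing for negative pre-activations:

```python
import torch
import torch.nn as nn

# One decoder block after the change: the activation is LeakyReLU
# instead of ReLU (sizes here are illustrative only).
decoder_block = nn.Sequential(
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(32),
    nn.LeakyReLU(0.2, inplace=True),  # was nn.ReLU(inplace=True)
)

x = torch.randn(2, 64, 8, 8)
y = decoder_block(x)  # upsamples 8x8 -> 16x16
```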
1. Decode features outside of auto-encoder
2. Turn off HPM 1x1 conv by default
3. Change canonical feature map size from `feature_channels * 8 x 4 x 2` to `feature_channels * 2 x 16 x 8`
4. Use mean of canonical embeddings instead of mean of static features
5. Calculate static and dynamic loss separately
6. Calculate mean of parts in triplet loss instead of sum of parts
7. Add switch to log disentangled images
8. Change default configuration
According to [1], GAP and GMP can be used together, or either one alone for an ablation study.
[1] Y. Fu et al., "Horizontal pyramid matching for person re-identification," in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, vol. 33, pp. 8295–8302.
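A minimal sketch of that pooling choice, assuming each horizontal strip is a 4-D feature tensor; the function name `pool_strip` and the shapes are illustrative, not from the repository:

```python
import torch
import torch.nn.functional as F

def pool_strip(x, mode='gap+gmp'):
    """Pool one horizontal strip (N, C, H, W) down to (N, C) using
    global average pooling, global max pooling, or their sum."""
    gap = F.adaptive_avg_pool2d(x, 1).flatten(1)
    gmp = F.adaptive_max_pool2d(x, 1).flatten(1)
    if mode == 'gap':
        return gap
    if mode == 'gmp':
        return gmp
    return gap + gmp  # combined, as suggested by [1]

x = torch.randn(4, 128, 4, 11)
feat = pool_strip(x)  # (4, 128)
```

Exposing `mode` as a switch makes the GAP-only and GMP-only ablations a one-line configuration change.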
1. Wrap fully connected layers
2. Introduce hyperparameter tuning in constructor
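A wrapped fully connected layer with its hyperparameters fixed in the constructor might look like the sketch below; the class name `BasicLinear` and its internals are assumptions for illustration:

```python
import torch
import torch.nn as nn

class BasicLinear(nn.Module):
    """Hypothetical FC wrapper: linear projection followed by batch
    normalization, with bias disabled since BN supplies the shift."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features, bias=False)
        self.bn = nn.BatchNorm1d(out_features)

    def forward(self, x):
        return self.bn(self.fc(x))

out = BasicLinear(256, 128)(torch.randn(8, 256))
```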
1. Add batch normalization and activation to layers
2. VGGConv2d and FocalConv2d inherit from BasicConv2d; DCGANConvTranspose2d inherits from BasicConvTranspose2d
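The layer hierarchy described above can be sketched like this; the exact signatures are assumptions, but the pattern is a base conv layer that bundles batch normalization and an activation, with the specialized layers fixing kernel hyperparameters via inheritance:

```python
import torch
import torch.nn as nn

class BasicConv2d(nn.Module):
    """Base layer: conv (no bias) + batch norm + activation."""
    def __init__(self, in_ch, out_ch, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, bias=False, **kwargs)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class VGGConv2d(BasicConv2d):
    """Derived layer: fixes the 3x3, padding-1 configuration."""
    def __init__(self, in_ch, out_ch):
        super().__init__(in_ch, out_ch, kernel_size=3, padding=1)

y = VGGConv2d(3, 64)(torch.randn(2, 3, 32, 32))
```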
1. Wrap 3x3 Conv2d with padding 1 into VGGConv2d
2. Wrap 4x4 ConvTranspose2d with stride 4 and padding 1 into DCGANConvTranspose2d
3. Turn off bias in conv layers, since batch normalization makes it redundant
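The transposed-conv wrapper might look like the sketch below; the kernel/stride/padding values follow the commit text, and the class internals are assumptions. Since every conv is followed by batch normalization, the conv bias is disabled (BN's learned shift makes it redundant):

```python
import torch
import torch.nn as nn

class DCGANConvTranspose2d(nn.Module):
    """Hypothetical wrapper fixing the 4x4, stride-4, padding-1 deconv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4,
                                         stride=4, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return torch.relu(self.bn(self.deconv(x)))

# Output size follows (H_in - 1) * stride - 2 * padding + kernel_size.
y = DCGANConvTranspose2d(64, 32)(torch.randn(2, 64, 8, 8))
```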