# Conflicts:
# config.py
# models/hpm.py
# models/layers.py
# models/model.py
# models/part_net.py
# models/rgb_part_net.py
# test/part_net.py
# utils/configuration.py
# utils/triplet_loss.py
|
1. Separate FCs and triplet losses for HPM and PartNet
2. Remove FC-equivalent 1x1 conv layers in HPM
3. Support adjustable learning rate schedulers
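Item 3 usually means the scheduler class and its arguments are read from the configuration instead of being hard-coded. A minimal sketch, assuming a hypothetical `sched_config` dict rather than the actual keys in `utils/configuration.py`:

```python
import torch
from torch import nn, optim

# Hypothetical scheduler config; the real config.py keys may differ.
sched_config = {'type': 'StepLR', 'step_size': 500, 'gamma': 0.9}

model = nn.Linear(8, 2)  # stand-in for the real network
optimizer = optim.Adam(model.parameters(), lr=1e-4)

# Look the scheduler class up by name, so it is adjustable from the config.
scheduler_cls = getattr(optim.lr_scheduler, sched_config.pop('type'))
scheduler = scheduler_cls(optimizer, **sched_config)

for _ in range(3):
    optimizer.step()
    scheduler.step()
```

Because the class is resolved with `getattr`, switching to e.g. `ExponentialLR` is a pure config change.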
|
This reverts commit be508061
|
This update splits the input data into two batches, reducing memory usage by roughly 30%.
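A rough sketch of the idea, with `nn.Linear` standing in for the real network (the actual splitting logic in the repo may differ): running two half-batches roughly halves peak activation memory at the cost of two smaller forward passes.

```python
import torch
from torch import nn

model = nn.Linear(16, 4)  # stand-in for the real network

def forward_in_two_batches(x):
    # Two smaller forward passes instead of one large one:
    # peak activation memory is roughly halved.
    half = x.size(0) // 2
    out1 = model(x[:half])
    out2 = model(x[half:])
    return torch.cat([out1, out2], dim=0)

x = torch.randn(8, 16)
y = forward_in_two_batches(x)
```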
|
This is a huge performance optimization, up to 2x faster than before, mainly due to replacing a randomized for-loop with a randomized tensor operation.
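The loop-to-tensor replacement can be illustrated with random frame sampling; the shapes and function names below are illustrative, not the repo's actual code:

```python
import torch

n, t, c = 4, 30, 64            # batch, time, channels (made-up sizes)
feats = torch.randn(n, t, c)

# Slow: draw one random index per sample in a Python loop.
def sample_loop(feats):
    out = []
    for i in range(feats.size(0)):
        j = torch.randint(feats.size(1), ()).item()
        out.append(feats[i, j])
    return torch.stack(out)

# Fast: draw all indices at once and index the tensor directly.
def sample_vectorized(feats):
    idx = torch.randint(feats.size(1), (feats.size(0),))
    return feats[torch.arange(feats.size(0)), idx]
```

The vectorized version does one advanced-indexing operation instead of `n` Python iterations, which is where the speedup comes from.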
|
1. Decode features outside of auto-encoder
2. Turn off HPM 1x1 conv by default
3. Change canonical feature map size from `feature_channels * 8 x 4 x 2` to `feature_channels * 2 x 16 x 8`
4. Use mean of canonical embeddings instead of mean of static features
5. Calculate static and dynamic loss separately
6. Calculate mean of parts in triplet loss instead of sum of parts
7. Add switch to log disentangled images
8. Change default configuration
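Item 6 (mean of parts instead of sum) only rescales the loss, but it makes the loss magnitude independent of the number of parts. A toy sketch with made-up shapes:

```python
import torch

# Per-part triplet losses, shape (num_parts, batch); values are made up.
part_losses = torch.rand(16, 8)

loss_sum = part_losses.sum(dim=0).mean()    # old: sum over parts
loss_mean = part_losses.mean(dim=0).mean()  # new: mean over parts
```

With the mean, changing the part count (e.g. the HPM pyramid depth) no longer changes the scale of the loss relative to other terms.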
|
1. Turn off autograd while decoding canonical and pose features
2. Change default batch size to (4, 8)
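Item 1 is the standard `torch.no_grad()` pattern; the decoder below is a stand-in, not the repo's actual module. Decoding done only for logging or inspection needs no autograd graph, which saves memory and time:

```python
import torch
from torch import nn

decoder = nn.Linear(32, 64)  # stand-in for the canonical/pose decoder
embedding = torch.randn(4, 32, requires_grad=True)

# No gradients are needed for these decoded outputs,
# so skip building the autograd graph entirely.
with torch.no_grad():
    decoded = decoder(embedding)
```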
|
1. Features used in HPM are the decoded canonical embeddings, without transpose convolution
2. Decode pose embedding to image for PartNet
3. Backbone seems redundant; we can use the feature map given by the decoder of the auto-encoder
|
1. Triplet loss function and weight init function haven't been implemented yet
2. Tuplize features returned by auto-encoder for later unpacking
3. Correct comment error in auto-encoder
4. Swap batch_size dim and time dim in HPM and PartNet to avoid a redundant transpose
5. Find backbone problems in HPM and disable it temporarily
6. Make feature structure produced by HPM consistent with that of PartNet
7. Fix average pooling dimension issue and incorrect view change in HPM
|
1. Add default output channels of decoder
2. Replace deprecated torch.nn.functional.sigmoid with torch.sigmoid
|
1. Wrap fully connected layers
2. Introduce hyperparameter tuning in constructor
|
1. Add batch normalization and activation to layers
2. VGGConv2d and FocalConv2d inherit from BasicConv2d; DCGANConvTranspose2d inherits from BasicConvTranspose2d
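A sketch of such a layer hierarchy, assuming hypothetical signatures (the actual `models/layers.py` may use different kernel sizes, activations, and constructor arguments):

```python
import torch
from torch import nn
import torch.nn.functional as F

class BasicConv2d(nn.Module):
    """Conv2d + BatchNorm + activation; conv bias is off because
    batch normalization provides its own learnable shift."""
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              bias=False, **kwargs)
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)), inplace=True)

class VGGConv2d(BasicConv2d):
    """VGG-style 3x3 conv with padding 1, built on BasicConv2d."""
    def __init__(self, in_channels, out_channels):
        super().__init__(in_channels, out_channels, kernel_size=3, padding=1)

x = torch.randn(2, 3, 16, 16)
y = VGGConv2d(3, 8)(x)
```

The subclasses only pin down conv hyperparameters; normalization and activation live once in the base class.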
|
1. Make activation functions in-place operations
2. Change Leaky ReLU to ReLU in decoder
|
1. Wrap 3x3 Conv2d with padding 1 as VGGConv2d
2. Wrap 4x4 ConvTranspose2d with stride 4 and padding 1 as DCGANConvTranspose2d
3. Turn off bias in conv layers since batch normalization is employed