# Conflicts:
# models/model.py
# Conflicts:
# models/model.py
# utils/dataset.py
# Conflicts:
# models/model.py
1. Add new `train_all` method for one-shot calling
2. Print time used per 1k iterations
3. Correct label dimension in predict function
4. Transpose distance matrix for convenient indexing
5. Sort dictionary before generating signature
6. Extract visible CUDA setting function
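The extracted CUDA-visibility helper is not shown in the log; a minimal sketch of what such a function typically does (the name `set_visible_cuda` is hypothetical) is setting `CUDA_VISIBLE_DEVICES` before any CUDA context is created:

```python
import os

def set_visible_cuda(indices):
    """Restrict this process to the listed GPU indices by setting
    CUDA_VISIBLE_DEVICES before any CUDA context is created."""
    os.environ['CUDA_VISIBLE_DEVICES'] = ','.join(str(i) for i in indices)

set_visible_cuda([0, 2])
print(os.environ['CUDA_VISIBLE_DEVICES'])  # → 0,2
```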
# Conflicts:
# models/model.py
1. Resolve deprecated scheduler stepping issue
2. Put losses on the same scale (replace mean with sum in the separate triplet loss; enlarge pose similarity loss 10x)
3. Add ReLU when computing distance in triplet loss
4. Remove classes except Model from `models` package init
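The deprecated stepping issue is most likely the ordering warning introduced in PyTorch 1.1: `scheduler.step()` must be called after `optimizer.step()`. A minimal sketch of the corrected order (model and hyperparameters here are placeholders, not the project's):

```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for _ in range(2):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).sum()
    loss.backward()
    optimizer.step()    # optimizer first...
    scheduler.step()    # ...then the scheduler, the order required since PyTorch 1.1
```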
1. Add `disable_acc` switch for disabling the accelerator. When it is off, the system automatically chooses an accelerator.
2. Enable multi-GPU training using torch.nn.DataParallel
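The standard `torch.nn.DataParallel` pattern, sketched with a placeholder module (the project's actual model is not shown): wrap the module when more than one GPU is visible, and it will replicate itself and split each batch across devices.

```python
import torch

model = torch.nn.Linear(4, 2)
if torch.cuda.device_count() > 1:
    # Replicates the module and splits each input batch across visible GPUs
    model = torch.nn.DataParallel(model)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)
out = model(torch.randn(8, 4).to(device))
```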
1. Change initial weights for Conv layers
2. Find a way to init last fc in init_weights
A recursive `apply` would override other parameters too
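Since `Module.apply` recurses into every submodule, an unconditional init function would clobber parameters it was never meant to touch. A sketch of the usual guard (the layer layout here is illustrative, not the project's network):

```python
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.Conv2d(8, 4, 3))

def init_weights(m):
    # apply() visits every submodule, so guard on the layer type;
    # initializing unconditionally here would override the BatchNorm
    # parameters too.
    if isinstance(m, nn.Conv2d):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

net.apply(init_weights)
```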
1. Features used in HPM are the decoded canonical embedding, without transpose convolution
2. Decode pose embedding to image for PartNet
3. The backbone seems redundant; we can use the feature map given by the auto-encoder
1. Triplet loss function and weight init function haven't been implemented yet
2. Tuplize features returned by the auto-encoder for later unpacking
3. Correct comment error in auto-encoder
4. Swap batch_size dim and time dim in HPM and PartNet to avoid a redundant transpose
5. Find backbone problems in HPM and disable it temporarily
6. Make the feature structure produced by HPM consistent with that of PartNet
7. Fix average pooling dimension issue and incorrect view change in HP
1. Register the list of torch.nn.Module to the network using torch.nn.ModuleList
2. Fix operation error when squeezing a list of tensors
3. Replace squeeze with view in HP, in case the batch size is 1
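The squeeze-vs-view hazard is easy to reproduce: with a batch of size 1, a bare `squeeze()` removes the batch dimension along with the spatial ones. A sketch (the `(1, 128, 1, 1)` shape is illustrative, e.g. a globally pooled feature map):

```python
import torch

pooled = torch.randn(1, 128, 1, 1)          # (n, c, 1, 1) with batch size 1
squeezed = pooled.squeeze()                 # drops ALL size-1 dims, batch dim included
viewed = pooled.view(pooled.size(0), -1)    # keeps the batch dim explicitly
```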
1. Make the FocalConv block capable of processing frames in all batches
2. Correct input dims of TFA and output dims of HP
3. Change torch.unsqueeze and torch.cat to torch.stack
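The unsqueeze-plus-cat pattern and `torch.stack` produce identical results; stack is simply the idiomatic one-call form. A sketch with illustrative frame shapes:

```python
import torch

frames = [torch.randn(3, 64, 32) for _ in range(30)]
a = torch.cat([f.unsqueeze(0) for f in frames], dim=0)  # old pattern
b = torch.stack(frames, dim=0)                          # equivalent, clearer
```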
According to [1], we can use GAP and GMP together, or either one alone in an ablation study.
[1] Y. Fu et al., "Horizontal pyramid matching for person re-identification," in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, vol. 33, pp. 8295–8302.
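A minimal sketch of combining global average pooling and global max pooling over a feature map, assuming element-wise summation of the two (the feature shape here is illustrative):

```python
import torch

feat = torch.randn(2, 128, 16, 8)   # (n, c, h, w) feature map
gap = feat.mean(dim=(2, 3))         # global average pooling
gmp = feat.amax(dim=(2, 3))         # global max pooling
combined = gap + gmp                # use both together, per [1]
```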
We can disentangle features from different subjects, but cannot do it across different temporal orders
1. Configuration parsers
2. Model signature generator
1. Add default output channels of decoder
2. Replace deprecated torch.nn.functional.sigmoid with torch.sigmoid
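The sigmoid replacement is a one-line change; `torch.nn.functional.sigmoid` was deprecated in favor of `torch.sigmoid`, which computes the same function:

```python
import torch

x = torch.randn(4)
# torch.nn.functional.sigmoid is deprecated; torch.sigmoid is the replacement
y = torch.sigmoid(x)
```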
1. Wrap fully connected layers
2. Introduce hyperparameter tuning in constructor
Disentanglement cannot be performed on different subjects at the same time, so we need to load the `pr` subjects one by one. The batch splitter will return a pr-length list of tuples, each containing 2 dicts (with k-length lists of labels, conditions and views, and a k-length tensor of clip data) representing condition 1 and condition 2 respectively.
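The described structure can be sketched as follows; the function name `split_batch`, the dict keys, and the clip shape are all assumptions for illustration, not the project's actual splitter:

```python
import torch

pr, k = 4, 16  # pr subjects per batch, k clips per condition (assumed sizes)

def split_batch(subjects):
    # Hypothetical sketch: return a pr-length list of
    # (condition-1 dict, condition-2 dict) tuples.
    batch = []
    for label in subjects:
        cond1 = {'label': [label] * k, 'condition': ['c1'] * k,
                 'view': ['000'] * k, 'clip': torch.randn(k, 3, 64, 32)}
        cond2 = {'label': [label] * k, 'condition': ['c2'] * k,
                 'view': ['000'] * k, 'clip': torch.randn(k, 3, 64, 32)}
        batch.append((cond1, cond2))
    return batch

batch = split_batch(['s1', 's2', 's3', 's4'])
```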
1. Add batch normalization and activation to layers
2. VGGConv2d and FocalConv2d inherit from BasicConv2d; DCGANConvTranspose2d inherits from BasicConvTranspose2d
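A sketch of that hierarchy, assuming a conv + batch norm + ReLU base block (the exact activation and constructor signatures in the project may differ):

```python
import torch
import torch.nn as nn

class BasicConv2d(nn.Module):
    """Conv + batch norm + activation, the shared base block (sketch)."""
    def __init__(self, in_ch, out_ch, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, bias=False, **kwargs)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))

class VGGConv2d(BasicConv2d):
    """VGG-style 3x3 convolution built on the base block."""
    def __init__(self, in_ch, out_ch):
        super().__init__(in_ch, out_ch, kernel_size=3, padding=1)

y = VGGConv2d(3, 16)(torch.randn(2, 3, 32, 32))
```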