Commit log
1. Register lists of torch.nn.Module with the network using torch.nn.ModuleList
2. Fix an operation error when squeezing a list of tensors
3. Replace squeeze with view in HP, since squeeze would also drop the batch dimension when the batch size is 1 (sketched below)
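A minimal sketch of the two fixes above, assuming a list of per-strip heads as in a horizontal pyramid; the class name and tensor shapes are illustrative, not taken from the repository.

```python
import torch
import torch.nn as nn


class PyramidHead(nn.Module):
    def __init__(self, num_strips: int = 4, in_channels: int = 128, out_channels: int = 64):
        super().__init__()
        # A plain Python list would hide these submodules from .parameters()
        # and .to(device); nn.ModuleList registers each head with the network.
        self.heads = nn.ModuleList(
            nn.Linear(in_channels, out_channels) for _ in range(num_strips)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_strips, in_channels)
        outs = [head(x[:, i]) for i, head in enumerate(self.heads)]
        return torch.stack(outs, dim=1)


# squeeze() vs view(): with a batch size of 1, squeeze() would also drop the
# batch dimension, while view() keeps it explicit.
t = torch.randn(1, 8, 1, 16)
flat = t.view(t.size(0), -1)   # shape (1, 128), even when the batch size is 1
```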
1. Make the FocalConv block capable of processing frames in all batches
2. Correct the input dims of TFA and the output dims of HP
3. Replace torch.unsqueeze + torch.cat with torch.stack (sketched below)
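A minimal sketch of the torch.stack replacement in item 3; the tensor shapes are made up for illustration.

```python
import torch

# A list of per-frame feature maps, e.g. from processing each frame separately.
frames = [torch.randn(4, 32, 16, 11) for _ in range(30)]  # 30 frames, batch size 4

# Before: unsqueeze each tensor and concatenate along the new dimension.
stacked_old = torch.cat([f.unsqueeze(1) for f in frames], dim=1)

# After: torch.stack adds the new dimension and concatenates in one call.
stacked_new = torch.stack(frames, dim=1)      # (4, 30, 32, 16, 11)

assert torch.equal(stacked_old, stacked_new)
```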
According to [1], we can use GAP and GMP together, or either one alone as an ablation.
[1] Y. Fu et al., “Horizontal pyramid matching for person re-identification,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, vol. 33, pp. 8295–8302.
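A minimal sketch of combining global average pooling (GAP) and global max pooling (GMP) over horizontal strips in the spirit of [1]; the function name and shapes are illustrative.

```python
import torch


def horizontal_pool(feature_map: torch.Tensor, num_strips: int) -> torch.Tensor:
    """Split a feature map into horizontal strips and pool each strip with
    GAP + GMP; using only one of the two is the ablation variant."""
    n, c, h, w = feature_map.size()
    strips = feature_map.view(n, c, num_strips, -1)   # each strip: (h / num_strips) * w pixels
    gap = strips.mean(dim=-1)                         # global average pooling per strip
    gmp = strips.max(dim=-1).values                   # global max pooling per strip
    return gap + gmp                                  # (n, c, num_strips)


x = torch.randn(8, 256, 16, 11)
print(horizontal_pool(x, num_strips=4).shape)         # torch.Size([8, 256, 4])
```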
We can disentangle features from different subjects, but not at different temporal orders.
1. Configuration parsers
2. Model signature generator
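The repository's actual parser and generator are not shown in the log; below is a hypothetical sketch of what a model signature generator (item 2) might do, assuming the configuration is a nested dict and the signature is a short string used to name checkpoints.

```python
import hashlib
import json


def model_signature(config: dict) -> str:
    """Derive a short, deterministic signature from a (hypothetical) model
    configuration, e.g. for naming checkpoints and log directories."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()[:8]


cfg = {"encoder": {"channels": [64, 128, 256]}, "decoder": {"out_channels": 3}}
print(model_signature(cfg))   # stable 8-character hex string
```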
1. Add default output channels for the decoder
2. Replace the deprecated torch.nn.functional.sigmoid with torch.sigmoid (sketched below)
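A short illustration of item 2; torch.nn.functional.sigmoid is deprecated in favor of the tensor-level op torch.sigmoid.

```python
import torch

x = torch.randn(4, 3, 64, 64)

out = torch.sigmoid(x)                   # preferred tensor-level op
# out = torch.nn.functional.sigmoid(x)   # deprecated, emits a warning
```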
1. Wrap fully connected layers
2. Make hyperparameters tunable through the constructor (sketched below)
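A minimal sketch of a fully connected wrapper whose hyperparameters are set through the constructor; the class names, layer composition, and default values are assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn


class BasicLinear(nn.Module):
    """Hypothetical fully connected wrapper: linear layer + batch norm."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features, bias=False)
        self.bn = nn.BatchNorm1d(out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.bn(self.fc(x))


class Classifier(nn.Module):
    def __init__(self, in_features: int = 256, hidden: int = 128, num_classes: int = 10):
        # Hyperparameters come from the constructor instead of being
        # hard-coded inside the module body.
        super().__init__()
        self.net = nn.Sequential(
            BasicLinear(in_features, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```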
Disentanglement cannot be performed on different subjects at the same time, so we need to load the `pr` subjects one by one. The batch splitter returns a `pr`-length list of tuples; each tuple holds two dicts, one per condition (condition 1 and condition 2), and each dict contains k-length lists of labels, conditions, and views, plus a k-length tensor of clip data.
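A minimal sketch of such a batch splitter, under the assumption that the incoming batch is a flat list of pr × 2 × k sample dicts ordered subject by subject; every name and shape here is illustrative.

```python
import torch
from typing import Dict, List, Tuple

Sample = Dict[str, object]   # hypothetical per-sample record


def split_batch(samples: List[Sample], pr: int, k: int) -> List[Tuple[dict, dict]]:
    """Regroup a flat batch of pr * 2 * k sample dicts into a pr-length list of
    (condition-1 dict, condition-2 dict) tuples; each dict holds k-length lists
    of labels, conditions and views plus a stacked clip tensor."""
    batches = []
    for i in range(pr):
        per_subject = []
        for c in range(2):
            chunk = samples[(i * 2 + c) * k:(i * 2 + c + 1) * k]
            per_subject.append({
                "label": [s["label"] for s in chunk],
                "condition": [s["condition"] for s in chunk],
                "view": [s["view"] for s in chunk],
                "clip": torch.stack([s["clip"] for s in chunk]),  # (k, T, C, H, W)
            })
        batches.append((per_subject[0], per_subject[1]))
    return batches
```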
1. Add batch normalization and activation to the layer wrappers
2. Make VGGConv2d and FocalConv2d inherit from BasicConv2d, and DCGANConvTranspose2d inherit from BasicConvTranspose2d (sketched below)
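A minimal sketch of the inheritance structure described above, assuming the base wrapper bundles convolution, batch normalization, and activation; the constructor signatures are illustrative.

```python
import torch
import torch.nn as nn


class BasicConv2d(nn.Module):
    """Base wrapper: convolution + batch normalization + activation."""

    def __init__(self, in_channels: int, out_channels: int, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.bn(self.conv(x)))


class VGGConv2d(BasicConv2d):
    """Subclasses only fix the convolution geometry; FocalConv2d and the
    BasicConvTranspose2d/DCGANConvTranspose2d pair follow the same pattern."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__(in_channels, out_channels, kernel_size=3, padding=1)
```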
1. Make activation functions in-place ops
2. Change Leaky ReLU to ReLU in the decoder (sketched below)
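A one-line illustration of both points; the 0.2 negative slope shown for the previous activation is an assumption.

```python
import torch.nn as nn

# In-place activations modify the input tensor directly instead of
# allocating a second output tensor.
act = nn.ReLU(inplace=True)              # decoder activation after this change
# act = nn.LeakyReLU(0.2, inplace=True)  # previous decoder activation (slope assumed)
```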
1. Wrap Conv2d (3x3, padding 1) as VGGConv2d
2. Wrap ConvTranspose2d (4x4, stride 4, padding 1) as DCGANConvTranspose2d
3. Turn off bias in the conv layers, since batch normalization follows them (sketched below)
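A short illustration of item 3; the channel counts are arbitrary and the transposed-convolution geometry is taken verbatim from the commit message above.

```python
import torch.nn as nn

# Batch normalization applies its own learnable shift right after the
# convolution, so the convolution bias is redundant and can be disabled.
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False)
deconv = nn.ConvTranspose2d(64, 3, kernel_size=4, stride=4, padding=1, bias=False)
bn = nn.BatchNorm2d(64)
```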