path: root/models/auto_encoder.py
Age         Commit message                                                Author
2021-04-04  Merge branch 'disentangling_only' into disentangling_only_py3.8  (disentangling_only_py3.8)  Jordan Gong
2021-04-04  Add cross entropy loss  (disentangling_only)  Jordan Gong
2021-04-03  Merge branch 'disentangling_only' into disentangling_only_py3.8  Jordan Gong

    # Conflicts:
    #   models/model.py
2021-04-03  Merge branch 'master' into disentangling_only  Jordan Gong

    # Conflicts:
    #   config.py
    #   models/hpm.py
    #   models/layers.py
    #   models/model.py
    #   models/part_net.py
    #   models/rgb_part_net.py
    #   test/part_net.py
    #   utils/configuration.py
    #   utils/triplet_loss.py
2021-03-12  Code refactoring  Jordan Gong

    1. Separate FCs and triplet losses for HPM and PartNet
    2. Remove FC-equivalent 1x1 conv layers in HPM
    3. Support adjustable learning rate schedulers
2021-02-21  Remove FConv blocks  Jordan Gong
2021-02-20  Separate triplet loss from model  Jordan Gong
2021-02-18  Merge branch 'master' into python3.8  Jordan Gong
2021-02-18  Implement adjustable input size and change some default configs  Jordan Gong
2021-02-15  Revert "Memory usage improvement"  Jordan Gong

    This reverts commit be508061.
2021-02-15  Revert "Memory usage improvement"  Jordan Gong

    This reverts commit be508061.
2021-02-14  Merge branch 'master' into python3.8  Jordan Gong
2021-02-14  Memory usage improvement  Jordan Gong

    This update splits the input data into two batches, which reduces memory usage by about 30%.
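The idea behind that commit can be sketched as follows (the helper name, module, and sizes are assumptions, not the project's actual code): run one large batch through a module in smaller chunks and concatenate the results, so only part of the batch's activations is live at any moment. In training one would also backpropagate per chunk to get the full benefit.

```python
import torch

def forward_in_chunks(module, x: torch.Tensor, chunks: int = 2) -> torch.Tensor:
    # Process the batch in `chunks` pieces; torch.cat restores the full batch.
    return torch.cat([module(part) for part in x.chunk(chunks)], dim=0)

layer = torch.nn.Linear(8, 4)
x = torch.randn(16, 8)
out = forward_in_chunks(layer, x)
print(out.shape)  # torch.Size([16, 4])
```

For a deterministic module such as `Linear`, the chunked result matches the single-pass result exactly.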
2021-02-09  Merge branch 'master' into python3.8  Jordan Gong

    # Conflicts:
    #   models/rgb_part_net.py
2021-02-09  Improve performance when disentangling  Jordan Gong

    This is a huge performance optimization, up to 2x faster than before, mainly due to replacing a randomized for-loop with a randomized index tensor.
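A minimal sketch of this kind of optimization (function names and shapes are assumptions): instead of looping over the batch in Python to pick random frames per sample, build one randomized index tensor and gather all samples at once.

```python
import torch

def pick_frames_loop(x: torch.Tensor, n: int) -> torch.Tensor:
    # Slow path: one randperm and one indexing op per sample.
    out = []
    for clip in x:  # x: (batch, time, features)
        idx = torch.randperm(clip.size(0))[:n]
        out.append(clip[idx])
    return torch.stack(out)

def pick_frames_tensor(x: torch.Tensor, n: int) -> torch.Tensor:
    # Fast path: argsort of uniform noise yields an independent random
    # permutation per row; a single gather selects n frames per sample.
    batch, time, feat = x.shape
    idx = torch.rand(batch, time).argsort(dim=1)[:, :n]
    return x.gather(1, idx.unsqueeze(-1).expand(-1, -1, feat))

x = torch.randn(4, 30, 8)
print(pick_frames_tensor(x, 10).shape)  # torch.Size([4, 10, 8])
```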
2021-02-08  Merge branch 'master' into python3.8  Jordan Gong

    # Conflicts:
    #   models/hpm.py
    #   models/layers.py
    #   models/model.py
    #   models/rgb_part_net.py
    #   utils/configuration.py
2021-02-08  Code refactoring, modifications and new features  Jordan Gong

    1. Decode features outside of the auto-encoder
    2. Turn off HPM 1x1 conv by default
    3. Change canonical feature map size from `feature_channels * 8 x 4 x 2` to `feature_channels * 2 x 16 x 8`
    4. Use the mean of canonical embeddings instead of the mean of static features
    5. Calculate static and dynamic loss separately
    6. Calculate the mean of parts in triplet loss instead of the sum of parts
    7. Add a switch to log disentangled images
    8. Change the default configuration
2021-01-23  Remove the third term in canonical consistency loss  Jordan Gong
2021-01-21  Merge branch 'master' into python3.8  Jordan Gong

    # Conflicts:
    #   utils/configuration.py
2021-01-21  Bug fixes  Jordan Gong

    1. Turn off autograd while decoding canonical and pose features
    2. Change the default batch size to (4, 8)
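Fix 1 can be sketched as below (the decoder here is a stand-in `Linear`, not the project's decoder): decoding canonical and pose features for logging or evaluation needs no gradients, so the call is wrapped in `torch.no_grad()` to skip autograd graph construction.

```python
import torch

decoder = torch.nn.Linear(16, 32)  # stand-in for the real decoder
embedding = torch.randn(4, 16, requires_grad=True)

# Inside no_grad, no computation graph is recorded, saving memory and time.
with torch.no_grad():
    decoded = decoder(embedding)

print(decoded.requires_grad)  # False
```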
2021-01-12  Merge branch 'master' into python3.8  Jordan Gong

    # Conflicts:
    #   models/model.py
2021-01-09  Change auto-encoder input in evaluation  Jordan Gong
2021-01-07  Type hints for Python versions lower than 3.9  Jordan Gong
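The compatibility issue behind this commit: before Python 3.9 (PEP 585), built-in container types cannot be subscripted in annotations, so `tuple[int, int]` must be written with `typing.Tuple`. A small illustrative example (the function itself is hypothetical, not from the repo):

```python
from typing import Tuple  # required below Python 3.9

def min_max(values: Tuple[int, ...]) -> Tuple[int, int]:
    # On 3.9+, the annotations could instead be tuple[int, ...] and tuple[int, int].
    return min(values), max(values)

print(min_max((3, 1, 4, 1, 5)))  # (1, 5)
```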
2021-01-06  Add CUDA support  Jordan Gong
2021-01-03  Delete dead training judge  Jordan Gong
2021-01-02  Separate training and evaluating  Jordan Gong
2021-01-02  Correct feature dims after disentanglement and HPM backbone removal  Jordan Gong

    1. Features used in HPM are the decoded canonical embedding without transposed convolution
    2. Decode the pose embedding to an image for PartNet
    3. The backbone seems redundant; we can use the feature map given by the auto-decoder
2020-12-31  Implement some parts of the RGB-GaitPart wrapper  Jordan Gong

    1. Triplet loss function and weight init function haven't been implemented yet
    2. Tuplize features returned by the auto-encoder for later unpacking
    3. Correct a comment error in the auto-encoder
    4. Swap the batch_size dim and time dim in HPM and PartNet to avoid a redundant transpose
    5. Found backbone problems in HPM and disabled it temporarily
    6. Make the feature structure produced by HPM consistent with that of PartNet
    7. Fix an average pooling dimension issue and an incorrect view change in HPM
2020-12-29  Return canonical features at condition 1 for later aggregation  Jordan Gong
2020-12-28  Wrap the auto-encoder, return 3 losses at t2  Jordan Gong
2020-12-27  Fix inconsistency and API deprecation issues in decoder  Jordan Gong

    1. Add default output channels to the decoder
    2. Replace the deprecated torch.nn.functional.sigmoid with torch.sigmoid
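Fix 2 in isolation: `torch.nn.functional.sigmoid` was deprecated, and `torch.sigmoid` is the drop-in replacement, computing `1 / (1 + exp(-x))` element-wise. A one-line demonstration (the input values are arbitrary):

```python
import torch

x = torch.tensor([-2.0, 0.0, 2.0])
y = torch.sigmoid(x)  # replaces the deprecated torch.nn.functional.sigmoid
print(y[1].item())    # 0.5, since sigmoid(0) = 1 / (1 + exp(0))
```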
2020-12-27  Refine auto-encoder  Jordan Gong

    1. Wrap fully connected layers
    2. Introduce hyperparameter tuning in the constructor
2020-12-24  Optimize imports  Jordan Gong
2020-12-24  Change the usage of layers and reorganize relations between layers  Jordan Gong

    1. Add batch normalization and activation to layers
    2. VGGConv2d and FocalConv2d inherit from BasicConv2d; DCGANConvTranspose2d inherits from BasicConvTranspose2d
2020-12-23  Modify activation functions after conv or trans-conv in auto-encoder  Jordan Gong

    1. Make activation functions in-place ops
    2. Change Leaky ReLU to ReLU in the decoder
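What change 1 means in practice: an in-place activation overwrites its input buffer instead of allocating a new tensor, trading reusability of the input for memory. A minimal sketch:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 4)
relu = nn.ReLU(inplace=True)  # writes the result directly into x's storage
y = relu(x)
print(y is x)  # True: the activation returned the same, mutated tensor
```

In-place activations are safe here because the pre-activation values are not needed again; they would error on leaf tensors that require grad.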
2020-12-23  Refactor and refine auto-encoder  Jordan Gong

    1. Wrap 3x3 Conv2d with padding 1 into VGGConv2d
    2. Wrap 4x4 ConvTranspose2d with stride 4 and padding 1 into DCGANConvTranspose2d
    3. Turn off bias in conv layers since batch normalization is employed
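A minimal sketch of such a wrapper (the project's actual layers differ; the BatchNorm and ReLU here follow the later commits that add normalization and activation to layers): a fixed 3x3/padding-1 convolution with `bias=False`, since the following `BatchNorm2d` already supplies a learned shift, making the conv bias redundant.

```python
import torch
import torch.nn as nn

class VGGConv2d(nn.Module):
    """3x3 conv, padding 1, no bias, followed by batch norm and ReLU."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels,
                              kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

x = torch.randn(2, 3, 64, 32)
print(VGGConv2d(3, 64)(x).shape)  # torch.Size([2, 64, 64, 32])
```

With kernel 3 and padding 1, spatial dimensions are preserved, which is why only the channel count changes in the output shape.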
2020-12-23  Reshape feature before decoding  Jordan Gong
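The reshape step that commit refers to can be sketched as follows (channel and size numbers are assumptions for illustration): the flat embedding produced by fully connected layers is viewed back into a `(batch, C, H, W)` feature map before entering the convolutional decoder.

```python
import torch

embedding = torch.randn(4, 256 * 4 * 2)      # flat output of an FC layer
feature_map = embedding.view(-1, 256, 4, 2)  # conv-friendly (N, C, H, W) layout
print(feature_map.shape)  # torch.Size([4, 256, 4, 2])
```

`view` only reinterprets the existing storage, so no data is copied.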
2020-12-23  Split modules into different files  Jordan Gong