Age | Commit message | Author
---|---|---
2021-03-05 | Calculate losses outside modules | Jordan Gong
2021-03-04 | Merge branch 'master' into data_parallel | Jordan Gong
2021-03-04 | Replace detach with no_grad in evaluation | Jordan Gong
2021-03-04 | Set seed for reproducibility | Jordan Gong
2021-03-02 | Merge branch 'master' into data_parallel | Jordan Gong
2021-03-02 | Fix DataParallel-specific bugs | Jordan Gong
2021-03-02 | Record learning rate every step | Jordan Gong
2021-03-02 | Fix bugs in new scheduler | Jordan Gong
2021-03-01 | Bug fixes | Jordan Gong
2021-03-01 | Merge branch 'master' into data_parallel<br># Conflicts: models/model.py | Jordan Gong
2021-03-01 | New scheduler and new config | Jordan Gong
2021-03-01 | Move pairs variable to local | Jordan Gong
2021-02-28 | Implement sum of loss default in [1]<br>[1] A. Hermans, L. Beyer, and B. Leibe, “In defense of the triplet loss for person re-identification,” arXiv preprint arXiv:1703.07737, 2017. | Jordan Gong
2021-02-28 | Log n-ile embedding distance and norm | Jordan Gong
2021-02-28 | Modify default parameters<br>1. Change ReLU to Leaky ReLU in decoder<br>2. Add 8-scale-pyramid in HPM | Jordan Gong
2021-02-27 | Implement Batch Hard triplet loss and soft margin | Jordan Gong
2021-02-26 | Merge branch 'master' into data_parallel<br># Conflicts: models/model.py | Jordan Gong
2021-02-26 | Fix predict function | Jordan Gong
2021-02-21 | Remove FConv blocks | Jordan Gong
2021-02-20 | Separate triplet loss from model | Jordan Gong
2021-02-20 | Merge branch 'master' into data_parallel<br># Conflicts: models/model.py | Jordan Gong
2021-02-20 | Separate triplet loss from model | Jordan Gong
2021-02-19 | Correct cross reconstruction loss calculated in DataParallel | Jordan Gong
2021-02-19 | Merge branch 'master' into data_parallel | Jordan Gong
2021-02-19 | Allow evaluating unfinished models | Jordan Gong
2021-02-18 | Merge branch 'master' into data_parallel | Jordan Gong
2021-02-18 | Implement adjustable input size and change some default configs | Jordan Gong
2021-02-18 | Remove 1x1 conv layers when not used | Jordan Gong
2021-02-18 | Decode mean appearance feature | Jordan Gong
2021-02-18 | Decode mean appearance feature | Jordan Gong
2021-02-16 | Merge branch 'master' into data_parallel | Jordan Gong
2021-02-16 | Split transform and evaluate method | Jordan Gong
2021-02-15 | Add DataParallel support on new codebase | Jordan Gong
2021-02-15 | Revert "Memory usage improvement"<br>This reverts commit be508061. | Jordan Gong
2021-02-14 | Memory usage improvement<br>This update separates the input data into two batches, reducing memory usage by ~30%. | Jordan Gong
2021-02-14 | Prepare for DataParallel | Jordan Gong
2021-02-10 | Save scheduler state_dict | Jordan Gong
2021-02-09 | Improve performance when disentangling<br>This is a HUGE performance optimization, up to 2x faster than before, mainly due to replacing a randomized for-loop with a randomized tensor. | Jordan Gong
2021-02-09 | Some optimizations<br>1. Scheduler decays the learning rate of the auto-encoder only<br>2. Write learning rate history to TensorBoard<br>3. Reduce image log frequency | Jordan Gong
2021-02-08 | Code refactoring, modifications and new features<br>1. Decode features outside of auto-encoder<br>2. Turn off HPM 1x1 conv by default<br>3. Change canonical feature map size from `feature_channels * 8 x 4 x 2` to `feature_channels * 2 x 16 x 8`<br>4. Use mean of canonical embeddings instead of mean of static features<br>5. Calculate static and dynamic loss separately<br>6. Calculate mean of parts in triplet loss instead of sum of parts<br>7. Add switch to log disentangled images<br>8. Change default configuration | Jordan Gong
2021-01-23 | Remove the third term in canonical consistency loss | Jordan Gong
2021-01-23 | Add late start support for non-disentangling parts | Jordan Gong
2021-01-23 | Transform all frames together in evaluation | Jordan Gong
2021-01-23 | Evaluation bug fixes and code review<br>1. Return full cached clip in evaluation<br>2. Add multi-iter checkpoint support in evaluation<br>3. Remove duplicated code while transforming | Jordan Gong
2021-01-22 | Handle unexpected restore iter<br>1. Skip finished model before loading it<br>2. Raise an error when restore iter is greater than total iter | Jordan Gong
2021-01-21 | Print average losses after 100 iters | Jordan Gong
2021-01-21 | Bug fixes<br>1. Turn off autograd while decoding canonical and pose features<br>2. Change default batch size to (4, 8) | Jordan Gong
2021-01-14 | Enable optimizer fine tuning | Jordan Gong
2021-01-14 | Remove DataParallel | Jordan Gong
2021-01-13 | Update config file and convert int to str when joining | Jordan Gong
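
The 2021-02-27 entry ("Implement Batch Hard triplet loss and soft margin") and the 2021-02-28 entry citing [1] refer to the batch-hard mining scheme of Hermans et al. Below is a minimal PyTorch sketch of that scheme for reference; the function name `batch_hard_triplet_loss` and its arguments are illustrative assumptions and are not taken from this repository's models/model.py.

```python
from typing import Optional

import torch
import torch.nn.functional as F


def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: Optional[float] = None) -> torch.Tensor:
    """Batch-hard triplet loss after Hermans et al. [1] (illustrative sketch).

    embeddings: (N, D) feature vectors; labels: (N,) identity labels.
    margin=None selects the soft-margin variant softplus(d_ap - d_an).
    """
    dist = torch.cdist(embeddings, embeddings)            # (N, N) Euclidean distances

    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)  # (N, N) same-identity mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same_id & ~eye                             # positives exclude the anchor itself
    neg_mask = ~same_id                                   # negatives: different identity

    # Hardest positive: farthest same-identity sample for each anchor.
    hardest_pos = (dist * pos_mask).max(dim=1).values
    # Hardest negative: closest different-identity sample; masked entries pushed to +inf.
    hardest_neg = dist.masked_fill(~neg_mask, float('inf')).min(dim=1).values

    if margin is None:
        losses = F.softplus(hardest_pos - hardest_neg)       # soft margin: log(1 + exp(x))
    else:
        losses = F.relu(hardest_pos - hardest_neg + margin)  # classic hinge with fixed margin

    return losses.sum()  # sum over anchors, the default suggested in [1]
```

With the (4, 8) batch layout mentioned in the 2021-01-21 entry, `labels` would hold 4 identities repeated 8 times each, giving a 32 × 32 distance matrix per loss computation.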
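
The 2021-03-04 entries ("Replace detach with no_grad in evaluation" and "Set seed for reproducibility") describe two generic PyTorch patterns. The sketch below illustrates them under assumed names (`set_seed`, `transform`, `model`, `loader`); it is not the project's actual evaluation code. Unlike detaching the outputs afterwards, running the forward pass under `no_grad` stops autograd from recording the graph at all, which lowers memory use during evaluation.

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    """Seed all RNGs the pipeline touches so runs are repeatable (assumed helper)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)            # CPU (and, on recent PyTorch, CUDA) generators
    torch.cuda.manual_seed_all(seed)   # explicit CUDA seeding for older versions


@torch.no_grad()                       # no autograd graph is built during evaluation
def transform(model: torch.nn.Module, loader) -> torch.Tensor:
    """Run the model over a loader and stack the embeddings (assumed signature)."""
    model.eval()                       # disable dropout, use running BatchNorm stats
    return torch.cat([model(batch) for batch in loader])
```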