Date | Commit message | Author
---|---|---
2021-03-25 | Merge branch 'master' into data_parallel (conflicts: models/model.py) | Jordan Gong
2021-03-25 | Bug fixes and refactoring: 1. correct trained model signature; 2. move `val_size` to system config | Jordan Gong
2021-03-23 | Fix indexing bugs in validation dataset selector | Jordan Gong
2021-03-22 | Add embedding visualization and validate on testing set | Jordan Gong
2021-03-16 | Set `*_iter` as `*_iters` in default | Jordan Gong
2021-03-15 | Remove redundant wrapper given by dataloader | Jordan Gong
2021-03-15 | Fix redundant `gallery_dataset_meta` assignment | Jordan Gong
2021-03-15 | Support transforming on training datasets | Jordan Gong
2021-03-12 | Merge branch 'master' into data_parallel | Jordan Gong
2021-03-12 | Fix a typo when recording non-zero counts | Jordan Gong
2021-03-12 | Merge branch 'master' into data_parallel (conflicts: models/auto_encoder.py, models/model.py, models/rgb_part_net.py) | Jordan Gong
2021-03-12 | Make evaluate method static | Jordan Gong
2021-03-12 | Code refactoring: 1. separate FCs and triplet losses for HPM and PartNet; 2. remove FC-equivalent 1x1 conv layers in HPM; 3. support adjustable learning rate schedulers | Jordan Gong
2021-03-10 | Merge branch 'master' into data_parallel | Jordan Gong
2021-03-10 | Bug fixes: 1. resolve reference problems when parsing dataset selectors; 2. transform gallery using different models | Jordan Gong
2021-03-05 | Calculate losses outside modules | Jordan Gong
2021-03-04 | Merge branch 'master' into data_parallel | Jordan Gong
2021-03-04 | Replace `detach` with `no_grad` in evaluation | Jordan Gong
2021-03-04 | Set seed for reproducibility | Jordan Gong
2021-03-02 | Merge branch 'master' into data_parallel | Jordan Gong
2021-03-02 | Fix DataParallel-specific bugs | Jordan Gong
2021-03-02 | Record learning rate every step | Jordan Gong
2021-03-02 | Fix bugs in new scheduler | Jordan Gong
2021-03-01 | Bug fixes | Jordan Gong
2021-03-01 | Merge branch 'master' into data_parallel (conflicts: models/model.py) | Jordan Gong
2021-03-01 | New scheduler and new config | Jordan Gong
2021-03-01 | Move pairs variable to local | Jordan Gong
2021-02-28 | Implement sum of loss default in [1] (A. Hermans, L. Beyer, and B. Leibe, "In defense of the triplet loss for person re-identification," arXiv preprint arXiv:1703.07737, 2017) | Jordan Gong
2021-02-28 | Log n-ile embedding distance and norm | Jordan Gong
2021-02-27 | Implement Batch Hard triplet loss and soft margin | Jordan Gong
2021-02-26 | Merge branch 'master' into data_parallel (conflicts: models/model.py) | Jordan Gong
2021-02-26 | Fix predict function | Jordan Gong
2021-02-20 | Separate triplet loss from model | Jordan Gong
2021-02-20 | Merge branch 'master' into data_parallel (conflicts: models/model.py) | Jordan Gong
2021-02-20 | Separate triplet loss from model | Jordan Gong
2021-02-19 | Correct cross-reconstruction loss calculated in DataParallel | Jordan Gong
2021-02-19 | Merge branch 'master' into data_parallel | Jordan Gong
2021-02-19 | Allow evaluating an unfinished model | Jordan Gong
2021-02-18 | Merge branch 'master' into data_parallel | Jordan Gong
2021-02-18 | Implement adjustable input size and change some default configs | Jordan Gong
2021-02-18 | Decode mean appearance feature | Jordan Gong
2021-02-16 | Merge branch 'master' into data_parallel | Jordan Gong
2021-02-16 | Split transform and evaluate method | Jordan Gong
2021-02-15 | Add DataParallel support on new codebase | Jordan Gong
2021-02-15 | Revert "Memory usage improvement" (reverts commit be508061) | Jordan Gong
2021-02-14 | Memory usage improvement: separate input data into two batches, reducing memory usage by ~30% | Jordan Gong
2021-02-14 | Prepare for DataParallel | Jordan Gong
2021-02-10 | Save scheduler `state_dict` | Jordan Gong
2021-02-09 | Improve performance when disentangling: a huge optimization, up to 2x faster than before, mainly from replacing the randomized for-loop with a randomized tensor | Jordan Gong
2021-02-09 | Some optimizations: 1. scheduler decays the learning rate of the auto-encoder only; 2. write learning rate history to TensorBoard; 3. reduce image log frequency | Jordan Gong
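
The 2021-03-04 entry replaces `detach` with `no_grad` in evaluation. A minimal generic PyTorch sketch of why that matters (the toy model and shapes are illustrative, not this repository's code): `detach()` still records the autograd graph during the forward pass and only drops it afterwards, while `torch.no_grad()` disables recording entirely, so evaluation spends no memory on activations.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
x = torch.randn(8, 4)

# detach(): the forward pass still builds the autograd graph,
# then discards it -- activation memory is spent anyway.
y_detach = model(x).detach()

# no_grad(): autograd recording is off for the whole forward
# pass, so no graph is built at all -- cheaper for evaluation.
with torch.no_grad():
    y_eval = model(x)

# Both variants yield the same values and neither tracks gradients.
assert torch.allclose(y_detach, y_eval)
assert not y_detach.requires_grad and not y_eval.requires_grad
```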
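
The 2021-02-27 entry implements Batch Hard triplet loss with a soft margin. A hedged sketch of that technique as described by Hermans et al. [1] — the function name, shapes, and labels below are illustrative and not taken from this repository:

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(emb, labels, margin=None):
    """Batch-hard triplet loss; margin=None selects the
    soft-margin variant log(1 + exp(d_pos - d_neg))."""
    dist = torch.cdist(emb, emb)                       # (B, B) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # positive-pair mask
    # Hardest positive: farthest sample sharing the label.
    d_pos = (dist * same.float()).max(dim=1).values
    # Hardest negative: closest sample with a different label.
    d_neg = dist.masked_fill(same, float('inf')).min(dim=1).values
    if margin is None:
        return F.softplus(d_pos - d_neg).mean()        # soft margin
    return F.relu(d_pos - d_neg + margin).mean()       # hard margin

emb = torch.randn(8, 16)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = batch_hard_triplet_loss(emb, labels)
assert loss.item() >= 0.0
```

The soft-margin form never clips to zero, so every batch keeps contributing a gradient even once the margin would be satisfied.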
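
The 2021-02-14 "Memory usage improvement" entry separates the input into two batches, which the log credits with a ~30% memory reduction. A generic sketch of the idea under a toy model (none of these names come from the repository):

```python
import torch
import torch.nn as nn

model = nn.Linear(32, 8)
batch = torch.randn(128, 32)

# Full-batch forward holds activations for all 128 samples at once.
full = model(batch)

# Running two half-batches sequentially halves the peak activation
# footprint of a single forward; outputs are concatenated afterwards.
halves = [model(chunk) for chunk in batch.chunk(2, dim=0)]
split = torch.cat(halves, dim=0)

# Per-row layers give identical results either way.
assert torch.allclose(full, split, atol=1e-6)
```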
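
The 2021-02-09 performance entry attributes its up-to-2x speedup to replacing a randomized for-loop with a randomized tensor. A hypothetical sketch of that vectorization pattern (shapes and function names are illustrative): drawing all random indices in one tensor and gathering with advanced indexing replaces a Python-level loop of single-element picks.

```python
import torch

torch.manual_seed(0)
batch, n, d = 64, 30, 128
feats = torch.randn(batch, n, d)

# Loop version: one random pick per sample, one Python iteration each.
def pick_loop(feats):
    out = []
    for i in range(feats.size(0)):
        j = torch.randint(feats.size(1), (1,)).item()
        out.append(feats[i, j])
    return torch.stack(out)

# Vectorized version: draw every random index at once, then gather
# with advanced indexing -- a single batched kernel launch.
def pick_vectorized(feats):
    idx = torch.randint(feats.size(1), (feats.size(0),))
    return feats[torch.arange(feats.size(0)), idx]

assert pick_loop(feats).shape == pick_vectorized(feats).shape == (batch, d)
```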