Age | Commit message | Author | |
---|---|---|---|
2021-03-25 | Bug fixes and refactoring | Jordan Gong | |
1. Correct trained model signature 2. Move `val_size` to system config | |||
2021-03-22 | Add embedding visualization and validate on testing set | Jordan Gong | |
2021-03-14 | Fix unbalanced datasets | Jordan Gong | |
2021-03-12 | Code refactoring | Jordan Gong | |
1. Separate FCs and triplet losses for HPM and PartNet 2. Remove FC-equivalent 1x1 conv layers in HPM 3. Support adjustable learning rate schedulers | |||
2021-03-10 | Bug fixes | Jordan Gong | |
1. Resolve reference problems when parsing dataset selectors 2. Transform gallery using different models | |||
2021-03-01 | New scheduler and new config | Jordan Gong | |
2021-03-01 | Change flat distance calculation method | Jordan Gong | |
2021-03-01 | Remove identical sample in Batch All case | Jordan Gong | |
2021-02-28 | Implement the sum-of-losses default from [1] | Jordan Gong | |
[1]A. Hermans, L. Beyer, and B. Leibe, “In defense of the triplet loss for person re-identification,” arXiv preprint arXiv:1703.07737, 2017. | |||
2021-02-28 | Log n-ile embedding distance and norm | Jordan Gong | |
2021-02-27 | Implement Batch Hard triplet loss and soft margin | Jordan Gong | |
2021-02-20 | Separate triplet loss from model | Jordan Gong | |
2021-02-14 | Prepare for DataParallel | Jordan Gong | |
2021-02-10 | Implement new sampling technique mentioned in GaitPart[1] | Jordan Gong | |
[1]C. Fan et al., “GaitPart: Temporal Part-Based Model for Gait Recognition,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 14225–14233. | |||
2021-02-08 | Code refactoring, modifications and new features | Jordan Gong | |
1. Decode features outside of auto-encoder 2. Turn off HPM 1x1 conv by default 3. Change canonical feature map size from `feature_channels * 8 x 4 x 2` to `feature_channels * 2 x 16 x 8` 4. Use mean of canonical embeddings instead of mean of static features 5. Calculate static and dynamic loss separately 6. Calculate mean of parts in triplet loss instead of sum of parts 7. Add switch to log disentangled images 8. Change default configuration | |||
2021-01-23 | Add late start support for non-disentangling parts | Jordan Gong | |
2021-01-23 | Evaluation bug fixes and code review | Jordan Gong | |
1. Return full cached clip in evaluation 2. Add multi-iter checkpoint support in evaluation 3. Remove duplicated transform code | |||
2021-01-14 | Enable optimizer fine-tuning | Jordan Gong | |
2021-01-13 | Add multiple checkpoints for different models and set default config values | Jordan Gong | |
2021-01-12 | Some changes in hyperparameter config | Jordan Gong | |
1. Separate hyperparameter configs into model, optimizer and scheduler 2. Add more tunable hyperparameters for optimizer and scheduler | |||
2021-01-12 | Some type hint fixes | Jordan Gong | |
2021-01-11 | Add evaluation script, code review and fix some bugs | Jordan Gong | |
1. Add new `train_all` method for one-shot calling 2. Print time used per 1k iterations 3. Correct label dimension in predict function 4. Transpose distance matrix for convenient indexing 5. Sort dictionary before generating signature 6. Extract visible CUDA setting function | |||
2021-01-11 | Implement evaluator | Jordan Gong | |
2021-01-09 | Fix NaN when separate sum is zero | Jordan Gong | |
2021-01-07 | Add typical training script and some bug fixes | Jordan Gong | |
1. Resolve deprecated scheduler stepping issue 2. Put losses on the same scale (replace mean with sum in separate triplet loss, enlarge pose similarity loss 10x) 3. Add ReLU when computing distance in triplet loss 4. Remove all classes except Model from `models` package init | |||
2021-01-07 | Change device config and enable multi-GPU computing | Jordan Gong | |
1. Add `disable_acc` switch for disabling the accelerator; when it is off, the system chooses an accelerator automatically 2. Enable multi-GPU training using torch.nn.DataParallel | |||
2021-01-05 | Implement Batch All Triplet Loss | Jordan Gong | |
2021-01-03 | Update hyperparameter configuration, implement prototype fit function | Jordan Gong | |
2020-12-31 | Make superclass constructor invocation consistent | Jordan Gong | |
2020-12-29 | Add type hint for new label (numpy.int64) | Jordan Gong | |
2020-12-29 | Encode class names to labels and improve member access | Jordan Gong | |
1. Encode class names using LabelEncoder from sklearn 2. Remove unneeded class variables 3. Protect some variables from being accessed by user code | |||
2020-12-27 | Prepare for FVG dataset | Jordan Gong | |
2020-12-27 | Make naming scheme consistent | Jordan Gong | |
Use `dir` instead of `path` | |||
2020-12-27 | Add dataset selector to config type hint and fix `ClipLabels` typo (now `ClipViews`) | Jordan Gong | |
2020-12-27 | Adopt type hinting generics in standard collections (PEP 585) | Jordan Gong | |
2020-12-26 | Sample k more clips for disentanglement | Jordan Gong | |
2020-12-26 | Add config file and corresponding type hint | Jordan Gong | |
2020-12-26 | Combine transformed height and width to `frame_size` | Jordan Gong | |
2020-12-21 | Change image loading technique | Jordan Gong | |
1. Use Pillow.Image.open instead of torchvision.io.read_image to read images 2. Transform PIL images instead of tensors, which performs better; the device option is removed 3. Images are now normalized | |||
2020-12-19 | 1. Delete unused transform function | Jordan Gong | |
2. Reorganize the initialization cache dicts | |||
2020-12-19 | Fix indexing error when there is no clip to discard | Jordan Gong | |
2020-12-19 | Add cache switch, allowing all data to be loaded into RAM before sampling | Jordan Gong | |
2020-12-18 | Implement triplet sampler | Jordan Gong | |
2020-12-18 | Implement CASIA-B dataset | Jordan Gong | |
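Several entries above (2021-01-05, 2021-02-27, 2021-02-28) concern the Batch All and Batch Hard triplet losses from Hermans et al. [1]. For reference, below is a minimal PyTorch sketch of the batch-hard variant with the optional soft margin; it illustrates the published formulation only, not the code committed in this repository, and the function name, signature, and the P x K batch assumption are mine.

```python
from typing import Optional

import torch
import torch.nn.functional as F


def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: Optional[float] = None) -> torch.Tensor:
    """Batch Hard triplet loss after Hermans et al. [1] (illustrative sketch).

    `embeddings` is an (N, D) float tensor, `labels` an (N,) integer tensor.
    With `margin=None` the soft-margin form softplus(d_ap - d_an) is used,
    otherwise the hinge form relu(d_ap - d_an + margin).
    Assumes a P x K batch, i.e. every label appears at least twice.
    """
    dist = torch.cdist(embeddings, embeddings)            # (N, N) pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # (N, N) same-label mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Hardest positive: farthest sample with the same label (self excluded).
    hardest_pos = dist.masked_fill(~same | eye, float('-inf')).max(dim=1).values
    # Hardest negative: closest sample with a different label.
    hardest_neg = dist.masked_fill(same, float('inf')).min(dim=1).values

    if margin is None:
        return F.softplus(hardest_pos - hardest_neg).mean()
    return F.relu(hardest_pos - hardest_neg + margin).mean()
```

The soft-margin form avoids the zero-gradient plateau of the hinge once the negative is far enough away, which is why [1] pairs it with Batch Hard mining, as the 2021-02-27 entry does.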