path: root/utils
Age         Commit message (Author)
2021-04-07  Revert cross-reconstruction loss factor and make image log steps adjustable (Jordan Gong)
2021-03-25  Bug fixes and refactoring (Jordan Gong)
    1. Correct trained model signature
    2. Move `val_size` to system config
2021-03-22  Add embedding visualization and validate on testing set (Jordan Gong)
2021-03-14  Fix unbalanced datasets (Jordan Gong)
2021-03-12  Code refactoring (Jordan Gong)
    1. Separate FCs and triplet losses for HPM and PartNet
    2. Remove FC-equivalent 1x1 conv layers in HPM
    3. Support adjustable learning rate schedulers
2021-03-10  Bug fixes (Jordan Gong)
    1. Resolve reference problems when parsing dataset selectors
    2. Transform gallery using different models
2021-03-01  New scheduler and new config (Jordan Gong)
2021-03-01  Change flat distance calculation method (Jordan Gong)
2021-03-01  Remove identical sample in Batch All case (Jordan Gong)
2021-02-28  Implement sum of loss default in [1] (Jordan Gong)
    [1] A. Hermans, L. Beyer, and B. Leibe, “In defense of the triplet loss for person re-identification,” arXiv preprint arXiv:1703.07737, 2017.
2021-02-28  Log n-ile embedding distance and norm (Jordan Gong)
2021-02-27  Implement Batch Hard triplet loss and soft margin (Jordan Gong)
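A minimal sketch of a batch-hard triplet loss with an optional soft margin (softplus in place of a fixed hinge), in the spirit of [1]; the function and tensor names are illustrative, not the repository's actual API:

    import torch
    import torch.nn.functional as F

    def batch_hard_triplet_loss(embeddings, labels, margin=None):
        # Pairwise Euclidean distances between all embeddings in the batch (n x n)
        dist = torch.cdist(embeddings, embeddings)
        same = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-identity mask
        # Hardest positive: farthest same-identity sample for each anchor
        hardest_pos = (dist * same.float()).max(dim=1).values
        # Hardest negative: closest different-identity sample for each anchor
        hardest_neg = torch.where(~same, dist,
                                  torch.full_like(dist, float('inf'))).min(dim=1).values
        if margin is None:
            # Soft margin: softplus(x) = log(1 + exp(x)) instead of a hard hinge
            return F.softplus(hardest_pos - hardest_neg).mean()
        return F.relu(hardest_pos - hardest_neg + margin).mean()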
2021-02-20  Separate triplet loss from model (Jordan Gong)
2021-02-14  Prepare for DataParallel (Jordan Gong)
2021-02-10  Implement new sampling technique mentioned in GaitPart [1] (Jordan Gong)
    [1] C. Fan et al., “GaitPart: Temporal Part-Based Model for Gait Recognition,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 14225–14233.
2021-02-08  Code refactoring, modifications and new features (Jordan Gong)
    1. Decode features outside of auto-encoder
    2. Turn off HPM 1x1 conv by default
    3. Change canonical feature map size from `feature_channels * 8 x 4 x 2` to `feature_channels * 2 x 16 x 8`
    4. Use mean of canonical embeddings instead of mean of static features
    5. Calculate static and dynamic loss separately
    6. Calculate mean of parts in triplet loss instead of sum of parts
    7. Add switch to log disentangled images
    8. Change default configuration
2021-01-23  Add late start support for non-disentangling parts (Jordan Gong)
2021-01-23  Evaluation bug fixes and code review (Jordan Gong)
    1. Return full cached clip in evaluation
    2. Add multi-iter checkpoints support in evaluation
    3. Remove duplicated code while transforming
2021-01-14  Enable optimizer fine-tuning (Jordan Gong)
2021-01-13  Add multiple checkpoints for different models and set default config values (Jordan Gong)
2021-01-12  Some changes in hyperparameter config (Jordan Gong)
    1. Separate hyperparameter configs in model, optimizer and scheduler
    2. Add more tunable hyperparameters in optimizer and scheduler
2021-01-12  Some type hint fixes (Jordan Gong)
2021-01-11  Add evaluation script, code review and fix some bugs (Jordan Gong)
    1. Add new `train_all` method for one-shot calling
    2. Print time used per 1k iterations
    3. Correct label dimension in predict function
    4. Transpose distance matrix for convenient indexing
    5. Sort dictionary before generating signature
    6. Extract visible CUDA setting function
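Regarding item 5: hashing a dictionary serialized with unsorted keys gives unstable signatures across runs. A hypothetical sketch of a sorted-then-hashed config signature (the function name and hashing scheme are assumptions, not the repository's code):

    import hashlib
    import json

    def config_signature(config: dict) -> str:
        # Serialize with sorted keys so equal configs always produce the same hash
        canonical = json.dumps(config, sort_keys=True, default=str)
        return hashlib.md5(canonical.encode('utf-8')).hexdigest()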
2021-01-11  Implement evaluator (Jordan Gong)
2021-01-09  Fix NaN when separate sum is zero (Jordan Gong)
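Averaging a summed loss over the number of non-zero terms turns into 0 / 0 = NaN when every term is zero. A hedged sketch of the kind of guard such a fix typically adds; not the repository's exact code:

    import torch

    def mean_over_nonzero(losses: torch.Tensor) -> torch.Tensor:
        # Count the active (non-zero) terms; averaging over zero of them gives NaN
        active = (losses > 0).sum()
        if active == 0:
            return losses.sum() * 0.0   # zero loss, graph preserved, no NaN
        return losses.sum() / active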
2021-01-07  Add typical training script and some bug fixes (Jordan Gong)
    1. Resolve deprecated scheduler stepping issue (see the sketch below)
    2. Put losses on the same scale (replace mean with sum in separate triplet loss, enlarge pose similarity loss 10x)
    3. Add ReLU when computing distance in triplet loss
    4. Remove classes except Model from `models` package init
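Regarding item 1: since PyTorch 1.1.0, `scheduler.step()` must be called after `optimizer.step()`; the old order raises a deprecation warning and skips the first value of the learning-rate schedule. A minimal sketch of the corrected order with a placeholder model and loss:

    import torch
    import torch.nn.functional as F

    model = torch.nn.Linear(10, 2)                      # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.9)

    for step in range(1000):
        x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()    # step the optimizer first ...
        scheduler.step()    # ... then the scheduler (required order since PyTorch 1.1.0)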
2021-01-07  Change device config and enable multi-GPU computing (Jordan Gong)
    1. Add `disable_acc` switch for disabling the accelerator; when it is off, the system chooses an accelerator automatically
    2. Enable multi-GPU training using torch.nn.DataParallel (see the sketch below)
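A sketch of the device selection and `torch.nn.DataParallel` wrapping described above; the `setup_device` helper and its exact flag handling are assumptions for illustration:

    import torch

    def setup_device(model: torch.nn.Module, disable_acc: bool = False):
        # Fall back to CPU when the accelerator is disabled or unavailable
        if disable_acc or not torch.cuda.is_available():
            return model.to('cpu'), torch.device('cpu')
        if torch.cuda.device_count() > 1:
            model = torch.nn.DataParallel(model)   # replicate the model across visible GPUs
        return model.to('cuda'), torch.device('cuda')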
2021-01-05  Implement Batch All Triplet Loss (Jordan Gong)
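A minimal sketch of a batch-all triplet loss, which accumulates the hinge over every valid anchor-positive-negative combination in the batch; names and the margin value are illustrative, not the repository's code:

    import torch
    import torch.nn.functional as F

    def batch_all_triplet_loss(embeddings, labels, margin=0.2):
        dist = torch.cdist(embeddings, embeddings)            # (n, n) pairwise distances
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
        pos_mask, neg_mask = same & ~eye, ~same               # exclude the anchor itself
        # loss[a, p, n] = d(a, p) - d(a, n) + margin for every combination
        loss = dist.unsqueeze(2) - dist.unsqueeze(1) + margin
        valid = pos_mask.unsqueeze(2) & neg_mask.unsqueeze(1)
        loss = F.relu(loss[valid])
        active = (loss > 0).sum().clamp(min=1)                # avoid dividing by zero
        return loss.sum() / active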
2021-01-03  Update hyperparameter configuration, implement prototype fit function (Jordan Gong)
2020-12-31  Make superclass constructor invocation consistent (Jordan Gong)
2020-12-29  Add type hint for new label (numpy.int64) (Jordan Gong)
2020-12-29  Encode class names to labels and some access improvements (Jordan Gong)
    1. Encode class names using LabelEncoder from sklearn
    2. Remove unneeded class variables
    3. Protect some variables from being accessed in user code
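A small usage sketch of item 1, encoding class (subject) names to integer labels with scikit-learn's `LabelEncoder`; the sample names are made up:

    from sklearn.preprocessing import LabelEncoder

    class_names = ['001', '002', '003', '002', '001']   # hypothetical subject IDs
    encoder = LabelEncoder()
    labels = encoder.fit_transform(class_names)          # array([0, 1, 2, 1, 0]), int64 dtype
    names_back = encoder.inverse_transform(labels)       # recover the original names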
2020-12-27  Prepare for FVG dataset (Jordan Gong)
2020-12-27  Make naming scheme consistent (Jordan Gong)
    Use `dir` instead of `path`
2020-12-27  Add dataset selector to config type hint, change ClipLabels typo to ClipViews (Jordan Gong)
2020-12-27  Adopt type hinting generics in standard collections (PEP 585) (Jordan Gong)
2020-12-26  Sample k more clips for disentanglement (Jordan Gong)
2020-12-26  Add config file and corresponding type hint (Jordan Gong)
2020-12-26  Combine transformed height and width to `frame_size` (Jordan Gong)
2020-12-21  Change image loading technique (Jordan Gong)
    1. Use `PIL.Image.open` instead of `torchvision.io.read_image` to read images
    2. Transform PIL images instead of tensors, which performs better; the device option is removed
    3. Images are now normalized
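A sketch of the loading pipeline described above: open frames with PIL, then resize, convert and normalize via torchvision transforms; the frame size, file path and normalization statistics are placeholders:

    from PIL import Image
    from torchvision import transforms

    transform = transforms.Compose([
        transforms.Resize((64, 32)),                    # placeholder frame size
        transforms.ToTensor(),                          # PIL image -> float tensor in [0, 1]
        transforms.Normalize(mean=[0.5], std=[0.5]),    # placeholder statistics
    ])

    frame = Image.open('silhouette.png').convert('L')   # hypothetical path, grayscale silhouette
    tensor = transform(frame)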
2020-12-19  1. Delete unused transform function (Jordan Gong)
    2. Reorganize the initialization cache dicts
2020-12-19  Fix indexing error when no clip is to be discarded (Jordan Gong)
2020-12-19  Add cache switch, allowing all data to be loaded into RAM before sampling (Jordan Gong)
2020-12-18  Implement triplet sampler (Jordan Gong)
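A minimal sketch of a P x K batch sampler (P identities, K clips each per batch), a common way to feed triplet losses; the class interface and defaults are assumptions, not the repository's actual sampler:

    import random
    from collections import defaultdict
    from torch.utils.data import Sampler

    class TripletBatchSampler(Sampler):
        """Yield index batches of `num_ids` identities x `num_samples` clips each.

        Pass an instance to DataLoader via `batch_sampler=`.
        """

        def __init__(self, labels, num_ids=8, num_samples=4, batches_per_epoch=100):
            self.index_by_label = defaultdict(list)
            for index, label in enumerate(labels):
                self.index_by_label[label].append(index)
            self.num_ids = num_ids
            self.num_samples = num_samples
            self.batches_per_epoch = batches_per_epoch

        def __iter__(self):
            for _ in range(self.batches_per_epoch):
                batch = []
                # Pick P identities, then K clips (with replacement) from each
                for label in random.sample(list(self.index_by_label), self.num_ids):
                    batch += random.choices(self.index_by_label[label], k=self.num_samples)
                yield batch

        def __len__(self):
            return self.batches_per_epoch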
2020-12-18  Implement CASIA-B dataset (Jordan Gong)