path: root/models
2021-02-18  Decode mean appearance feature  (Jordan Gong)
2021-02-16  Split transform and evaluate method  (Jordan Gong)
2021-02-15  Revert "Memory usage improvement"  (Jordan Gong)
  This reverts commit be508061
2021-02-14  Memory usage improvement  (Jordan Gong)
  This update separates the input data into two batches, reducing memory usage by ~30% (see the sketch below).
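A minimal sketch of the idea, assuming a plain PyTorch forward pass; `model` and `x` are illustrative names, not the repository's actual API:

```python
import torch

def forward_in_halves(model, x):
    """Push a batch through the model in two halves and re-join the
    outputs. Autograd still retains saved activations for both halves,
    so the savings are partial (the commit reports ~30%), coming from
    smaller transient buffers alive during each half's forward."""
    x1, x2 = x.chunk(2, dim=0)  # split along the batch dimension
    return torch.cat((model(x1), model(x2)), dim=0)
```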
2021-02-14  Prepare for DataParallel  (Jordan Gong)
2021-02-10  Save scheduler state_dict  (Jordan Gong)
2021-02-09  Improve performance when disentangling  (Jordan Gong)
  This is a HUGE performance optimization, up to 2x faster than before, mainly because the randomized for-loop was replaced with a randomized index tensor (illustrated below).
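A hedged illustration of this kind of optimization, replacing a Python-level loop that draws one random frame per clip with a single batched index tensor; the shapes and names are assumptions, not the project's real ones:

```python
import torch

frames = torch.randn(32, 30, 3, 64, 64)  # (batch, time, C, H, W), illustrative

# Loop version: one randint and one indexing op per clip
picked = torch.stack([clip[torch.randint(30, ())] for clip in frames])

# Tensor version: a single randomized index tensor, no Python loop
idx = torch.randint(30, (32,))
picked_fast = frames[torch.arange(32), idx]  # advanced indexing, same result distribution
```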
2021-02-09  Some optimizations  (Jordan Gong)
  1. The scheduler now decays the learning rate of the auto-encoder only (sketched below)
  2. Write learning-rate history to TensorBoard
  3. Reduce image log frequency
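A sketch of decaying only one parameter group's learning rate and logging it, assuming an optimizer with separate auto-encoder and remaining parameter groups; all names and the decay schedule are illustrative:

```python
import torch
from torch.utils.tensorboard import SummaryWriter

ae_params = [torch.nn.Parameter(torch.zeros(1))]     # stand-in auto-encoder params
other_params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in remaining params

optimizer = torch.optim.Adam([{'params': ae_params},
                              {'params': other_params}], lr=1e-4)

# LambdaLR takes one lambda per param group: decay group 0, keep group 1 fixed
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=[lambda it: 0.99 ** it, lambda it: 1.0])

writer = SummaryWriter()
for it in range(1000):
    optimizer.step()
    scheduler.step()
    writer.add_scalar('LR/auto-encoder', optimizer.param_groups[0]['lr'], it)
writer.close()
```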
2021-02-08  Code refactoring, modifications and new features  (Jordan Gong)
  1. Decode features outside of the auto-encoder
  2. Turn off the HPM 1x1 conv by default
  3. Change the canonical feature map size from `feature_channels * 8 x 4 x 2` to `feature_channels * 2 x 16 x 8`
  4. Use the mean of canonical embeddings instead of the mean of static features
  5. Calculate static and dynamic losses separately
  6. Use the mean of parts in the triplet loss instead of the sum of parts
  7. Add a switch to log disentangled images
  8. Change the default configuration
2021-01-23  Remove the third term in canonical consistency loss  (Jordan Gong)
2021-01-23  Add late start support for non-disentangling parts  (Jordan Gong)
2021-01-23  Transform all frames together in evaluation  (Jordan Gong)
2021-01-23  Evaluation bug fixes and code review  (Jordan Gong)
  1. Return the full cached clip in evaluation
  2. Add multi-iteration checkpoint support in evaluation
  3. Remove duplicated code in the transform step
2021-01-22  Handle unexpected restore iter  (Jordan Gong)
  1. Skip an already-finished model before loading it
  2. Raise an error when the restore iter is greater than the total iter (see the sketch below)
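A hedged sketch of the described guard logic; the `restore_iter`/`total_iter` names mirror the commit message, but the checkpoint-loading details are assumptions:

```python
import torch

def restore(model, restore_iter: int, total_iter: int, checkpoint_path: str) -> bool:
    if restore_iter > total_iter:
        raise ValueError(f'Restore iter ({restore_iter}) is greater than '
                         f'total iter ({total_iter})')
    if restore_iter == total_iter:
        return True  # model already finished training; skip loading it
    model.load_state_dict(torch.load(checkpoint_path))
    return False
```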
2021-01-21  Print average losses after 100 iters  (Jordan Gong)
2021-01-21  Bug fixes  (Jordan Gong)
  1. Turn off autograd while decoding canonical and pose features (see the sketch below)
  2. Change the default batch size to (4, 8)
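Turning off autograd for a decode-only pass is typically just a `torch.no_grad()` context; a minimal sketch, with the decoder and feature tensors as stand-ins:

```python
import torch

decoder = torch.nn.Identity()            # stand-in for the real decoder
canonical_feature = torch.randn(4, 128)  # illustrative features
pose_feature = torch.randn(4, 128)

with torch.no_grad():
    # No graph is recorded here, so decoding images for logging neither
    # keeps activations alive nor interferes with the training backward pass
    canonical_image = decoder(canonical_feature)
    pose_image = decoder(pose_feature)
```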
2021-01-14  Enable optimizer fine-tuning  (Jordan Gong)
2021-01-14  Remove DataParallel  (Jordan Gong)
2021-01-13  Update config file and convert int to str when joining  (Jordan Gong)
2021-01-13  Add multiple checkpoints for different models and set default config values  (Jordan Gong)
2021-01-12  Move the model to GPU before constructing the optimizer  (Jordan Gong)
2021-01-12  Some changes in hyperparameter config  (Jordan Gong)
  1. Separate hyperparameter configs for the model, optimizer and scheduler
  2. Add more tunable hyperparameters for the optimizer and scheduler
2021-01-12  Some type hint fixes  (Jordan Gong)
2021-01-12  Correct typo in evaluate function  (Jordan Gong)
2021-01-11  Add evaluation script, code review and fix some bugs  (Jordan Gong)
  1. Add a new `train_all` method for one-shot calling
  2. Print the time used per 1k iterations
  3. Correct the label dimension in the predict function
  4. Transpose the distance matrix for convenient indexing
  5. Sort the dictionary before generating the signature (see the sketch below)
  6. Extract the visible-CUDA setting function
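Item 5 matters because Python dict iteration follows insertion order; a sketch of producing an order-independent signature from a config dict, where the hashing scheme itself is an assumption:

```python
import hashlib
import json

def make_signature(config: dict) -> str:
    # sort_keys makes the serialization, and hence the hash, independent
    # of the order in which config entries were inserted
    serialized = json.dumps(config, sort_keys=True)
    return hashlib.md5(serialized.encode()).hexdigest()

make_signature({'lr': 1e-4, 'batch_size': (4, 8)})  # same hash for any key order
```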
2021-01-11  Implement evaluator  (Jordan Gong)
2021-01-10  Make the predict function transform samples of different conditions in a single shot  (Jordan Gong)
2021-01-09  Add prototype predict function  (Jordan Gong)
2021-01-09  Change auto-encoder input in evaluation  (Jordan Gong)
2021-01-07  Train different models under different conditions  (Jordan Gong)
2021-01-07  Add typical training script and some bug fixes  (Jordan Gong)
  1. Resolve the deprecated scheduler stepping issue
  2. Bring losses to the same scale (replace mean with sum in the separate triplet loss, enlarge the pose similarity loss 10x)
  3. Add ReLU when computing distance in the triplet loss (sketched below)
  4. Remove all classes except Model from the `models` package init
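One plausible reading of item 3, with ReLU clamping the hinge term to zero for easy triplets; the shapes, margin, and exact placement of the ReLU are illustrative, not the repository's exact loss:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    # ReLU clamps easy triplets (d_neg >> d_pos) to zero loss
    return F.relu(d_pos - d_neg + margin).sum()  # sum rather than mean, per item 2

anchor, pos, neg = torch.randn(3, 16, 128).unbind(0)
loss = triplet_loss(anchor, pos, neg)
```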
2021-01-07  Change device config and enable multi-GPU computing  (Jordan Gong)
  1. Add a `disable_acc` switch for disabling the accelerator; when it is off, the system automatically chooses an accelerator (see the sketch below)
  2. Enable multi-GPU training using torch.nn.DataParallel
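A sketch of the described device logic, assuming `disable_acc` is a boolean config flag and using a stand-in model:

```python
import torch

disable_acc = False  # config switch from the commit message

if disable_acc:
    device = torch.device('cpu')
else:  # choose an accelerator automatically
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = torch.nn.Linear(8, 8).to(device)  # stand-in for the real model
if not disable_acc and torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)  # replicate across visible GPUs
```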
2021-01-06  Add CUDA support  (Jordan Gong)
2021-01-06  Add TensorBoard support  (Jordan Gong)
2021-01-05  Implement checkpoint mechanism  (Jordan Gong)
2021-01-05  Implement Batch All triplet loss  (Jordan Gong)
2021-01-05  Change and improve weight initialization  (Jordan Gong)
  1. Change the initial weights for Conv layers
  2. Find a way to init the last fc in init_weights
2021-01-03  Separate last fc matrix from weight init function  (Jordan Gong)
  Recursive apply would override other parameters too (see the sketch below).
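These two commits fit together: `Module.apply` recurses over every submodule, so a type-matching initializer would also overwrite a specially initialized final fc unless that layer is handled outside the recursive pass. A hedged sketch; the layer types and init schemes are assumptions:

```python
import torch.nn as nn

def init_weights(m):
    # applied recursively via Module.apply; matches on layer type
    if isinstance(m, nn.Conv2d):
        nn.init.xavier_uniform_(m.weight)
    elif isinstance(m, nn.BatchNorm2d):
        nn.init.ones_(m.weight)
        nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16))
fc = nn.Linear(16, 10)                 # kept outside the recursive init
model.apply(init_weights)
nn.init.normal_(fc.weight, std=0.001)  # separate init for the last fc
```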
2021-01-03  Delete dead training judge  (Jordan Gong)
2021-01-03  Implement weight initialization  (Jordan Gong)
2021-01-03  Update hyperparameter configuration, implement prototype fit function  (Jordan Gong)
2021-01-03  Add separate fully connected layers  (Jordan Gong)
2021-01-02  Separate training and evaluating  (Jordan Gong)
2021-01-02  Correct feature dims after disentanglement and HPM backbone removal  (Jordan Gong)
  1. The features used in HPM are the decoded canonical embedding, without transposed convolution
  2. Decode the pose embedding to an image for PartNet
  3. The backbone seems to be redundant; we can use the feature map given by the auto-decoder
2021-01-02  Change type of pose similarity loss to tensor  (Jordan Gong)
2020-12-31  Implement some parts of RGB-GaitPart wrapper  (Jordan Gong)
  1. The triplet loss function and weight init function haven't been implemented yet
  2. Tuplize features returned by the auto-encoder for later unpacking
  3. Correct a comment error in the auto-encoder
  4. Swap the batch_size dim and time dim in HPM and PartNet to avoid redundant transposes
  5. Found backbone problems in HPM; disable it temporarily
  6. Make the feature structure produced by HPM consistent with that produced by PartNet
  7. Fix an average-pooling dimension issue and an incorrect view change in HP
2020-12-31  Make HPM capable of processing frames in all batches  (Jordan Gong)
2020-12-31  Make superclass constructor invocation consistent  (Jordan Gong)
2020-12-31  Bug fixes in HPM and PartNet  (Jordan Gong)
  1. Register lists of torch.nn.Module to the network using torch.nn.ModuleList (illustrated below)
  2. Fix an operation error when squeezing a list of tensors
  3. Replace squeeze with view in HP in case the batch size is 1
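Item 1 is the standard fix for parameters that silently never train: submodules kept in a plain Python list are invisible to `parameters()`, `to(device)`, and `state_dict()`, while `nn.ModuleList` registers them. A minimal illustration; the layer sizes are made up:

```python
import torch.nn as nn

class HPM(nn.Module):
    def __init__(self, scales=(1, 2, 4)):
        super().__init__()
        # a plain list here would hide these layers from the optimizer;
        # ModuleList registers each one as a proper child module
        self.pyramids = nn.ModuleList(nn.Linear(128, 128) for _ in scales)

print(sum(p.numel() for p in HPM().parameters()))  # non-zero, as expected
```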
2020-12-30  Correct and refine PartNet  (Jordan Gong)
  1. Make the FocalConv block capable of processing frames in all batches
  2. Correct the input dims of TFA and the output dims of HP
  3. Replace torch.unsqueeze and torch.cat with torch.stack (equivalence shown below)
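Item 3 is a behavior-preserving simplification: `torch.stack` creates the new dimension and concatenates in one call. A small equivalence check with illustrative shapes:

```python
import torch

tensors = [torch.randn(4, 128) for _ in range(30)]

# before: add a dim to each tensor, then concatenate along it
a = torch.cat([t.unsqueeze(0) for t in tensors], dim=0)
# after: stack does both in one call
b = torch.stack(tensors, dim=0)

assert torch.equal(a, b) and b.shape == (30, 4, 128)
```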