| Age | Commit message | Author |
|---|---|---|
| 2021-02-19 | Correct cross reconstruction loss calculated in DataParallel | Jordan Gong |
| 2021-02-19 | Merge branch 'master' into data_parallel | Jordan Gong |
| 2021-02-19 | Allow evaluating unfinished models | Jordan Gong |
| 2021-02-18 | Merge branch 'master' into data_parallel | Jordan Gong |
| 2021-02-18 | Implement adjustable input size and change some default configs | Jordan Gong |
| 2021-02-18 | Decode mean appearance feature | Jordan Gong |
| 2021-02-16 | Merge branch 'master' into data_parallel | Jordan Gong |
| 2021-02-16 | Split transform and evaluate method | Jordan Gong |
| 2021-02-15 | Add DataParallel support on new codebase | Jordan Gong |
| 2021-02-15 | Revert "Memory usage improvement" | Jordan Gong |
| | This reverts commit be508061. | |
| 2021-02-14 | Memory usage improvement | Jordan Gong |
| | This update separates the input data into two batches, which reduces memory usage by ~30% (see sketch 1 below). | |
| 2021-02-14 | Prepare for DataParallel | Jordan Gong |
| 2021-02-10 | Save scheduler state_dict (see sketch 2 below) | Jordan Gong |
| 2021-02-09 | Improve performance when disentangling | Jordan Gong |
| | This is a huge performance optimization, up to 2x faster than before, mainly due to replacing a randomized for-loop with a randomized index tensor (see sketch 3 below). | |
| 2021-02-09 | Some optimizations | Jordan Gong |
| | 1. The scheduler decays the learning rate of the auto-encoder only (see sketch 4 below); 2. write learning-rate history to TensorBoard; 3. reduce the image log frequency. | |
| 2021-02-08 | Code refactoring, modifications and new features | Jordan Gong |
| | 1. Decode features outside the auto-encoder; 2. turn off the HPM 1x1 conv by default; 3. change the canonical feature map size from `feature_channels * 8 x 4 x 2` to `feature_channels * 2 x 16 x 8`; 4. use the mean of canonical embeddings instead of the mean of static features; 5. calculate static and dynamic losses separately; 6. calculate the mean of parts in the triplet loss instead of the sum of parts; 7. add a switch to log disentangled images; 8. change the default configuration. | |
| 2021-01-23 | Remove the third term in canonical consistency loss | Jordan Gong |
| 2021-01-23 | Add late start support for non-disentangling parts | Jordan Gong |
| 2021-01-23 | Evaluation bug fixes and code review | Jordan Gong |
| | 1. Return the full cached clip in evaluation; 2. add multi-iter checkpoint support in evaluation; 3. remove duplicated code in the transform step. | |
| 2021-01-22 | Handle unexpected restore iter | Jordan Gong |
| | 1. Skip finished models before loading them; 2. raise an error when the restore iter is greater than the total iter. | |
| 2021-01-21 | Print average losses every 100 iters | Jordan Gong |
| 2021-01-14 | Enable optimizer fine-tuning | Jordan Gong |
| 2021-01-14 | Remove DataParallel | Jordan Gong |
| 2021-01-13 | Update config file and convert int to str when joining | Jordan Gong |
| 2021-01-13 | Add multiple checkpoints for different models and set default config values | Jordan Gong |
| 2021-01-12 | Move the model to GPU before constructing the optimizer | Jordan Gong |
| 2021-01-12 | Some changes in hyperparameter config | Jordan Gong |
| | 1. Separate hyperparameter configs for model, optimizer and scheduler; 2. add more tunable hyperparameters for optimizer and scheduler. | |
| 2021-01-12 | Some type hint fixes | Jordan Gong |
| 2021-01-12 | Correct typo in evaluate function | Jordan Gong |
| 2021-01-11 | Add evaluation script, code review and some bug fixes | Jordan Gong |
| | 1. Add a new `train_all` method for one-shot calling; 2. print the time used per 1k iterations; 3. correct the label dimension in the predict function; 4. transpose the distance matrix for convenient indexing; 5. sort the dictionary before generating the signature (see sketch 5 below); 6. extract the visible-CUDA setting function. | |
| 2021-01-11 | Implement evaluator | Jordan Gong |
| 2021-01-10 | Make the predict function transform samples under different conditions in a single shot | Jordan Gong |
| 2021-01-09 | Add prototype predict function | Jordan Gong |
| 2021-01-07 | Train different models under different conditions | Jordan Gong |
| 2021-01-07 | Add typical training script and some bug fixes | Jordan Gong |
| | 1. Resolve the deprecated scheduler stepping issue; 2. put losses on the same scale (replace mean with sum in the separate triplet loss, enlarge the pose similarity loss 10x); 3. add ReLU when computing distances in the triplet loss (see sketch 6 below); 4. remove all classes except Model from the `models` package init. | |
| 2021-01-07 | Change device config and enable multi-GPU computing | Jordan Gong |
| | 1. Add a `disable_acc` switch for disabling the accelerator; when it is off, the system automatically chooses an accelerator (see sketch 7 below). 2. Enable multi-GPU training using torch.nn.DataParallel. | |
| 2021-01-06 | Add CUDA support | Jordan Gong |
| 2021-01-06 | Add TensorBoard support | Jordan Gong |
| 2021-01-05 | Implement checkpoint mechanism | Jordan Gong |
| 2021-01-05 | Change and improve weight initialization | Jordan Gong |
| | 1. Change the initial weights for Conv layers; 2. find a way to init the last fc layer in init_weights. | |
| 2021-01-03 | Separate the last fc matrix from the weight init function | Jordan Gong |
| | A recursive apply would override its other parameters too (see sketch 8 below). | |
| 2021-01-03 | Implement weight initialization | Jordan Gong |
| 2021-01-03 | Update hyperparameter configuration, implement prototype fit function | Jordan Gong |
| 2020-12-29 | Correct batch splitter | Jordan Gong |
| | We can disentangle features from different subjects, but cannot do so across different temporal orders. | |
| 2020-12-27 | Implement some parts of main model structure | Jordan Gong |
| | 1. Configuration parsers; 2. model signature generator. | |
| 2020-12-27 | Adopt type hinting generics in standard collections (PEP 585) | Jordan Gong |
| 2020-12-26 | Implement batch splitter to split sampled data | Jordan Gong |
| | Disentanglement cannot be processed on different subjects at the same time, so we need to load the `pr` subjects one by one. The batch splitter returns a `pr`-length list of tuples, each holding two dicts (with `k`-length lists of labels, conditions and views, plus a `k`-length tensor of clip data) representing condition 1 and condition 2 respectively (see sketch 9 below). | |
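
The sketches below illustrate techniques referenced in the log. They are illustrative reconstructions, not the repository's actual code; all function and variable names are assumptions unless a commit message names them.

Sketch 1: the 2021-02-14 "Memory usage improvement" commit reduced peak memory by roughly 30% by feeding the input through the model as two smaller batches instead of one. A minimal sketch of that idea (the helper name is hypothetical; the actual commit split condition-1/condition-2 data rather than an arbitrary half):

```python
import torch

def forward_in_two_batches(model, x):
    # Two smaller forward passes instead of one large one lowers the
    # peak size of intermediate activations held in memory at once.
    half = x.size(0) // 2
    out1 = model(x[:half])
    out2 = model(x[half:])
    return torch.cat([out1, out2], dim=0)

model = torch.nn.Linear(8, 4)
x = torch.randn(32, 8)
print(forward_in_two_batches(model, x).shape)  # torch.Size([32, 4])
```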
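Sketch 2: the 2021-02-10 commit adds the scheduler's state_dict to the checkpoint, so resumed training keeps the correct learning-rate schedule. A sketch using standard PyTorch calls (the checkpoint layout is an assumption):

```python
import torch

model = torch.nn.Linear(8, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100)

# Save model, optimizer AND scheduler states together; without the
# scheduler entry, a resumed run restarts the LR schedule from scratch.
torch.save({
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'scheduler': scheduler.state_dict(),
}, 'checkpoint.pt')

# Restore all three when resuming.
state = torch.load('checkpoint.pt')
model.load_state_dict(state['model'])
optimizer.load_state_dict(state['optimizer'])
scheduler.load_state_dict(state['scheduler'])
```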
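Sketch 3: the 2021-02-09 "Improve performance when disentangling" commit credits its up-to-2x speedup to replacing a randomized Python for-loop with a randomized tensor. A sketch of that vectorization pattern, assuming the loop drew a random frame order per sample (the actual randomization target is not stated in the log):

```python
import torch

n, t, c = 4, 10, 3          # batch size, clip length, channels
clips = torch.randn(n, t, c)

# Slow: one randperm and one indexing op per sample, in Python.
def shuffle_loop(clips):
    out = torch.empty_like(clips)
    for i in range(clips.size(0)):
        out[i] = clips[i, torch.randperm(clips.size(1))]
    return out

# Fast: draw all random orders at once, then a single batched gather.
def shuffle_tensor(clips):
    n, t, c = clips.shape
    idx = torch.rand(n, t).argsort(dim=1)  # n independent permutations
    return clips.gather(1, idx.unsqueeze(-1).expand(n, t, c))

assert shuffle_tensor(clips).shape == clips.shape
```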
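Sketch 4: the 2021-02-09 "Some optimizations" commit decays only the auto-encoder's learning rate. One standard way to achieve this, plausibly what the commit means, is separate parameter groups with per-group `LambdaLR` lambdas (modules and decay factor here are placeholders):

```python
import torch

auto_encoder = torch.nn.Linear(8, 8)
other_parts = torch.nn.Linear(8, 8)

# Two parameter groups sharing one optimizer.
optimizer = torch.optim.Adam([
    {'params': auto_encoder.parameters()},
    {'params': other_parts.parameters()},
], lr=1e-4)

# LambdaLR takes one multiplier function per parameter group, so the
# auto-encoder decays while the second group's LR stays fixed.
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=[
    lambda step: 0.99 ** step,  # auto-encoder: decayed
    lambda step: 1.0,           # everything else: constant
])

for step in range(3):
    optimizer.step()
    scheduler.step()
    # Per-group LRs; these are also what one would log to TensorBoard.
    print([g['lr'] for g in optimizer.param_groups])
```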
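Sketch 5: the 2021-01-11 evaluation-script commit sorts the configuration dictionary before generating the model signature, so that key order cannot change the signature. A minimal sketch (the hashing scheme is an assumption):

```python
import hashlib
import json

def make_signature(config: dict) -> str:
    # Serializing with sorted keys makes the digest a function of the
    # config's contents, not of dict insertion order.
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.md5(canonical.encode()).hexdigest()

# Same contents in a different order yield the same signature.
a = {'lr': 1e-4, 'batch_size': [4, 8]}
b = {'batch_size': [4, 8], 'lr': 1e-4}
assert make_signature(a) == make_signature(b)
```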
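Sketch 6: the 2021-01-07 training-script commit adds a ReLU when computing distances in the triplet loss. This matches the standard hinge formulation, where ReLU clamps each triplet term at zero so violated triplets cannot be offset by easy ones (the margin value is an assumption):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Euclidean distances to the positive and negative samples.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    # ReLU keeps each term non-negative (hinge at the margin).
    return F.relu(d_pos - d_neg + margin).mean()

a, p, n = (torch.randn(16, 128) for _ in range(3))
print(triplet_loss(a, p, n))
```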
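Sketch 7: the 2021-01-07 device-config commit describes a `disable_acc` switch: when it is off, the system chooses an accelerator automatically, and multi-GPU training goes through `torch.nn.DataParallel`. A sketch of that selection logic (the function name is hypothetical):

```python
import torch

def select_device(disable_acc: bool = False) -> torch.device:
    # With the accelerator enabled, fall back to CPU only when no
    # CUDA device is available.
    if disable_acc or not torch.cuda.is_available():
        return torch.device('cpu')
    return torch.device('cuda')

device = select_device()
model = torch.nn.Linear(8, 4).to(device)
# Replicate the model across all visible GPUs for multi-GPU training.
if device.type == 'cuda' and torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)
```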
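Sketch 8: the 2021-01-05 and 2021-01-03 weight-initialization commits note the pitfall that a recursive `Module.apply` would override the last fc layer's parameters along with everything else, so its initialization was separated from the shared init function. A sketch of that split (layer names and init choices are assumptions):

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3)
        self.fc = nn.Linear(16, 10)
        # Recursive init for everything except the final fc; if the
        # fc init lived inside _init_weights, apply() would reach every
        # Linear in the tree, not just this one.
        self.apply(self._init_weights)
        # The last fc gets its own initialization afterwards.
        nn.init.xavier_uniform_(self.fc.weight)
        nn.init.zeros_(self.fc.bias)

    @staticmethod
    def _init_weights(m):
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

net = Net()
```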
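Sketch 9: the 2020-12-26 commit describes the batch splitter's output, a `pr`-length list of tuples, each holding two dicts of `k` samples for condition 1 and condition 2. A sketch assuming the sampler lays out each subject's `2k` samples consecutively, first `k` under condition 1 and next `k` under condition 2 (that layout is an assumption):

```python
import torch

def split_batch(labels, conditions, views, clips, pr, k):
    # Split a sampled batch into pr per-subject tuples of two
    # condition dicts, so subjects can be disentangled one by one.
    batch = []
    for i in range(pr):
        lo, hi = i * 2 * k, (i + 1) * 2 * k
        mid = lo + k
        cond1 = {'label': labels[lo:mid], 'condition': conditions[lo:mid],
                 'view': views[lo:mid], 'clip': clips[lo:mid]}
        cond2 = {'label': labels[mid:hi], 'condition': conditions[mid:hi],
                 'view': views[mid:hi], 'clip': clips[mid:hi]}
        batch.append((cond1, cond2))
    return batch

pr, k = 4, 8
n = pr * 2 * k
batch = split_batch(list(range(n)), ['c'] * n, ['v'] * n,
                    torch.randn(n, 30, 64, 44), pr, k)
print(len(batch))  # pr tuples of (condition-1 dict, condition-2 dict)
```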