1. Separate hyperparameter configs into model, optimizer, and scheduler sections
2. Add more tunable hyperparameters to the optimizer and scheduler

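The split described above might look like the following sketch. The section layout comes from the commit; the individual key names (`embedding_dims`, `lr`, `betas`, etc.) are assumptions for illustration only.

```python
# Hypothetical config layout after splitting hyperparameters into
# model, optimizer, and scheduler sections (key names are assumed).
config = {
    'model': {
        'embedding_dims': 256,
    },
    'optimizer': {
        'lr': 1e-4,
        'betas': (0.9, 0.999),
        'weight_decay': 0.0,
    },
    'scheduler': {
        'start_step': 500,
        'gamma': 0.99,
    },
}

def sub_config(section: str) -> dict:
    """Return one section so each component only sees its own knobs."""
    return config[section]
```

Each component is then constructed from its own section only, so adding a new scheduler knob cannot clash with a model or optimizer key.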
1. Add a new `train_all` method for one-shot calling
2. Print the time used per 1k iterations
3. Correct the label dimension in the predict function
4. Transpose the distance matrix for convenient indexing
5. Sort the dictionary before generating the signature
6. Extract a function for setting visible CUDA devices

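Item 2 (timing per 1k iterations) could be sketched like this. The loop body and function name are placeholders; only the interval-timing pattern is the point.

```python
import time

def train(num_iters: int, log_interval: int = 1000) -> list:
    """Training-loop skeleton that prints the wall-clock time spent on
    each block of `log_interval` iterations. Returns the measured
    durations so they can also be logged elsewhere."""
    durations = []
    tick = time.perf_counter()
    for i in range(1, num_iters + 1):
        # ... forward / backward / optimizer step would go here ...
        if i % log_interval == 0:
            tock = time.perf_counter()
            print(f'Iter {i}: {tock - tick:.2f}s for the last {log_interval} iterations')
            durations.append(tock - tick)
            tick = tock  # restart the window, so each block is timed separately
    return durations
```

Resetting `tick` after each report gives per-window times rather than a cumulative total.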
1. Resolve the deprecated scheduler stepping-order issue
2. Put the losses on the same scale (replace mean with sum in the separate triplet loss; scale the pose similarity loss by 10x)
3. Add a ReLU when computing distances in the triplet loss
4. Remove all classes except Model from the `models` package init

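Items 2 and 3 can be illustrated with a minimal scalar sketch (pure Python, embeddings as lists of floats; the real loss is batched, so the exact shapes and margin value are assumptions): the ReLU clamps values at zero inside the distance/hinge computation, and `sum` replaces `mean` so this term stays on the same scale as the other losses.

```python
def triplet_loss(anchor, positives, negatives, margin=0.2):
    """Scalar sketch of a triplet loss with ReLU clamping and sum reduction."""
    def dist(a, b):
        d = sum((x - y) ** 2 for x, y in zip(a, b))
        # ReLU: clamp tiny negative squared distances (floating-point
        # error) to zero before taking the square root.
        return max(d, 0.0) ** 0.5

    losses = []
    for p in positives:
        for n in negatives:
            hinge = dist(anchor, p) - dist(anchor, n) + margin
            losses.append(max(hinge, 0.0))  # ReLU on the hinge term
    # sum, not mean, so the separate triplet loss keeps the same scale
    # as the other loss terms it is added to
    return sum(losses)
```

In batched PyTorch code the two `max(..., 0.0)` calls would be `F.relu(...)` and the final reduction would be `.sum()` instead of `.mean()`.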
1. Add a `disable_acc` switch for disabling the accelerator. When it is off, the system will automatically choose an accelerator.
2. Enable multi-GPU training using torch.nn.DataParallel

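The device-selection logic behind `disable_acc` might look like the sketch below. The `cuda_available` parameter is an assumption standing in for `torch.cuda.is_available()`, so the function stays testable without a GPU.

```python
def choose_device(disable_acc: bool, cuda_available: bool) -> str:
    """Pick the compute device. With `disable_acc` set, always fall back
    to the CPU; otherwise choose the accelerator automatically when one
    is present."""
    if disable_acc:
        return 'cpu'
    return 'cuda' if cuda_available else 'cpu'
```

When `'cuda'` is chosen and `torch.cuda.device_count() > 1`, the model can then be wrapped in `torch.nn.DataParallel` for multi-GPU training, as the commit above describes.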
1. Change the initial weights for Conv layers
2. Find a way to initialize the last fc layer in init_weights

A recursive `apply` would override the other parameters too

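One way to reconcile the two notes above (init Conv layers and the last fc, without a blanket `model.apply(...)` that would override other parameters) is to iterate over named modules and target layers explicitly. The tiny `Net`, the module name `'fc'`, and the chosen init schemes are assumptions; the real architecture is not shown in the log.

```python
import torch.nn as nn

class Net(nn.Module):
    """Tiny stand-in model, for illustration only."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.fc = nn.Linear(8, 4)

def init_weights(model: nn.Module) -> None:
    # Iterate named modules instead of calling model.apply(...):
    # a recursive apply would override other parameters too, while
    # this targets Conv layers and the last fc explicitly.
    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight, nonlinearity='relu')
            if m.bias is not None:
                nn.init.zeros_(m.bias)
        elif name == 'fc':  # assumed name of the final fully connected layer
            nn.init.normal_(m.weight, std=0.001)
            nn.init.zeros_(m.bias)
```

Matching on the module name keeps the special-case init for the last fc from leaking into any other `nn.Linear` the model might contain.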
We can disentangle features from different subjects, but we cannot do it across different temporal orders

1. Configuration parsers
2. Model signature generator

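A model signature generator like item 2 might be sketched as below. The hashing scheme (JSON canonicalization plus a truncated MD5) is an assumption; the sketch mainly shows why the dictionary has to be sorted before the signature is generated, as a later commit notes.

```python
import hashlib
import json

def model_signature(config: dict) -> str:
    """Derive a short, deterministic signature from a hyperparameter dict.
    Sorting the keys first matters: without it, two runs with identical
    hyperparameters but different dict insertion order could hash to
    different signatures."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.md5(canonical.encode()).hexdigest()[:8]
```

The signature can then name checkpoints and log directories, so runs with the same hyperparameters map to the same files.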
Disentanglement cannot be performed on different subjects at the same time, so we need to load the `pr` subjects one by one. The batch splitter will return a `pr`-length list of tuples (each with 2 dicts containing k-length lists of labels, conditions, and views, plus a k-length tensor of clip data, representing condition 1 and condition 2 respectively).

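The splitter structure described above could be sketched as follows. This is pure Python with nested lists standing in for tensors; the flat batch layout (ordered by subject, then condition, then clip) and the per-sample keys are assumptions.

```python
def split_batch(batch, pr, k):
    """Split a flat batch into a pr-length list of (condition1, condition2)
    tuples, so subjects can be fed to the disentangler one by one.

    `batch` is assumed to be a list of pr * 2 * k samples, ordered by
    subject, then condition, then clip; each sample is a dict with
    'label', 'condition', 'view', and 'clip' keys.
    """
    out = []
    for s in range(pr):
        conditions = []
        for c in range(2):
            start = (s * 2 + c) * k
            samples = batch[start:start + k]
            conditions.append({
                'label': [x['label'] for x in samples],
                'condition': [x['condition'] for x in samples],
                'view': [x['view'] for x in samples],
                'clip': [x['clip'] for x in samples],  # k-length stand-in for a tensor
            })
        out.append((conditions[0], conditions[1]))
    return out
```

In the real pipeline the `'clip'` entries would be stacked into a k-length tensor; everything else carries over unchanged.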