Commit Graph

11 Commits

Author SHA1 Message Date
Younggyo Seo
51c55d4a8a
Support Multi-GPU Training (#22)
- Change the isaaclab_env wrapper to explicitly assign a GPU to each simulation
- Remove the JAX cache to support multi-GPU environment launch in MuJoCo Playground
- Remove .train() and .eval() calls during evaluation and rendering to avoid deadlocks in multi-GPU training
- Support synchronized normalization for multi-GPU training
2025-07-07 10:24:42 -07:00
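Synchronized normalization means every rank must see identical running statistics, which is typically done by all-reducing per-batch sums before updating the normalizer. A minimal sketch of that idea (the function name and shapes are illustrative, not FastTD3's actual API):

```python
import torch
import torch.distributed as dist

def synced_mean_var(x: torch.Tensor):
    """Batch mean/variance aggregated across all ranks.

    Each rank contributes its local sums; after the all-reduce every rank
    computes identical statistics, so normalizers stay in sync.
    Falls back to local statistics when torch.distributed is not initialized.
    """
    count = torch.tensor([float(x.shape[0])])
    total = x.sum(dim=0)
    total_sq = (x ** 2).sum(dim=0)
    if dist.is_available() and dist.is_initialized():
        for t in (count, total, total_sq):
            dist.all_reduce(t, op=dist.ReduceOp.SUM)
    mean = total / count
    var = total_sq / count - mean ** 2
    return mean, var
```

Reducing sums (rather than per-rank means) keeps the result exact even when ranks hold different batch sizes.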
Younggyo Seo
83907422a3
Improved AMP/torch.compile compatibility of SimbaV2 (#21) 2025-07-07 10:04:46 -07:00
Younggyo Seo
c354ead107
Optimized codebase to speed up training (#20)
- Modified code to be compatible with torch.compile
- Modified the empirical normalizer to use in-place operators, avoiding costly __setattr__ calls
- Parallelized the soft Q-update
- Disabled gradient norm clipping by default, as it is quite expensive
2025-07-02 19:39:02 -07:00
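The in-place normalizer change can be sketched as follows: updating the running statistics with `add_`/`mul_` mutates the existing tensors instead of rebinding attributes, so `__setattr__` is never hit on the hot path (class and method names here are illustrative, not FastTD3's exact code):

```python
import torch

class EmpiricalNormalizer:
    """Running mean/variance normalizer updated with in-place tensor ops."""

    def __init__(self, shape, eps: float = 1e-8):
        self.mean = torch.zeros(shape)
        self.var = torch.ones(shape)
        self.count = 0.0
        self.eps = eps

    @torch.no_grad()
    def update(self, batch: torch.Tensor) -> None:
        b_count = batch.shape[0]
        b_mean = batch.mean(dim=0)
        b_var = batch.var(dim=0, unbiased=False)
        total = self.count + b_count
        delta = b_mean - self.mean
        # In-place add_/mul_ mutate the stored tensors directly; rebinding
        # self.mean/self.var would trigger costly __setattr__ each step.
        self.mean.add_(delta * (b_count / total))
        self.var.mul_(self.count / total).add_(
            b_var * (b_count / total)
            + delta.pow(2) * (self.count * b_count / total ** 2)
        )
        self.count = total

    def __call__(self, x: torch.Tensor) -> torch.Tensor:
        return (x - self.mean) / (self.var.sqrt() + self.eps)
```

The merge formula is the standard parallel (Chan et al.) variance combination, so chunked updates match the statistics of the concatenated data.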
Younggyo Seo
799624b202
Bug fix -- MTBench evaluation and missing code (#18)
This PR includes these changes:
- Fix a bug in MTBench evaluation
- Add a missing `critic_cls` in `train.py` (resolving https://github.com/younggyoseo/FastTD3/issues/17)
- Update hyperparameters for MTBench
2025-06-25 09:21:04 -07:00
Younggyo Seo
cef44108d8
Support MTBench (#15)
This PR incorporates MTBench into the current codebase, demonstrating how to use FastTD3 in a multi-task setup.

- Add support for MTBench along with its wrapper
- Add support for a per-task reward normalizer, useful for multi-task RL, motivated by the BRC paper (https://arxiv.org/abs/2505.23150v1)
2025-06-20 21:52:43 -07:00
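A per-task reward normalizer keeps separate running statistics for each task id, so tasks with very different reward magnitudes contribute comparably to the loss. A minimal sketch of the idea, scaling each reward by its own task's running mean absolute reward (names and the exact statistic are illustrative assumptions, not MTBench's code):

```python
import torch

class PerTaskRewardNormalizer:
    """Scale rewards by a running mean |reward| tracked separately per task."""

    def __init__(self, num_tasks: int, eps: float = 1e-8):
        self.sum_abs = torch.zeros(num_tasks)
        self.count = torch.zeros(num_tasks)
        self.eps = eps

    @torch.no_grad()
    def update(self, rewards: torch.Tensor, task_ids: torch.Tensor) -> None:
        # Accumulate |reward| sums and sample counts per task in one
        # scatter pass; no Python loop over tasks.
        self.sum_abs.index_add_(0, task_ids, rewards.abs())
        self.count.index_add_(0, task_ids, torch.ones_like(rewards))

    def __call__(self, rewards: torch.Tensor, task_ids: torch.Tensor) -> torch.Tensor:
        scale = self.sum_abs / self.count.clamp(min=1.0)
        return rewards / (scale[task_ids] + self.eps)
```

With a shared normalizer, a task whose rewards are 100x larger would dominate the critic loss; indexing the scale by `task_ids` avoids that.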
Younggyo Seo
6e890eebd2
Support FastTD3 + SimbaV2 (#13)
- Support hyperspherical normalization
- Support loading FastTD3 + SimbaV2 for both training and inference
- Support (experimental) reward normalization using SimbaV2's formulation -- though it does not work that well yet
- Update the README for FastTD3 + SimbaV2
2025-06-15 12:49:59 -07:00
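The core operation behind hyperspherical normalization is projecting vectors onto the unit hypersphere, i.e. dividing by the L2 norm. A minimal sketch (SimbaV2 applies this to weights and intermediate features; the helper below only shows the projection itself):

```python
import torch

def l2_normalize(x: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    """Project x onto the unit hypersphere along `dim`.

    The eps term guards against division by zero for all-zero inputs.
    """
    return x / (x.norm(dim=dim, keepdim=True) + eps)
```

Constraining representations to the hypersphere bounds their scale, which is one reason such normalization tends to stabilize training.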
Younggyo Seo
1014bf7e82 [hotfix] fix issue when using n-step==1 2025-06-10 08:26:27 +00:00
Younggyo Seo
85cb1c65c7
Fix replay buffer issues when n_steps > 1 (#7)
- Fix an issue where the n-step reward was not properly computed for end-of-episode transitions when n_step > 1.
- Fix an issue where observations and next_observations were sampled across different episodes when n_step > 1 and the buffer is full.
- Fix an issue where the discount was not properly computed when n_step > 1.
2025-06-07 01:20:48 -04:00
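Correct n-step bookkeeping means reward accumulation stops at episode boundaries and the bootstrap discount reflects how many steps were actually taken (zero when the episode ended inside the window). A reference sketch of that computation over a flat trajectory (function name and shapes are illustrative, not the replay buffer's actual code):

```python
import torch

def n_step_targets(rewards: torch.Tensor, dones: torch.Tensor,
                   gamma: float, n_step: int):
    """For each step t, return the n-step discounted reward sum and the
    discount to apply to the bootstrapped value.

    Accumulation truncates at episode ends (dones) and at the end of the
    trajectory; the bootstrap discount is 0 past a terminal transition.
    """
    T = rewards.shape[0]
    returns = torch.zeros(T)
    discounts = torch.zeros(T)
    for t in range(T):
        acc, g = 0.0, 1.0
        for k in range(n_step):
            if t + k >= T:
                break
            acc += g * rewards[t + k].item()
            g *= gamma
            if dones[t + k]:
                g = 0.0  # episode ended: never bootstrap across the boundary
                break
        returns[t] = acc
        discounts[t] = g
    return returns, discounts
```

The bugs listed above correspond to violating exactly these invariants: summing rewards past a terminal step, pairing observations from different episodes, and using gamma**n_step even when fewer than n steps were available.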
Younggyo Seo
c156ba93fb black formatting and update tuned_reward for T1 2025-05-29 08:29:44 +00:00
Younggyo Seo
5725eba3b8 memory optimization for playground 2025-05-29 06:58:28 +00:00
Younggyo Seo
258bfe67dd Initial Public Release 2025-05-29 01:49:23 +00:00