## Classic Control

### Step-based Environments

| Name | Description | Horizon | Action Dimension | Observation Dimension |
| --- | --- | --- | --- | --- |
| `SimpleReacher-v0` | Simple reaching task (2 links) without any physics simulation. Provides no reward until 150 time steps. This allows the agent to explore the space, but requires precise actions towards the end of the trajectory. | 200 | 2 | 9 |
| `LongSimpleReacher-v0` | Simple reaching task (5 links) without any physics simulation. Provides no reward until 150 time steps. This allows the agent to explore the space, but requires precise actions towards the end of the trajectory. | 200 | 5 | 18 |
| `ViaPointReacher-v0` | Simple reaching task leveraging a via point, which supports self-collision detection. Provides a reward only at time steps 100 and 199 for reaching the via point and the goal point, respectively. | 200 | 5 | 18 |
| `HoleReacher-v0` | 5-link reaching task where the end effector needs to reach into a narrow hole without colliding with itself or the walls. | 200 | 5 | 18 |
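The step-based environments follow the standard Gym interface. Below is a minimal rollout sketch; it assumes the package registers these IDs with Gym on import (the import name, shown here as `alr_envs`, and the exact ID namespacing may differ between versions).

```python
import gym
import alr_envs  # noqa: F401  -- assumed import that registers the environments with gym

env = gym.make("SimpleReacher-v0")
obs = env.reset()

done = False
while not done:
    action = env.action_space.sample()          # random 2-dim action
    obs, reward, done, info = env.step(action)  # reward only appears after step 150

env.close()
```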

### MP Environments

| Name | Description | Horizon | Action Dimension | Context Dimension |
| --- | --- | --- | --- | --- |
| `ViaPointReacherDMP-v0` | A DMP provides a trajectory for the `ViaPointReacher-v0` task. | 200 | 25 | |
| `HoleReacherFixedGoalDMP-v0` | A DMP provides a trajectory for the `HoleReacher-v0` task with a fixed goal attractor. | 200 | 25 | |
| `HoleReacherDMP-v0` | A DMP provides a trajectory for the `HoleReacher-v0` task. The goal attractor needs to be learned. | 200 | 30 | |
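For the MP environments, the usual black-box convention is that a single action holds the full parameter vector of the movement primitive and one `step` call executes the entire trajectory. A minimal sketch under that assumption (again assuming `alr_envs` as the registering import; interface details may differ between versions):

```python
import gym
import alr_envs  # noqa: F401  -- assumed import that registers the environments with gym

env = gym.make("HoleReacherDMP-v0")
env.reset()

# One "action" = all DMP parameters (30 dims here, including the learned goal attractor).
params = env.action_space.sample()
obs, reward, done, info = env.step(params)  # assumed to roll out the complete episode
env.close()
```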