
# ALR Custom Environments

This repository collects custom RL environments that are not included in suites like OpenAI gym, rllab, etc. Creating a custom (Mujoco) gym environment can be done according to this guide. For stochastic search problems with a gym interface, use the Rosenbrock reference implementation.
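As a rough illustration of what a custom environment has to provide, the sketch below implements the gym-style `reset`/`step` interface in plain Python. The class name and the toy dynamics are purely illustrative and not taken from this repository or the linked guide:

```python
# Minimal sketch of the gym environment interface (illustrative only).

class ToyPointEnv:
    """Move a scalar state toward a goal; episode ends after 10 steps."""

    def __init__(self, goal=1.0):
        self.goal = goal
        self.state = 0.0
        self.t = 0

    def reset(self):
        # Return the initial observation.
        self.state = 0.0
        self.t = 0
        return self.state

    def step(self, action):
        # Apply the action, return (observation, reward, done, info).
        self.state += action
        self.t += 1
        reward = -abs(self.goal - self.state)  # dense distance penalty
        done = self.t >= 10
        return self.state, reward, done, {}

env = ToyPointEnv()
state = env.reset()
state, reward, done, info = env.step(0.5)
```

A real environment would additionally define `action_space`, `observation_space`, and `render`, and be registered with gym so it can be created via `gym.make`.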

## Environments

Currently we have the following environments:

### Mujoco

| Name | Description |
| ---- | ----------- |
| `ALRReacher-v0` | Modified version of Mujoco gym's `Reacher-v2` with 5 links instead of 2. |
| `ALRReacherSparse-v0` | Same as `ALRReacher-v0`, but the distance penalty is only provided in the last time step. |
| `ALRReacherSparseBalanced-v0` | Same as `ALRReacherSparse-v0`, but the end effector has to stay upright. |
| `ALRReacherShort-v0` | Same as `ALRReacher-v0`, but the episode length is reduced to 50. |
| `ALRReacherShortSparse-v0` | Combination of `ALRReacherSparse-v0` and `ALRReacherShort-v0`. |
| `ALRReacher7-v0` | Modified version of Mujoco gym's `Reacher-v2` with 7 links instead of 2. |
| `ALRReacher7Sparse-v0` | Same as `ALRReacher7-v0`, but the distance penalty is only provided in the last time step. |
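The difference between the dense and sparse variants can be sketched in plain Python (this is an illustration of the reward schemes described above, not code from this repository): the dense scheme applies the distance penalty at every time step, while the sparse scheme applies it only at the last one.

```python
# Illustrative dense vs. sparse reward schemes over an episode,
# given the end-effector distance to the target at each time step.

def dense_rewards(distances):
    # Distance penalty at every time step.
    return [-d for d in distances]

def sparse_rewards(distances):
    # Zero reward everywhere except the last time step.
    return [0.0] * (len(distances) - 1) + [-distances[-1]]

dists = [3.0, 2.0, 1.0, 0.5]
dense_rewards(dists)   # [-3.0, -2.0, -1.0, -0.5]
sparse_rewards(dists)  # [0.0, 0.0, 0.0, -0.5]
```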

### Classic Control

| Name | Description |
| ---- | ----------- |
| `SimpleReacher-v0` | Simple reaching task without any physics simulation. Returns no reward until 150 time steps; this allows the agent to explore the space, but requires precise actions towards the end of the trajectory. |

### Stochastic Search

| Name | Description |
| ---- | ----------- |
| `Rosenbrock{dim}-v0` | Gym interface for the Rosenbrock function. `{dim}` is one of 5, 10, 25, 50 or 100. |
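For reference, the textbook n-dimensional Rosenbrock function that these environments expose looks as follows (the environments themselves may scale or shift it; this is just the standard definition):

```python
# Standard n-dimensional Rosenbrock function:
# f(x) = sum_i [ 100 * (x[i+1] - x[i]^2)^2 + (1 - x[i])^2 ]

def rosenbrock(x):
    return sum(
        100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
        for i in range(len(x) - 1)
    )

rosenbrock([1.0] * 5)  # 0.0 -- the global minimum is at x = (1, ..., 1)
```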

## Install

1. Clone the repository:
   ```bash
   git clone git@github.com:ALRhub/alr_envs.git
   ```
2. Go to the folder:
   ```bash
   cd alr_envs
   ```
3. Install with:
   ```bash
   pip install -e .
   ```
4. Use (see `example.py`):
   ```python
   import gym

   env = gym.make('alr_envs:SimpleReacher-v0')
   state = env.reset()

   for i in range(10000):
       state, reward, done, info = env.step(env.action_space.sample())
       if i % 5 == 0:
           env.render()

       if done:
           state = env.reset()
   ```