# ALR Custom Environments

This repository collects custom RL environments that are not included in standard suites like OpenAI gym, rllab, etc. A custom (Mujoco) gym environment can be created according to this guide: https://github.com/openai/gym/blob/master/docs/creating-environments.md
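The key step in that guide is registering the new environment with gym so that `gym.make` can find it. A minimal sketch of such a registration; the package, id, and class names here are placeholders for illustration, not the ones used in this repository:

```python
from gym.envs.registration import register

# Hypothetical names, for illustration only.
register(
    id='MyCustomReacher-v0',                           # unique id in Name-vX format
    entry_point='my_package.envs:MyCustomReacherEnv',  # module:class of the env
    max_episode_steps=200,                             # optional episode length cap
)
```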

## Environments

Currently we have the following environments:

### Mujoco

|Name|Description|
|---|---|
|`ALRReacher-v0`|Modification (5 links) of Mujoco Gym's `Reacher` (2 links)|

### Classic Control

|Name|Description|
|---|---|
|`SimpleReacher-v0`|Simple reaching task without any physics simulation. Returns no reward until 150 time steps. This allows the agent to explore the space, but requires precise actions towards the end of the trajectory (see the sketch below).|
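As a rough illustration of that sparse-reward scheme, the pattern looks like the sketch below; this is a simplified assumption, not the actual reward function from this repository:

```python
# Hypothetical sketch of the sparse-reward idea described above; the real
# SimpleReacher reward may be shaped and weighted differently.
def reward(step, distance_to_target):
    if step < 150:
        return 0.0               # exploration phase: no feedback yet
    return -distance_to_target   # afterwards: penalize distance to the goal
```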

## INSTALL

1. Clone the repository

   ```bash
   git clone git@github.com:ALRhub/alr_envs.git
   ```

2. Go to the folder

   ```bash
   cd alr_envs
   ```

3. Install with

   ```bash
   pip install -e .
   ```

4. Use (see `example.py`):
   ```python
   import gym

   env = gym.make('alr_envs:SimpleReacher-v0')
   state = env.reset()

   for i in range(10000):
       # sample and execute a random action
       state, reward, done, info = env.step(env.action_space.sample())
       if i % 5 == 0:
           # render only every 5th step to keep the loop fast
           env.render()

       if done:
           # start a new episode once the current one terminates
           state = env.reset()
   ```