<h1 align="center">
  <br>
  <img src='./icon.svg' width="250px">
  <br><br>
  <b>Fancy Gym</b>
  <br>
  <br>
</h1>

`fancy_gym` offers a large variety of reinforcement learning environments under the unifying interface of [Gymnasium](https://gymnasium.farama.org/).

We provide support (under the Gymnasium interface) for the benchmark suites [DeepMind Control](https://deepmind.com/research/publications/2020/dm-control-Software-and-Tasks-for-Continuous-Control) (DMC) and [Metaworld](https://meta-world.github.io/). If those are not sufficient and you want to create your own custom gym environments, use [this guide](https://www.gymlibrary.dev/content/environment_creation/). We highly appreciate it if you then submit a PR for this environment to become part of `fancy_gym`.

In comparison to existing libraries, we additionally support controlling agents with movement primitives, such as Dynamic Movement Primitives (DMPs) and Probabilistic Movement Primitives (ProMPs).

## Movement Primitive Environments (Episode-Based/Black-Box Environments)

Unlike step-based environments, movement primitive (MP) environments are more closely related to stochastic search, black-box optimization, and methods often used in traditional robotics and control. MP environments are typically episode-based and execute a full trajectory, which is generated by a trajectory generator, such as a Dynamic Movement Primitive (DMP) or a Probabilistic Movement Primitive (ProMP). The generated trajectory is translated into individual step-wise actions by a trajectory tracking controller. The exact choice of controller depends on the type of environment: we currently support position, velocity, and PD-controllers for position, velocity, and torque control, respectively, as well as a special controller for the MetaWorld control suite.

The goal of all MP environments is still to learn an optimal policy. Yet, an action now represents the parametrization of the motion primitives used to generate a suitable trajectory. Additionally, this framework supports all of this for the contextual setting as well, i.e. we expose the context space - a subset of the observation space - at the beginning of the episode. This requires predicting a new action/MP parametrization for each context.

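To make this pipeline concrete, here is a minimal, self-contained sketch of the idea - plain NumPy, not the fancy_gym API; all names and the toy dynamics are illustrative assumptions:

```python
import numpy as np

# One episode-level "action" is a parameter vector theta for the trajectory
# generator; a tracking controller turns the desired trajectory into
# low-level commands executed over the whole episode.

def promp_like_trajectory(theta, n_steps=100, n_basis=5):
    """Toy trajectory generator: weighted sum of Gaussian basis functions."""
    t = np.linspace(0, 1, n_steps)
    centers = np.linspace(0, 1, n_basis)
    basis = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.1) ** 2)
    return basis @ theta  # desired positions, shape (n_steps,)

def pd_controller(q_des, q, qd, kp=10.0, kd=1.0):
    """Toy PD tracking controller: torque from position/velocity errors."""
    return kp * (q_des - q) - kd * qd

theta = np.random.randn(5)       # single episode-level action (MP parameters)
q, qd, dt = 0.0, 0.0, 0.05       # toy 1-DoF system state and time step
for q_des in promp_like_trajectory(theta):
    torque = pd_controller(q_des, q, qd)  # step-wise low-level action
    qd += dt * torque                     # toy unit-mass dynamics
    q += dt * qd
```
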
## Installation

1. Clone the repository

```bash
git clone git@github.com:ALRhub/fancy_gym.git
```

2. Go to the folder

```bash
cd fancy_gym
```

3. Install with

```bash
pip install -e .
```

In case you want to use dm_control or metaworld, you can install them by specifying extras

```bash
pip install -e .[dmc,metaworld]
```

> **Note:**
> While our library already fully supports the new mujoco bindings, metaworld still relies on
> [mujoco_py](https://github.com/openai/mujoco-py), hence make sure to have mujoco 2.1 installed beforehand.

## How to use Fancy Gym

We will only show the basics here; we have prepared [multiple examples](fancy_gym/examples/) for a more detailed look.

### Step-wise Environments

```python
import fancy_gym

env = fancy_gym.make('Reacher5d-v0', seed=1)
obs = env.reset()

for i in range(1000):
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    if i % 5 == 0:
        env.render()

    if done:
        obs = env.reset()
```

When using `dm_control` tasks, we expect the `env_id` to be specified as `dmc:domain_name-task_name` or, for manipulation tasks, as `dmc:manipulation-environment_name`. For `metaworld` tasks, we require the structure `metaworld:env_id-v2`. Our custom tasks and standard gym environments can be created without prefixes.

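For example - the ids below are illustrative and assume the respective extras are installed:

```python
import fancy_gym

# DeepMind Control suite task: dmc:domain_name-task_name
env = fancy_gym.make('dmc:ball_in_cup-catch', seed=1)

# MetaWorld task: metaworld:env_id-v2
env = fancy_gym.make('metaworld:button-press-v2', seed=1)

# Custom fancy_gym tasks and standard gym tasks need no prefix
env = fancy_gym.make('Reacher5d-v0', seed=1)
```
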
### Black-box Environments

By default, all environments provide the cumulative episode reward; this can, however, be changed if necessary. Optionally, each environment also returns all information collected at each step as part of the infos. This information is, however, mainly meant for debugging and logging, not for training.

| Key                 | Description                                          | Type     |
| ------------------- | ---------------------------------------------------- | -------- |
| `positions`         | Generated trajectory from MP                         | Optional |
| `velocities`        | Generated trajectory from MP                         | Optional |
| `step_actions`      | Step-wise executed action based on controller output | Optional |
| `step_observations` | Step-wise intermediate observations                  | Optional |
| `step_rewards`      | Step-wise rewards                                    | Optional |
| `trajectory_length` | Total number of environment interactions             | Always   |
| `other`             | All other information from the underlying environment is returned as a list of length `trajectory_length`, maintaining the original keys. In case some information is not provided at every time step, the missing values are filled with `None`. | Always |

Existing MP tasks can be created the same way as above. Just keep in mind that calling `step()` executes a full trajectory.

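As a minimal sketch of inspecting these infos - assuming only the default keys listed above:

```python
import fancy_gym

env = fancy_gym.make('Reacher5dProMP-v0', seed=1)
env.reset()
# One black-box step executes the full trajectory in the underlying environment.
obs, reward, done, info = env.step(env.action_space.sample())

print(info['trajectory_length'])  # always provided
print(info.keys())                # plus any optional keys configured for this env
```
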
> **Note:**
> Currently, we are also in the process of enabling replanning as well as learning of sub-trajectories.
> This allows splitting the episode into multiple trajectories and is a hybrid setting between step-based and
> black-box learning.
> While this is already implemented, it is still in beta and requires further testing.
> Feel free to try it and open an issue with any problems that occur.

```python
import fancy_gym

env = fancy_gym.make('Reacher5dProMP-v0', seed=1)
# render() can be called once in the beginning with all necessary arguments.
# To turn it off again, just call render() without any arguments.
env.render(mode='human')

# This returns the context information, not the full state observation
obs = env.reset()

for i in range(5):
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)

    # done is always True as we are working on the episode level, hence we always reset()
    obs = env.reset()
```

To show all available environments, we provide some additional convenience variables. All of them return a dictionary with two keys, `DMP` and `ProMP`, that store a list of available environment ids.

```python
import fancy_gym

print("Fancy Black-box tasks:")
print(fancy_gym.ALL_FANCY_MOVEMENT_PRIMITIVE_ENVIRONMENTS)

print("OpenAI Gym Black-box tasks:")
print(fancy_gym.ALL_GYM_MOVEMENT_PRIMITIVE_ENVIRONMENTS)

print("Deepmind Control Black-box tasks:")
print(fancy_gym.ALL_DMC_MOVEMENT_PRIMITIVE_ENVIRONMENTS)

print("MetaWorld Black-box tasks:")
print(fancy_gym.ALL_METAWORLD_MOVEMENT_PRIMITIVE_ENVIRONMENTS)
```

### How to create a new MP task

In case a required task is not yet supported in the MP framework, it can be created relatively easily. For the task at hand, the following [interface](fancy_gym/black_box/raw_interface_wrapper.py) needs to be implemented.

```python
from abc import abstractmethod
from typing import Union, Tuple

import gym
import numpy as np


class RawInterfaceWrapper(gym.Wrapper):

    @property
    def context_mask(self) -> np.ndarray:
        """
        Returns a boolean mask of the same shape as the observation space.
        It determines whether the observation is returned for the contextual case or not.
        This effectively allows filtering unwanted or unnecessary observations from the full step-based case.
        E.g. velocities starting at 0 only change after the first action. Given we only receive the
        context/part of the first observation, the velocities are not necessary in the observation for the task.
        Returns:
            bool array representing the indices of the observations
        """
        return np.ones(self.env.observation_space.shape[0], dtype=bool)

    @property
    @abstractmethod
    def current_pos(self) -> Union[float, int, np.ndarray, Tuple]:
        """
        Returns the current position of the action/control dimension.
        The dimensionality has to match the action/control dimension.
        This is not required when exclusively using velocity control;
        it should, however, be implemented regardless.
        E.g. the joint positions that are directly or indirectly controlled by the action.
        """
        raise NotImplementedError()

    @property
    @abstractmethod
    def current_vel(self) -> Union[float, int, np.ndarray, Tuple]:
        """
        Returns the current velocity of the action/control dimension.
        The dimensionality has to match the action/control dimension.
        This is not required when exclusively using position control;
        it should, however, be implemented regardless.
        E.g. the joint velocities that are directly or indirectly controlled by the action.
        """
        raise NotImplementedError()
```

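For illustration, a minimal hypothetical wrapper for an environment whose observation concatenates joint positions and velocities might look as follows - the attributes `data.qpos`/`data.qvel` are assumptions about the underlying environment, not part of the interface:

```python
import numpy as np

from fancy_gym.black_box.raw_interface_wrapper import RawInterfaceWrapper


class MyTaskMPWrapper(RawInterfaceWrapper):

    @property
    def context_mask(self) -> np.ndarray:
        # Expose only the joint positions as context; mask out the velocities,
        # which are zero at the start of every episode anyway.
        n_joints = self.env.action_space.shape[0]
        mask = np.zeros(self.env.observation_space.shape[0], dtype=bool)
        mask[:n_joints] = True
        return mask

    @property
    def current_pos(self) -> np.ndarray:
        return self.env.data.qpos[:self.env.action_space.shape[0]].copy()

    @property
    def current_vel(self) -> np.ndarray:
        return self.env.data.qvel[:self.env.action_space.shape[0]].copy()
```
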
If you created a new task wrapper, feel free to open a PR so we can integrate it for others to use as well. Even without the integration, the task can still be used. A rough outline is shown below; for more details, we recommend having a look at the [examples](fancy_gym/examples/).

```python
import fancy_gym

# Base environment name, according to the structure of the above example
base_env_id = "dmc:ball_in_cup-catch"

# Replace this wrapper with the custom wrapper for your environment by inheriting from the RawInterfaceWrapper.
# You can also add other gym.Wrappers in case they are needed,
# e.g. gym.wrappers.FlattenObservation for dict observations
wrappers = [fancy_gym.dmc.suite.ball_in_cup.MPWrapper]
kwargs = {...}
env = fancy_gym.make_bb(base_env_id, wrappers=wrappers, seed=0, **kwargs)

rewards = 0
obs = env.reset()

# number of samples/full trajectories (multiple environment steps)
for i in range(5):
    ac = env.action_space.sample()
    obs, reward, done, info = env.step(ac)
    rewards += reward

    if done:
        print(base_env_id, rewards)
        rewards = 0
        obs = env.reset()
```