Release Notes

v1.3.1

Released on 2024-10-14 - GitHub - PyPI

Release Notes:

A few bug fixes and fixes to the internal testing.

Full Changelog: v1.3.0...v1.3.1

v1.3.0

Released on 2024-10-08 - GitHub - PyPI

Release Notes:

1.3 is a major release, adding new versions of the environments and supporting gymnasium==1.0.0.

New Environments:

MaMuJoCo-v1

  • Now based on Gymnasium/MuJoCo-v5 instead of Gymnasium/MuJoCo-v4 (Farama-Foundation/Gymnasium#572).
  • When factorization=None, env.agent_action_partitions.dummy_node now contains action_id (it used to be None).
  • Added map_local_observations_to_global_state & optimized runtime performance of map_global_state_to_local_observations.
  • Added a gym_env argument, which can be used to load third-party Gymnasium.MujocoEnv environments (see the usage sketch below).
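
A minimal usage sketch for the updated environments, assuming the mamujoco_v1 entry point and the PettingZoo parallel API; the scenario name and agent configuration are illustrative:

from gymnasium_robotics import mamujoco_v1

# Load the 2-agent Ant factorization (each agent controls 4 joints).
env = mamujoco_v1.parallel_env("Ant", "2x4")
observations, infos = env.reset(seed=42)

while env.agents:
    # Sample one action per agent from that agent's own action space.
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()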

Ant

  • Now observes local_categories of cfrc_ext by default (same as Gymnasium/MuJoCo-v5/Ant).
  • Renamed the global node torso to root.

Humanoid(-Standup)

  • No longer observes the qfrc_actuator of root, nor the cinert, cvel, qfrc_actuator, and cfrc_ext of worldbody (same as Gymnasium/MuJoCo-v5/Humanoid(-Standup)).

Walker2d

  • Fixed bug: global nodes are now [root_x, root_z, root_y] (used to be [root_x, root_x, root_z]).

ManySegmentAnt

  • frame_skip default set to 5 (same as Gymnasium/Ant).
  • option.timestep set to 0.01 (same as Gymnasium/Ant).
  • Now uses the same reward function as Gymnasium/Ant.
  • Now observes cfrc_ext by default (same as Gymnasium/MuJoCo-v5).

ManySegmentSwimmer

  • Now uses the same option.timestep as Gymnasium/Swimmer (0.01).
  • Updated model to work with mujoco>=3.0.0.

Full Changelog: v1.2.4...v1.3.0

v1.2.4

Released on 2023-12-24 - GitHub - PyPI

Release Notes:

A minor release bringing a bug fix and registering a pre-existing environment. It is the first release since @Kallinteris-Andreas became the project manager.

What's Changed

Documentation Updates:

  • update observation space docstring: obs[6:8] is actually block - gripper for fetch pick_and_place push slide environments by @SethPate in #197
  • remove py3.7 from installation.md documentation by @Kallinteris-Andreas in #199

Full Changelog: v1.2.3...v1.2.4

v1.2.3

Released on 2023-09-18 - GitHub - PyPI

Gymnasium-Robotics v1.2.3 Release Notes:

Breaking changes:

  • Drop support for Python 3.7, which has reached its end of life. (#159)
  • New v4 version of the AntMaze environments that fixes issue #155. (#156)

Bug Fixes:

  • Allow computing rewards from batched observations in maze environments (PointMaze/AntMaze); see the batched-reward sketch after this list (#153, #158)
  • Bump the AntMaze environments' version to v4, which fixes issue #155. The following new files have been added to the source code: ant_maze_v4.py and maze_v4.py (#156). The fixes involve:
    • When the environment is initialized with continuing_task=True, the reward is now calculated before resetting the goal location. Previously, the reward was always zero regardless of whether the ant reached the goal during the episode.
    • Fix the ant agent being reset into a terminal state. The maze_size_scaling factor was missing in the distance check in MazeEnv.generate_reset_pos().
    • Add a success item to the info return: info["success"].
  • Fix goal_cell and reset_cell assertions when resetting maze environments (#164, #179)
  • Fix issue #166 in the FrankaKitchen environment: info["tasks_to_complete"] was not giving the correct values. (#169)
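
A minimal sketch of batched reward computation, assuming the standard GoalEnv compute_reward signature; the environment id and array shapes are illustrative:

import gymnasium as gym
import numpy as np

env = gym.make("PointMaze_UMaze-v3")

# A batch of 4 (achieved, desired) goal pairs with shape (batch, goal_dim).
achieved_goals = np.zeros((4, 2))
desired_goals = np.ones((4, 2))

# compute_reward accepts batched inputs and returns one reward per row.
rewards = env.unwrapped.compute_reward(achieved_goals, desired_goals, {})
print(rewards.shape)  # (4,)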

New Features

  • Add a reset_target boolean argument for initializing maze environments. If reset_target=True and continuing_task=True, the goal is automatically placed in a new location when the agent reaches it within the same episode. If reset_target=False and continuing_task=True, the goal location is not updated when reached, and reward keeps accumulating as long as the agent stays within the goal threshold (see the sketch after this list). (#167, #170)
  • For maze environments, if the goal and reset cell locations are not given in the maze map structure, they are chosen automatically among the empty cells. (#170)
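
A minimal initialization sketch combining these options; the environment id and values are illustrative:

import gymnasium as gym

# With continuing_task=True and reset_target=True, the goal is moved to a
# new empty cell every time the agent reaches it within the same episode.
env = gym.make("PointMaze_UMaze-v3", continuing_task=True, reset_target=True)
obs, info = env.reset(seed=0)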

Dependency Updates

  • Remove restrictions on the numpy version: numpy>=1.21.0 (#154)
  • Remove restrictions on the mujoco version: mujoco>=2.3.3 (#171)
  • Restrict the cython version to cython<3 due to issue Farama-Foundation/Gymnasium#616 (#162)

Documentation Updates

  • Replace main logo svg format with png (#160)
  • Update sphinx to latest version (#157)
  • Add release notes changelog (#174)
  • Remove versioning for included environments in documentation and update gifs for maze environments (#172, #177)
  • Fix table format for Shadow Dexterous Hand - Reach environment (#178)

Full Changelog: v1.2.2...v1.2.3

v1.2.2

Released on 2023-05-17 - GitHub - PyPI

Release Notes

This minor release updates MaMuJoCo to follow the latest PettingZoo version (1.23.0) and includes some minor bug fixes in the GitHub PyPI publish workflow.

Full Changelog: v1.2.1...v1.2.2

v1.2.1

Released on 2023-05-16 - GitHub - PyPI

Gymnasium-Robotics 1.2.1 Release Notes:

This minor release adds new multi-agent environments from the MaMuJoCo project. These environments have been updated to follow the PettingZoo API and use the latest mujoco bindings. In addition, the updates made for the first release of the FrankaKitchen-v1 environment have been reverted so that the environment more closely resembles its original version in relay-policy-learning and D4RL. This resolves existing confusion with the action space (#135) and facilitates the re-creation of datasets in Minari.

We are also pinning the mujoco version to v2.3.3 until we address the following issue (google-deepmind/mujoco#833).

Breaking Changes

  • Revert the FrankaKitchen-v1 environment to its original form, by @rodrigodelazcano in #145. These changes involve (see the loading sketch after this list):
    • robot model: use the Franka robot model of the original environment instead of the model provided in mujoco_menagerie.
    • action space: remove the Inverse Kinematics control option and keep a single action space, the original joint velocity control.
    • goal tasks: remove some tasks that were not present in the original environment (top_right_burner and bottom_right_burner). The task names now match the original naming.
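
A minimal loading sketch for the reverted environment, assuming the tasks_to_complete keyword argument; the chosen tasks are illustrative:

import gymnasium as gym

# The single remaining action space is the original joint velocity control.
env = gym.make("FrankaKitchen-v1", tasks_to_complete=["microwave", "kettle"])
obs, info = env.reset(seed=0)
print(env.action_space)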

Bug Fixes

  • Add missing underscore to fix rendering by @frankroeder in #102
  • Correct point_obs slicing for achieved_goal in PointMaze environments by @dohmjan in #105
  • Update the position of the goal every reset in AntMaze environment by @nicehiro in #106
  • Correct FetchReach environment versioning from v3 to v2 by @aalmuzairee in #121
  • Fix issue #128: use jnt_dofadr instead of jnt_qposadr for the mujoco_utils.get_joint_qvel() utility function, by @rodrigodelazcano in #129
  • Correct x, y scaling for Maze environments by @rodrigodelazcano in #110
  • Fix door state space key by @rodrigodelazcano in #130
  • Make getter functions for qpos / qvel return copies by @hueds in #136

Full Changelog: v1.2.0...v1.2.1

v1.2.0

Released on 2023-01-09 - GitHub - PyPI

Finally here! 🥳 🤖

Refactored versions of the D4RL MuJoCo environments are now available in Gymnasium-Robotics (PointMaze, AntMaze, AdroitHand, and FrankaKitchen). The configuration of these environments is not identical to the originals; please read the new details in the documentation webpage at https://robotics.farama.org/
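
A minimal loading sketch for one of the refactored environments; the environment id and the episode loop are illustrative:

import gymnasium as gym

env = gym.make("PointMaze_UMaze-v3")
obs, info = env.reset(seed=0)

# Goal-conditioned dict observation: observation, achieved_goal, desired_goal.
print(sorted(obs.keys()))

terminated = truncated = False
while not (terminated or truncated):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
env.close()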

Moving forward, we are recreating the offline datasets with Minari and evaluating the environments. If you have any questions or would like to contribute, please don't hesitate to reach out to us through the following Discord channel: https://discord.com/channels/961771112864313344/1017088934238498837

v1.0.1: Deprecate package name (`gym_robotics`->`gymnasium_robotics`)

Released on 2022-10-03 - GitHub - PyPI

What's Changed

The PyPI package name for this repository will change in future releases as part of the integration with Gymnasium. The new name will be gymnasium_robotics, and installation will be done with pip install gymnasium_robotics instead of pip install gym_robotics.

The code for gym_robotics will be kept in the repository branch gym-robotics-legacy.

Bug Fix

  • Remove the warning of duplicated registration of the environment MujocoHandBlockEnv, by @leonasting

v1.0.0: Update to Gym v0.26 and new mujoco bindings

Released on 2022-09-15 - GitHub - PyPI

This new release comes with the following changes:

  • Compatibility with gym v0.26. Previous gym versions won't be compatible with this release. @rodrigodelazcano
  • Added new environment versions that depend on the new mujoco python bindings. @rodrigodelazcano
  • Old environment versions that depend on mujoco_py are still kept but will be unmaintained moving forward. @rodrigodelazcano
  • New utility methods for the GoalEnv class, compute_terminated and compute_truncated, as suggested in #16, by @rodrigodelazcano (a sketch follows below)
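
A minimal sketch of a GoalEnv subclass wiring up the new methods; the import path reflects the package's current layout, and the distance threshold and sparse reward are illustrative rather than the library's exact implementation:

import numpy as np
from gymnasium_robotics.core import GoalEnv

class ToyReachEnv(GoalEnv):
    # Spaces, reset, and step are omitted; this only sketches the goal-API methods.

    def compute_reward(self, achieved_goal, desired_goal, info):
        # Sparse reward: 0 when the goal is reached, -1 otherwise.
        distance = np.linalg.norm(achieved_goal - desired_goal, axis=-1)
        return -(distance > 0.05).astype(np.float64)

    def compute_terminated(self, achieved_goal, desired_goal, info):
        # The episode terminates once the agent is within the threshold.
        return np.linalg.norm(achieved_goal - desired_goal, axis=-1) <= 0.05

    def compute_truncated(self, achieved_goal, desired_goal, info):
        # Time-limit truncation is left to a TimeLimit wrapper.
        return False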

The new versions of the environments that depend on mujoco bindings were validated with respect to the old versions of mujoco_py. The benchmark was performed using TQC + HER (sb3 implementation) with the same hyperparameters for both environment versions. The results can be seen here: https://wandb.ai/rodrigodelazcano/gym_robotics/reports/Benchmark-Gym-Robotics-SB3--VmlldzoyMjc3Mzkw

v0.1.0: Gym update

Released on 2022-02-25 - GitHub - PyPI

Installation Demo

pip install gym-robotics
pip install mujoco_py

# Download the MuJoCo 2.1.0 binaries and extract them into ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz
mkdir -p ~/.mujoco
tar -xzf mujoco210-linux-x86_64.tar.gz -C ~/.mujoco

# Make the MuJoCo (and NVIDIA, if present) libraries visible to the dynamic linker
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco210/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia

Full Changelog: v0.0.2...v0.1.0

v0.0.2

Released on 2022-01-07 - GitHub - PyPI

Full Changelog: https://github.com/Farama-Foundation/gym-robotics/commits/v0.0.2