Release Notes#
v1.2.4#
Released on 2023-12-24 - GitHub - PyPI
Release Notes:
A minor release that brings a bug fix and registers a pre-existing environment. This is the first release since @Kallinteris-Andreas became the project manager.
What's Changed
New Features:
Bug Fixes:
- Randomize maze environments' temporary `xml` file name by @Kallinteris-Andreas in #185
Dependency Updates:
- Limit `mujoco<3.0` by @Kallinteris-Andreas in #187
Minor Changes:
- Update to `pyright==1.1.339` by @Kallinteris-Andreas in #191
- Update `pyproject.toml` Python version to 3.8 by @Kallinteris-Andreas in #191
Documentation Updates:
- Update the observation space docstring: `obs[6:8]` is actually block - gripper for the Fetch `pick_and_place`, `push`, and `slide` environments by @SethPate in #197
- Remove Python 3.7 from the installation.md documentation by @Kallinteris-Andreas in #199
Full Changelog: v1.2.3...v1.2.4
v1.2.3#
Released on 2023-09-18 - GitHub - PyPI
Gymnasium-Robotics v1.2.3 Release Notes:
Breaking changes:
- Drop support for Python 3.7 which has reached its end of life. (#159)
- New `v4` version for the `AntMaze` environments that fixes issue #155. (#156)
Bug Fixes:
- Allow computing rewards from batched observations in the maze environments (`PointMaze`/`AntMaze`) (#153, #158). See the sketch after this list.
- Bump the `AntMaze` environments version to `v4`, which fixes issue #155. The following new files have been added to the source code: `ant_maze_v4.py` and `maze_v4.py`. (#156). The fixes involve:
  - When the environment is initialized with `continuing_task=True`, the reward is now calculated before resetting the goal location. Previously the reward was always zero, whether or not the ant reached the goal during the full episode.
  - Fix the ant agent being reset into a terminal state. The `maze_size_scaling` factor was missing in the distance check in `MazeEnv.generate_reset_pos()`.
  - Add a `success` item to the `info` return, `info["success"]`.
- Fix `goal_cell` and `reset_cell` assertions when resetting maze environments (#164, #179)
- Fix issue #166 in the `FrankaKitchen` environment: `info["tasks_to_complete"]` was not giving the correct values. (#169)
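A minimal sketch of the batched reward computation, assuming the registered `PointMaze_UMaze-v3` id and the `compute_reward(achieved_goal, desired_goal, info)` interface exposed by the maze environments; the batch shapes and the unused `info` argument are illustrative assumptions:

```python
import gymnasium as gym
import numpy as np

env = gym.make("PointMaze_UMaze-v3")
env.reset(seed=0)

# A batch of (x, y) goals; the maze goal space is 2-dimensional.
achieved_goal = np.zeros((8, 2))
desired_goal = np.full((8, 2), 0.5)

# After #153/#158 the reward can be computed for the whole batch at once
# (info is assumed unused by the distance-based maze reward).
rewards = env.unwrapped.compute_reward(achieved_goal, desired_goal, None)
print(rewards.shape)  # (8,)
env.close()
```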
New Features
- Add a `reset_target` boolean argument for initializing maze environments. If `reset_target=True` and `continuing_task=True`, the goal will be automatically placed in a new location when the agent reaches it within the same episode. If `reset_target=False` and `continuing_task=True`, the goal location won't be updated when reached by the agent, and reward will keep accruing as long as the agent stays within the goal threshold. (#167, #170)
- For maze environments, if the goal and reset cell locations are not given in the maze map structure, they will be chosen automatically among the empty cells. (#170) A usage sketch follows this list.
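A minimal usage sketch of these options, assuming the registered `PointMaze_UMaze-v3` id; the map values follow the maze convention where `1` marks a wall, `0` a free cell, and `"g"`/`"r"` candidate goal and reset cells:

```python
import gymnasium as gym

# Custom maze map: 1 = wall, 0 = free cell, "g" = possible goal cell,
# "r" = possible reset (start) cell.
maze_map = [
    [1, 1, 1, 1, 1],
    [1, "r", 0, "g", 1],
    [1, 1, 1, 1, 1],
]

# With continuing_task=True and reset_target=True the goal is re-sampled
# every time the agent reaches it within the same episode.
env = gym.make(
    "PointMaze_UMaze-v3",
    maze_map=maze_map,
    continuing_task=True,
    reset_target=True,
)
obs, info = env.reset(seed=42)
for _ in range(500):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```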
Dependency Updates
- Remove restrictions on the numpy version, `numpy>=1.21.0` (#154)
- Remove restrictions on the mujoco version, `mujoco>=2.3.3` (#171)
- Restrict the cython version to `cython<3` due to the following issue: Farama-Foundation/Gymnasium#616 (#162)
Documentation Updates
- Replace main logo svg format with png (#160)
- Update sphinx to latest version (#157)
- Add release notes changelog (#174)
- Remove versioning for included environments in documentation and update gifs for maze environments (#172, #177)
- Fix table format for Shadow Dexterous Hand - Reach environment (#178)
Full Changelog: v1.2.2...v1.2.3
v1.2.2#
Released on 2023-05-17 - GitHub - PyPI
Release Notes
This minor release updates MaMuJoCo to follow the latest PettingZoo version, 1.23.0, and includes some minor bug fixes in the GitHub PyPI publish workflow.
New Features
- Update MaMuJoCo to PettingZoo `1.23.0` by @Kallinteris-Andreas in #150
Bug Fix
- Include the Franka mesh files in the `setuptools.package-data` parameter of `pyproject.toml` by @rodrigodelazcano in 9db9196
Documentation Updates
- `MaMuJoCo` documentation: remove the install-from-source instructions by @Kallinteris-Andreas in #151
Full Changelog: v1.2.1...v1.2.2
v1.2.1#
Released on 2023-05-16 - GitHub - PyPI
Gymnasium-Robotics 1.2.1 Release Notes:
This minor release adds new multi-agent environments from the MaMuJoCo project. These environments have been updated to follow the PettingZoo API and use the latest mujoco bindings. In addition, the updates made for the first release of the FrankaKitchen-v1 environment have been reverted so that the environment more closely resembles its original version in relay-policy-learning and D4RL. This will resolve existing confusion with the action space (#135) and facilitate the re-creation of datasets in Minari.
We are also pinning the mujoco version to v2.3.3 until we address the following issue (google-deepmind/mujoco#833).
Breaking Changes
- Revert the `FrankaKitchen-v1` environment to the original. @rodrigodelazcano in #145. These changes involve:
  - Robot model: use the Franka robot model of the original environment instead of the model provided in mujoco_menagerie.
  - Action space: remove the Inverse Kinematics control option and keep a single action space, the original joint velocity control.
  - Goal tasks: some tasks that were not present in the original environment have been removed (`top_right_burner` and `bottom_right_burner`). Also, the task names now match the original naming.
New Features
- Add MaMuJoCo (Multi-agent MuJoCo) environments by @Kallinteris-Andreas in #53. Documentation has also been included at https://robotics.farama.org/envs/MaMuJoCo/. NOTE: we are currently in the process of validating these environments (#141)
- Initialize the `PointMaze` and `AntMaze` environments with random goal and reset positions by default. @rodrigodelazcano in #110, #114
- Add a `success` key to the `info` return dictionary in all `Maze` environments. @rodrigodelazcano in #110
- Recover the `set_env_state(state_dict={})` method of the Adroit hand environments from https://github.com/vikashplus/mj_envs. The initial state of the simulation can also be set by passing the dictionary argument `initial_state_dict` when calling `env.reset(options={'initial_state_dict': Dict})`. @rodrigodelazcano in #119, #115. A sketch follows this list.
- Resparsify the Adroit hand environments by @jjshoots in #111
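A minimal sketch of saving and restoring an Adroit hand state, assuming the registered `AdroitHandDoor-v1` id and a `get_env_state()` counterpart to the recovered setter (as in the original mj_envs code); if the getter is not available, the state dictionary has to be assembled manually:

```python
import gymnasium as gym

env = gym.make("AdroitHandDoor-v1")
obs, info = env.reset(seed=0)

# Capture the full simulation state (assumed get_env_state() helper).
state_dict = env.unwrapped.get_env_state()

# ... interact with the environment ...
env.step(env.action_space.sample())

# Restore the saved state directly ...
env.unwrapped.set_env_state(state_dict)

# ... or set it as the initial state of a new episode.
obs, info = env.reset(options={"initial_state_dict": state_dict})
env.close()
```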
Bug Fixes
- Add missing underscore to fix rendering by @frankroeder in #102
- Correct `point_obs` slicing for `achieved_goal` in the `PointMaze` environments by @dohmjan in #105
- Update the position of the goal on every reset in the `AntMaze` environment by @nicehiro in #106
- Correct the `FetchReach` environment versioning from `v3` to `v2` by @aalmuzairee in #121
- Fix issue #128: use `jnt_dofadr` instead of `jnt_qposadr` for the `mujoco_utils.get_joint_qvel()` utility function, by @rodrigodelazcano in #129
- Correct the x, y scaling for the Maze environments @rodrigodelazcano in #110
- Fix door state space key by @rodrigodelazcano in #130
- Make the getter functions for qpos / qvel return copies by @hueds in #136 (see the sketch after this list)
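A hedged sketch of the affected utilities, assuming the `(model, data, joint_name)` signature of the `mujoco_utils` getters and the `robot0:shoulder_pan_joint` name from the Fetch model; after #136 the returned arrays are copies, so mutating them does not change the simulation state:

```python
import gymnasium as gym
from gymnasium_robotics.utils import mujoco_utils

env = gym.make("FetchReach-v2")
env.reset(seed=0)
model, data = env.unwrapped.model, env.unwrapped.data

# Read one joint's position and velocity through the utility getters.
qpos = mujoco_utils.get_joint_qpos(model, data, "robot0:shoulder_pan_joint")
qvel = mujoco_utils.get_joint_qvel(model, data, "robot0:shoulder_pan_joint")

# The returned arrays are copies (#136): modifying them has no effect on the sim.
qpos[:] = 0.0
env.close()
```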
Minor Changes
- Enable `pyright.reportOptionalMemberAccess` by @Kallinteris-Andreas in #93
- Add Farama Notifications by @jjshoots in #120
Documentation
- Fix the observation space table in the `FetchSlide` docs. @rodrigodelazcano in #109
- Update docs/README.md to link to a new CONTRIBUTING.md for docs by @mgoulao in #117
- Add docs versioning and release notes by @mgoulao in #124
- Fix missing edit button by @mgoulao in #138
- Add missing docs requirement by @mgoulao in #125
- Add a sparse reward variant for the `AdroitHand` environments by @jjshoots in #123
Full Changelog: v1.2.0...v1.2.1
v1.2.0: Version 1.2.0#
Released on 2023-01-09 - GitHub - PyPI
Finally here! 🥳 🤖
Refactored versions of the D4RL MuJoCo environments are now available in Gymnasium-Robotics (PointMaze, AntMaze, AdroitHand, and FrankaKitchen). The configuration of these environments is not identical to the originals; please read the new details in the documentation webpage at https://robotics.farama.org/
Moving forward, we are recreating the offline datasets with Minari and evaluating the environments. If you have any questions or would like to contribute, please don't hesitate to reach out to us through the following Discord channel: https://discord.com/channels/961771112864313344/1017088934238498837
What's Changed
- Add three different refactored environment types from MuJoCo D4RL and update them to the Gymnasium API standards version `0.27.0`: Point Maze, Ant Maze, Adroit Hand, and FrankaKitchen. @rodrigodelazcano
- Add a `sparse` reward option to the Adroit Hand environments @jjshoots in #69
- Standardize the file tree structure to facilitate automatic documentation generation. The environment file structure should look as follows: `gymnasium_robotics.envs.env_type.env_name:EnvClass`. @rodrigodelazcano in #83 (an illustrative sketch follows this list)
- Update to the new Gymnasium `MujocoRenderer` class Farama-Foundation/Gymnasium#112 for rendering @rodrigodelazcano
- Add `pydocstyle` to pre-commit @rodrigodelazcano
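An illustrative registration following that layout; the id, entry point, and episode limit below are hypothetical examples of the `gymnasium_robotics.envs.env_type.env_name:EnvClass` pattern, not identifiers shipped by the package:

```python
from gymnasium.envs.registration import register

# Hypothetical entry point following the standardized file tree:
# gymnasium_robotics.envs.<env_type>.<env_name>:<EnvClass>
register(
    id="MyCustomMaze-v0",
    entry_point="gymnasium_robotics.envs.maze.point_maze:PointMazeEnv",
    max_episode_steps=300,
)
```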
Other contributions
- Miscellaneous documentation webpage fixes @mgoulao, @SiddarGu, @jjshoots
- Pin numpy to `numpy<1.24.0` due to #221 @rodrigodelazcano
- Move dependency installs and setuptools to `pyproject.toml`. Remove `requirements.txt` and `test_requirements.txt` @jjshoots
- Add Google Analytics to the webpage @mgoulao
- Switch flake8 from gitlab to github @RedTachyon in #52
- Code/Documentation typo fixes @araffin , @Kallinteris-Andreas
New Contributors
- @mgoulao made their first contribution in #35
- @araffin made their first contribution in #39
- @RedTachyon made their first contribution in #52
- @Kallinteris-Andreas made their first contribution in #54
v1.0.1: Deprecate package name (`gym_robotics`->`gymnasium_robotics`)#
Released on 2022-10-03 - GitHub - PyPI
What's Changed
The PyPI package name for this repository will be changed in future releases and in the integration with Gymnasium. The new name will be `gymnasium_robotics`, and installation will be done with `pip install gymnasium_robotics` instead of `pip install gym_robotics`.
The code for `gym_robotics` will be kept in the repository branch `gym-robotics-legacy`.
Bug Fix
- Remove the warning of duplicated registration of the environment MujocoHandBlockEnv @leonasting
v1.0.0: Update to Gym v0.26 and new mujoco bindings#
Released on 2022-09-15 - GitHub - PyPI
This new release comes with the following changes:
- Compatibility with gym v0.26. Previous gym versions won't be compatible with this release. @rodrigodelazcano
- Added new environment versions that depend on the new mujoco python bindings. @rodrigodelazcano
- Old environment versions that depend on `mujoco_py` are still kept but will be unmaintained moving forward. @rodrigodelazcano
- New utility methods for the `GoalEnv` class, as suggested in #16: `compute_terminated` and `compute_truncated`. @rodrigodelazcano (a minimal sketch follows this list)
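A minimal sketch of the new hooks, assuming `GoalEnv` is importable from `gymnasium_robotics.core` (at the time of this release it lived under the `gym_robotics` package name) and that the methods take the same `(achieved_goal, desired_goal, info)` arguments as `compute_reward`:

```python
import numpy as np
from gymnasium_robotics.core import GoalEnv

class MyGoalEnv(GoalEnv):
    """Sketch of a goal-conditioned env using the new utility methods."""

    def compute_reward(self, achieved_goal, desired_goal, info):
        # Negative Euclidean distance to the goal.
        return -np.linalg.norm(achieved_goal - desired_goal, axis=-1)

    def compute_terminated(self, achieved_goal, desired_goal, info):
        # Terminate once the goal is reached within a small tolerance.
        return np.linalg.norm(achieved_goal - desired_goal, axis=-1) < 0.05

    def compute_truncated(self, achieved_goal, desired_goal, info):
        # Truncation (e.g. time limits) is typically handled by a wrapper.
        return False
```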
The new environment versions that depend on the new mujoco bindings were validated against the old mujoco_py versions. The benchmark was performed using TQC + HER (the sb3 implementation) with the same hyperparameters for both environment versions. The results can be seen here: https://wandb.ai/rodrigodelazcano/gym_robotics/reports/Benchmark-Gym-Robotics-SB3--VmlldzoyMjc3Mzkw
v0.1.0: Gym update#
Released on 2022-02-25 - GitHub - PyPI
What's Changed
- Change workflow name by @vwxyzjn in #4
- Adopt the `gym>=0.22` reset signature by @vwxyzjn in #8
- Use `gym>=0.22` as the core dependency by @vwxyzjn in #9
Installation Demo
```bash
pip install gym-robotics
pip install mujoco_py
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz
mkdir -p ~/.mujoco
tar -xzf mujoco210-linux-x86_64.tar.gz -C ~/.mujoco
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco210/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia
```
Full Changelog: v0.0.2...v0.1.0
v0.0.2#
Released on 2022-01-07 - GitHub - PyPI
What's Changed
- Migrate robotics environments from OpenAI Gym by @seungjaeryanlee in #1
- Use Gym plugin system by @JesseFarebro in #2
- Setup github actions to publish on PyPi by @vwxyzjn in #3
New Contributors
- @seungjaeryanlee made their first contribution in #1
- @JesseFarebro made their first contribution in #2
- @vwxyzjn made their first contribution in #3
Full Changelog: https://github.com/Farama-Foundation/gym-robotics/commits/v0.0.2