Space Robotics Bench

The Space Robotics Bench aims to be a comprehensive collection of environments and tasks for robotics research in the challenging domain of space. The benchmark covers a wide range of applications and scenarios while providing a unified framework for experimenting with new tasks. Although the primary focus is on the application of robot learning techniques, the benchmark is designed to be flexible and extensible to accommodate a variety of research directions.

This documentation is currently incomplete. Inactive pages in the navigation panel indicate topics that will be covered prior to the first release. If something is missing, or if there is a specific topic you would like documented first, please let us know by opening an issue. Thank you! :)

Key Features

On-Demand Procedural Generation with Blender

Blender is used to generate procedural assets across a wide range of scenarios, providing environments that are representative of the diversity of space. In doing so, the benchmark emphasizes the need for generalization and adaptability of robots in space, given the safety-critical nature of their missions.

Highly-Parallelized Simulation with NVIDIA Isaac Sim

By leveraging the hardware-acceleration capabilities of NVIDIA Isaac Sim, all environments support parallel simulation instances, significantly accelerating workflows such as parameter tuning, verification, synthetic data generation, and online learning. The uniqueness of each procedurally generated instance also contributes to the diversity that robots experience, complementing the included domain randomization. Furthermore, compatibility with Isaac Lab provides access to a wide array of pre-configured robots and sensors.

Compatibility with Gymnasium API

All tasks are registered with the standardized Gymnasium API, ensuring seamless integration with a broad ecosystem of libraries and tools. This enables developers to leverage popular reinforcement learning and imitation learning algorithms while simplifying the evaluation and comparison of solutions across diverse scenarios, fostering collaboration.

Integration with ROS 2 & Space ROS

The benchmark can also be installed as a ROS 2 package to provide interoperability with its wide ecosystem, including Space ROS. This integration gives access to a rich set of tools and libraries that accelerate the development and deployment of robotic systems. At the same time, ROS developers gain access to a set of reproducible space environments for evaluating their systems and algorithms while benefiting from procedural variety and parallel instances via namespaced middleware communication.

Agnostic Interfaces

The interfaces of the benchmark are designed with abstraction layers to ensure flexibility for various applications and systems. By adjusting configuration and changing procedural pipelines, a single task definition can be reused across different robots and domains of space. Moreover, all assets are decoupled from the benchmark into a separate srb_assets repository, enabling their straightforward integration with external frameworks and projects.

Environments

This section provides an overview of the environments currently available in the Space Robotics Bench.

Before using these environments:

  1. Ensure you meet the system requirements
  2. Install the benchmark
  3. Learn the basic usage

Environments are separated into two categories:

  • Tasks come with an objective for an agent to complete. Each task environment therefore provides a reward signal that guides the agent towards the goal.
  • Demos provide a sandbox of the included capabilities without any specific objective. These environments can serve as a starting point when designing a new scenario or defining the action and observation spaces.

The environments are grouped based on the robot type:

Mobile Robot Environments

Mobile environments are currently limited to simple demos for wheeled and aerial robots. Future plans include integrating spacecraft in orbital scenarios and defining a set of tasks for each robot type.

Wheeled Robot Environments

Tasks

No tasks are implemented at the moment.

Demos

Perseverance

.docker/run.bash -e SRB_SCENARIO=mars scripts/teleop.py --env perseverance

Aerial Robot Environments

Tasks

No tasks are implemented at the moment.

Demos

Ingenuity

.docker/run.bash -e SRB_SCENARIO=mars scripts/teleop.py --env ingenuity

Robotic Manipulation Environments

Tasks

Sample Collection

Scenario: Moon, Objects: Procedural

.docker/run.bash -e SRB_SCENARIO=moon -e SRB_ASSETS_OBJECT_VARIANT=procedural scripts/teleop.py --env sample_collection

Scenario: Mars, Objects: Dataset

.docker/run.bash -e SRB_SCENARIO=mars -e SRB_ASSETS_OBJECT_VARIANT=dataset scripts/teleop.py --env sample_collection

Other Examples

# Scenario: Moon, Objects: Primitive
.docker/run.bash -e SRB_SCENARIO=moon -e SRB_ASSETS_OBJECT_VARIANT=primitive scripts/teleop.py --env sample_collection

# Scenario: Mars, Objects: Procedural
.docker/run.bash -e SRB_SCENARIO=mars -e SRB_ASSETS_OBJECT_VARIANT=procedural scripts/teleop.py --env sample_collection

# Scenario: Orbit, Objects: Multi + Dataset
.docker/run.bash -e SRB_SCENARIO=orbit -e SRB_ASSETS_OBJECT_VARIANT=dataset scripts/teleop.py --env sample_collection_multi

Debris Capture

Scenario: Orbit, Objects: Dataset

.docker/run.bash -e SRB_SCENARIO=orbit -e SRB_ASSETS_OBJECT_VARIANT=dataset scripts/teleop.py --env debris_capture

Other Examples

# Scenario: Orbit, Objects: Procedural
.docker/run.bash -e SRB_SCENARIO=orbit -e SRB_ASSETS_OBJECT_VARIANT=procedural scripts/teleop.py --env debris_capture

Peg-in-Hole

Scenario: Moon, Objects: Dataset

.docker/run.bash -e SRB_SCENARIO=moon -e SRB_ASSETS_OBJECT_VARIANT=dataset scripts/teleop.py --env peg_in_hole

Scenario: Orbit, Objects: Dataset

.docker/run.bash -e SRB_SCENARIO=orbit -e SRB_ASSETS_OBJECT_VARIANT=dataset scripts/teleop.py --env peg_in_hole

Other Examples

# Scenario: Moon, Objects: Procedural
.docker/run.bash -e SRB_SCENARIO=moon -e SRB_ASSETS_OBJECT_VARIANT=procedural scripts/teleop.py --env peg_in_hole

# Scenario: Mars, Objects: Dataset
.docker/run.bash -e SRB_SCENARIO=mars -e SRB_ASSETS_OBJECT_VARIANT=dataset scripts/teleop.py --env peg_in_hole

# Scenario: Mars, Objects: Multi + Dataset
.docker/run.bash -e SRB_SCENARIO=mars -e SRB_ASSETS_OBJECT_VARIANT=dataset scripts/teleop.py --env peg_in_hole_multi

# Scenario: Mars, Objects: Multi + Procedural
.docker/run.bash -e SRB_SCENARIO=mars -e SRB_ASSETS_OBJECT_VARIANT=procedural scripts/teleop.py --env peg_in_hole_multi

# Scenario: Orbit, Objects: Multi + Dataset
.docker/run.bash -e SRB_SCENARIO=orbit -e SRB_ASSETS_OBJECT_VARIANT=dataset scripts/teleop.py --env peg_in_hole_multi

Solar Panel Assembly

Scenario: Moon

.docker/run.bash -e SRB_SCENARIO=moon scripts/teleop.py --env solar_panel_assembly

Other Examples

# Scenario: Mars
.docker/run.bash -e SRB_SCENARIO=mars scripts/teleop.py --env solar_panel_assembly

# Scenario: Orbit
.docker/run.bash -e SRB_SCENARIO=orbit scripts/teleop.py --env solar_panel_assembly

Demos

Gateway

.docker/run.bash -e SRB_SCENARIO=orbit scripts/teleop.py --env gateway

Integrations

The Space Robotics Bench features a number of integrations that simplify common workflows. This section provides an overview of the design with further references for the specific instructions.

Integration with ROS 2

Take a look at ROS 2 Workflow if you are directly interested in instructions with concrete examples.

Motivation

ROS has become the de facto standard for developing robotic systems across various environments, including outer space, with the advent of ROS 2. The Space Robotics Bench integrates seamlessly with the ROS 2 ecosystem, facilitating the exposure of relevant simulation data to ROS nodes. This integration aims to accelerate the iterative development and testing of space robotic systems.

Approach

Isaac Sim’s computational graph is primarily offloaded to the system’s dedicated NVIDIA GPU, which presents challenges in directly exposing all internal states to the ROS middleware without compromising performance. Instead, the package focuses on exposing the inputs and outputs of each registered Gymnasium environment alongside a fixed global mapping configuration to maintain modularity and flexibility within the simulation architecture.

Workflow

The Space Robotics Bench provides a ros2.py script that spawns a ROS node to interface with the environments. Subscribers, publishers, and services are dynamically created based on the selected environment and global mapping configuration. When running multiple environment instances in parallel, the script automatically assigns different namespaces to inputs and outputs, preventing conflicts. The script also includes additional functionalities, such as simulation reset capabilities.

System Requirements

This project requires a dedicated NVIDIA GPU with RT Cores (RTX series). Isaac Sim does not support GPUs from other vendors.

Hardware Requirements

The hardware requirements for running this simulation are inherited from the Isaac Sim requirements. While it is possible to run the simulation on lower-spec systems than those recommended, performance will be significantly reduced.

Component     Minimum Requirement
Architecture  x86_64
CPU           Any smart silicon-rich rock
RAM           16 GB
GPU           NVIDIA GPU with RT Cores (RTX series)
VRAM          4 GB
Disk Space    30 GB
Network       12 GB (for pulling Docker images)

Software Requirements

The following software requirements are essential for running the simulation. Other operating systems may work, but significant adjustments may be required.

Component      Requirement
OS             Linux-based distribution (e.g., Ubuntu 22.04/24.04)
NVIDIA Driver  535.183.01 (tested; other versions may work)

Additional Docker Requirements

The current setup requires the X11 window manager to enable running the GUI from within the Docker container. Other window managers may work, but significant adjustments may be required.

Component       Requirement
Window Manager  X11

Installation

Currently, only the Docker-based setup is fully supported and tested.

Local installation should be as straightforward as following the Dockerfile instructions. However, it is yet to be explored!

Installation (Docker)

This section provides instructions for running the simulation within a Docker container. Before proceeding, ensure that your system meets the system requirements. If you are using a different operating system, you may need to adjust the following steps accordingly or refer to the official documentation for each step.

1. Install Docker Engine

First, install Docker Engine by following the official installation instructions. For example:

curl -fsSL https://get.docker.com | sh
sudo systemctl enable --now docker

sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker

2. Install NVIDIA Container Toolkit

Next, install the NVIDIA Container Toolkit, which is required to enable GPU support for Docker containers. Follow the official installation guide or use the following commands:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

3. Gain Access to the Isaac Sim Docker Image

To run the simulation, you need access to the Isaac Sim Docker image, which requires registering an account and generating an API key from NVIDIA GPU Cloud (NGC).

3.1 Register and Log In to NVIDIA GPU Cloud (NGC)

Visit the NGC portal and register or log in to your account.

3.2 Generate Your NGC API Key

Follow the official guide to generate your personal NGC API key.

3.3 Log In to NGC via Docker

Once you have your API key, log in to NGC through Docker:

docker login nvcr.io

When prompted for a username, enter $oauthtoken (exactly as shown):

Username: $oauthtoken

When prompted for a password, use the API key you just generated:

Password: <NGC API Key>

4. Clone the Repository

Next, clone the space_robotics_bench repository locally. Make sure to include the --recurse-submodules flag to also clone the submodule containing the simulation assets.

git clone --recurse-submodules https://github.com/AndrejOrsula/space_robotics_bench.git
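
If you already cloned the repository without the --recurse-submodules flag, you can fetch the assets submodule afterwards with standard git commands:

git -C space_robotics_bench submodule update --init --recursive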

5. Build the Docker Image

Now, you can build the Docker image for space_robotics_bench by running the provided .docker/build.bash script. Note that the first build process may take up to 30 minutes (depending on your network speed and system configuration).

space_robotics_bench/.docker/build.bash

6. Verify the Image Build

To ensure that the image was built successfully, run the following command. You should see the space_robotics_bench image listed among recently created Docker images.

docker images

Basic Usage

After successful installation, you are ready to use the Space Robotics Bench. This page will guide you through controlling robots in various scenarios using simple teleoperation.

When using the Docker setup, it is strongly recommended that you always use the provided .docker/run.bash script. It configures the environment automatically and mounts caching volumes. You can optionally provide a command that will be executed immediately inside the container; if no command is specified, you will be dropped into an interactive shell. Throughout this documentation, commands shown without the .docker/run.bash prefix assume that you are already inside the Docker container or using a local installation.
# cd space_robotics_bench
.docker/run.bash ${OPTIONAL_CMD}

Verify the Functionality of Isaac Sim

Let's start by verifying that Isaac Sim is functioning correctly:

The first time Isaac Sim starts, it may take a few minutes to compile shaders. Subsequent runs will use cached artefacts, which significantly speeds up startup.
# Single quotes are required for the tilde (~) to expand correctly inside the container.
.docker/run.bash '~/isaac-sim/isaac-sim.sh'

If any issues arise, consult the Troubleshooting section or the official Isaac Sim documentation, as this issue is likely unrelated to this project.

Journey into the Unknown

Once Isaac Sim is confirmed to be working, you can begin exploring the demos and tasks included with the environments. Let's start with a simple teleoperation example with the teleop.py script:

# Option 1: Using the script directly
.docker/run.bash scripts/teleop.py --env perseverance
# Option 2: Using ROS 2 package installation
.docker/run.bash ros2 run space_robotics_bench teleop.py --env perseverance

After a few moments, Isaac Sim should appear. The window will briefly remain inactive while the assets are procedurally generated in the background. The generation time depends on the complexity of the assets and your hardware, particularly the GPU, which is used to bake PBR textures. Future runs will use cached assets as long as the configuration remains unchanged and the cache is not cleared (see Clean the Assets Cache).

Eventually, you will be greeted by the Mars Perseverance Rover on a procedurally generated Martian landscape.

At the same time, the terminal will display the following keyboard scheme:

+------------------------------------------------+
|  Keyboard Scheme (focus the Isaac Sim window)  |
+------------------------------------------------+
+------------------------------------------------+
| Reset: [ L ]                                   |
+------------------------------------------------+
| Planar Motion                                  |
|                     [ W ] (+X)                 |
|                       ↑                        |
|                       |                        |
|          (-Y) [ A ] ← + → [ D ] (+Y)           |
|                       |                        |
|                       ↓                        |
|                     [ S ] (-X)                 |
+------------------------------------------------+

While the Isaac Sim window is in focus, you can control the rover using the W, A, S, and D keys for motion. Use your mouse to navigate the camera. If the rover gets stuck, pressing L will reset its position.

To close the demo, press Ctrl+C in the terminal. This will gracefully shut down the demo, close Isaac Sim, and return you to your host environment.

Blurry Textures?

By default, the textures in the environment might appear blurry because the baked texture resolution is scaled by the detail parameter (default: 0.5, i.e., 50%). This default allows procedural generation to be faster on low-end hardware. If your hardware is capable, you can increase the resolution by adjusting the detail parameter (see Benchmark Configuration):

.docker/run.bash -e SRB_DETAIL=1.0 scripts/teleop.py --env perseverance

Explore Unknown Domains

You can explore other environments by using the --env, --task, or --demo arguments interchangeably. A full list of available environments is documented in the Environment Overview, or you can conveniently list them using this command:

.docker/run.bash scripts/list_envs.py

Use this example as a gateway into exploring further on your own:

.docker/run.bash scripts/teleop.py --env perseverance

Instructions for the Benchmark

This section covers instructions for specific aspects of the Space Robotics Bench.

Simulating Parallel Environments

The --num_envs Argument

All Python entrypoint scripts, e.g. teleop.py, random_agent.py and ros2.py, accept an optional --num_envs argument. By default, this is set to 1, but you can specify more environments for parallel execution. For example, to run four environments, use the following command:

.docker/run.bash scripts/teleop.py --task sample_collection --num_envs 4

Each environment will generate its own procedural assets, providing unique experiences across different simulations. However, note that the time taken to generate these assets scales linearly with the number of environments. These assets will be cached for future runs unless the cache is cleared (explained later in this document).

After the environments are initialized, they can be controlled in sync using the same keyboard scheme displayed in the terminal.

Random and Zero Agents

Instead of manually controlling each environment via teleop.py, you can use random and zero agents to test and debug certain functionalities.

Random Agent

The random_agent.py script allows environments to act based on random actions sampled from the action space. This is particularly useful for verifying if environments are running as intended without manual control:

.docker/run.bash scripts/random_agent.py --task sample_collection --num_envs 4

Zero Agent

Alternatively, zero_agent.py executes environments where all actions are zero-valued, mimicking a steady-state system. This can be useful for analyzing the idle behaviour of environments:

.docker/run.bash scripts/zero_agent.py --task sample_collection --num_envs 4

Enabling Visual Observations

The *_visual Environment Variant

All environments have a *_visual variant that differs in the enabled sensors, namely cameras that provide visual observations at the cost of increased computational requirements.

.docker/run.bash scripts/teleop.py --task sample_collection_visual

Graphical User Interface (GUI)

The Space Robotics Bench comes with a simple GUI application that offers a more approachable demonstration of its capabilities than the pure CLI. The GUI is built on top of egui and leverages the r2r ROS 2 Rust bindings to communicate with the rest of the benchmark.

Usage

To run the GUI application, you can use the included gui.bash script, which internally calls a variant of the cargo run -p space_robotics_bench_gui command.

.docker/run.bash scripts/gui.bash

The GUI runs the teleop.py script for the selected environments, but the idea is to eventually support multiple workflows. Nine pre-configured tasks/demos are available in the Quick Start window, and a specific scenario can also be defined through the advanced configuration.

Benchmark Configuration

Environment Configuration

The environments can be configured in two ways:

  1. Modifying the env.yaml file.
  2. Using environment variables.

The default configuration file contains various settings that control the seed, scenario, level of detail, and options for assets (robot, object, terrain, vehicle).

seed: 42 # SRB_SEED [int]
scenario: mars # SRB_SCENARIO [mars, moon, orbit]
detail: 0.5 # SRB_DETAIL [float]
assets:
  robot:
    variant: dataset # SRB_ASSETS_ROBOT_VARIANT [dataset]
  object:
    variant: procedural # SRB_ASSETS_OBJECT_VARIANT [primitive, dataset, procedural]
  terrain:
    variant: procedural # SRB_ASSETS_TERRAIN_VARIANT [none, primitive, dataset, procedural]
  vehicle:
    variant: dataset # SRB_ASSETS_VEHICLE_VARIANT [none, dataset]

Setting Configuration via Environment Variables

Values from the configuration file can be overridden using environment variables. Furthermore, you can directly pass them into the .docker/run.bash script. For instance:

.docker/run.bash -e SRB_DETAIL=1.0 -e SRB_SCENARIO=moon ...

CLI Arguments

The following arguments are common across all entrypoint scripts, e.g., teleop.py, random_agent.py, and ros2.py (a combined example follows the list):

  • -h, --help: Display the help message and exit.
  • --task TASK, --demo TASK, --env TASK: Specify the name of the task or environment. You can list available tasks using list_envs.py.
  • --num_envs NUM_ENVS: Number of parallel environments to simulate.
  • --disable_ui: Disable the majority of the Isaac Sim UI.
  • --headless: Force the display to remain off, making the simulation headless.
  • --device DEVICE: Set the device for simulation (e.g., "cpu", "cuda", or "cuda:N" where N is the device ID).
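
For instance, these arguments can be combined to evaluate a random agent across several parallel environments without the GUI on a specific GPU (the particular task and values below are arbitrary illustrations, not a prescribed configuration):

.docker/run.bash scripts/random_agent.py --task sample_collection --num_envs 4 --headless --device cuda:0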

Additional Environment Variables

  • SRB_SKIP_REGISTRATION (default: false): When set to "true"|1, automatic registration of environments with the Gymnasium registry is disabled. This can be useful in specific deployment or testing scenarios.
  • SRB_SKIP_EXT_MOD_UPDATE (default: false): When set to "true"|1, the Rust extension module will not be automatically recompiled on startup of the Python entrypoint scripts. By default, recompilation ensures that the extension module is always up to date with the source code. Skipping this step can slightly reduce startup time when the extension module never changes.
  • SRB_WITH_TRACEBACK (default: false): When set to "true"|1, rich traceback information is displayed for exceptions. This can be useful for debugging.
    • SRB_WITH_TRACEBACK_LOCALS (default: false): When set to "true"|1 and SRB_WITH_TRACEBACK is enabled, local variables are included in the traceback information. This can be useful for debugging, but it can also be overwhelming in some cases.
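
These variables can be passed to the .docker/run.bash script in the same way as the configuration overrides. As an illustrative sketch (the chosen script and task are arbitrary examples):

.docker/run.bash -e SRB_WITH_TRACEBACK=true -e SRB_WITH_TRACEBACK_LOCALS=true scripts/random_agent.py --task sample_collection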

Instructions for Integrated Workflows

This section covers instructions for common workflows that are integrated directly into the Space Robotics Bench.

ROS 2 Workflow

Environments of the Space Robotics Bench can be integrated with ROS 2 to enable control of the robots and data collection over the middleware communication.

Single Environment

The ros2.py script is the primary entry point for interfacing with the environments through ROS 2. This script spawns a single ROS node that maps inputs and outputs for the environment and provides miscellaneous functionalities such as resetting the simulation. Here is an example using the Ingenuity demo:

.docker/run.bash ros2 run space_robotics_bench ros2.py --env ingenuity

Once the environment is initialized, open a new terminal to inspect the available ROS topics. You can either use your ROS setup or join the running Docker container with the .docker/join.bash script:

.docker/join.bash

Now, list the available ROS topics:

ros2 topic list
# Expected output:
# /clock
# /env/info
# /env/reward
# /env/terminated
# /env/truncated
# /parameter_events
# /robot/cmd_vel
# /rosout
# /tf

To control the robot, publish a Twist message to the /robot/cmd_vel topic:

ros2 topic pub --once /robot/cmd_vel geometry_msgs/msg/Twist '{linear: {x: 1.0}}'
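
To observe the feedback from the environment, you can echo any of the listed topics, e.g., the reward signal:

ros2 topic echo /env/reward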

You can reset the simulation by calling the /sim/reset service:

ros2 service call /sim/reset std_srvs/srv/Empty

Parallel Environments

You can run multiple environments in parallel by using the --num_envs argument. Each environment will map to its own ROS namespace. For example, try running the Ingenuity demo with 4 environments:

.docker/run.bash ros2 run space_robotics_bench ros2.py --env ingenuity --num_envs 4

List the available ROS topics again:

ros2 topic list
# Expected output:
# /clock
# /env0/reward
# /env0/robot/cmd_vel
# /env0/terminated
# /env0/truncated
# /env1/reward
# /env1/robot/cmd_vel
# /env1/terminated
# /env1/truncated
# /env2/reward
# /env2/robot/cmd_vel
# /env2/terminated
# /env2/truncated
# /env3/reward
# /env3/robot/cmd_vel
# /env3/terminated
# /env3/truncated
# /envs/info
# /envs/robot/cmd_vel
# /parameter_events
# /rosout
# /tf

Each environment has its own namespace, allowing individual control. For example:

ros2 topic pub --once /env0/robot/cmd_vel geometry_msgs/msg/Twist '{linear: {x: -1.0}}'
ros2 topic pub --once /env1/robot/cmd_vel geometry_msgs/msg/Twist '{linear: {x: 1.0}}'
ros2 topic pub --once /env2/robot/cmd_vel geometry_msgs/msg/Twist '{linear: {y: -1.0}}'
ros2 topic pub --once /env3/robot/cmd_vel geometry_msgs/msg/Twist '{linear: {y: 1.0}}'

Launch with rviz2 and teleop_twist_keyboard

For convenience, you can launch rviz2 alongside ros2.py and teleop_twist_keyboard for visualization and control via keyboard:

.docker/run.bash ros2 launch space_robotics_bench demo.launch.py task:=ingenuity_visual num_envs:=4

Utilities

This section covers instructions for certain utilities of the Space Robotics Bench.

Clean the Assets Cache

After running several demos or simulations, the procedurally generated assets (such as textures and meshes) can accumulate in the cache. To free up disk space, you can clean this cache:

.docker/run.bash scripts/utils/clean_procgen_cache.py

Development Environment

Whether you are contributing to the benchmark or using the setup in your own projects, this section aims to improve your development experience.

Development inside Docker

The Space Robotics Bench supports a Docker setup, which itself provides an isolated development environment. By default, the .docker/run.bash script already mounts the source code into the container (this can be disabled with WITH_DEV_VOLUME=false), which makes the standalone Docker setup quite convenient for development.
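
As a minimal sketch, assuming WITH_DEV_VOLUME is read from the environment by the script, the source-code mount could be disabled as follows:

# Assumption: run.bash reads WITH_DEV_VOLUME from the environment
WITH_DEV_VOLUME=false .docker/run.bash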

Join a Running Container

Once the Docker container is running, you can join it with the .docker/join.bash script:

.docker/join.bash

Dev Container

Dev Containers allow for a fully isolated development environment tailored to specific project needs. This is particularly useful for ensuring all dependencies are installed and consistent across different development machines.

Open the Benchmark in a Dev Container

To simplify the process of building and opening the repository as a Dev Container in Visual Studio Code (VS Code), you can run the .devcontainer/open.bash script, which automates the process.

.devcontainer/open.bash

Modify the Dev Container

You can also customize the included .devcontainer/devcontainer.json configuration to suit your specific development requirements.

New Assets

All assets used by the Space Robotics Bench are separated into the srb_assets repository to encourage their reuse. Both static assets (datasets) and procedural pipelines in Blender are supported.

Procedural Assets with Blender

Motivation

Procedural generation is a powerful technique for creating diverse and realistic environments without relying on static, disk-consuming datasets. This approach allows for the generation of an infinite number of unique environments, a feature that has been underutilized in the fields of robotics and space exploration. The Space Robotics Bench seeks to address this gap by offering a versatile framework for procedurally generating 3D assets, which can be combined to create complex environments suitable for the development, training, and validation of space robotic systems.

Approach

The package utilizes Blender to procedurally generate both the geometry and materials (PBR textures) of 3D assets.

Geometry generation is achieved using Blender's Geometry Nodes, a robust node-based system that allows for the creation, manipulation, and modification of arbitrary geometry and data types. First introduced in Blender 2.92 (2021), Geometry Nodes have evolved significantly, supporting the creation of intricate geometries through a series of interconnected node trees. Each node system can consist of multiple node trees that handle different aspects of the geometry. By applying randomness and variation within these node trees, a wide range of unique assets can be produced simply by adjusting the seed value.

Blender's Shader Nodes, which have a longer history, are used to define the appearance of objects through material properties. Like Geometry Nodes, Shader Nodes are also node-based and allow for the creation of complex materials. Blender provides several procedural textures and maps (e.g., Perlin noise, Voronoi, Wave), which can be adjusted and combined to form more sophisticated materials. By integrating randomness into the shader nodes, each procedurally generated asset can have a unique appearance, even with the same underlying geometry.

Workflow

The srb_assets repository includes a procgen_assets.py script that automates the entire procedural generation process, including node construction, modifier application, seeding, texture baking, and model export. This script is fully standalone and interacts with Blender's Python API (bpy) through its binary executable. Although Blender can be used as a Python module via bpy, it is often linked to a specific Python version and has longer release cycles. The standalone script offers more flexibility, allowing it to be used with any Blender version.

Node trees can be generated from Python source files provided as input to the script. The Node To Python addon simplifies the creation of such source code. This addon enables users to design node trees in Blender's graphical interface and convert them into Python code that can be integrated into the procgen_assets.py script. This method allows users to prototype assets interactively within Blender's GUI and then export them into code.

New Environments

The process of introducing a new environment into the Space Robotics Bench is intended to be modular.

1. Duplicate an Existing Environment

Navigate to the tasks directory, which houses the existing environments. Then, duplicate the existing demo or task directory that most closely resembles your desired task/demo, and rename it to the name of your new environment.

2. Modify the Environment Configuration

Customize your new environment by altering the configuration files and task implementation code within the folder. This may include asset selection, interaction rules, or specific environmental dynamics.

3. Automatic Registration

The new environment will be automatically registered with the Gymnasium API. The environment will be registered under the directory name you assigned during the duplication process.

4. Running Your New Environment

Test your new environment by specifying its name via the --env/--task/--demo argument.
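
The steps above can be summarized by the following hypothetical sketch (the exact paths and the chosen template task are assumptions that may differ in your checkout):

# 1. Duplicate an existing environment under a new name (paths are assumptions)
cp -r tasks/sample_collection tasks/my_new_task
# 2. Adjust the configuration and task implementation inside tasks/my_new_task
# 3. The environment is registered automatically under its directory name, so it can be run directly:
.docker/run.bash scripts/teleop.py --env my_new_task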

Troubleshooting

Runtime Errors

Driver Incompatibility

If you encounter the following error message:

[Error] [carb.graphics-vulkan.plugin] VkResult: ERROR_INCOMPATIBLE_DRIVER

This indicates that your NVIDIA driver is incompatible with Omniverse. To resolve the issue, update your NVIDIA driver according to the Isaac Sim driver requirements.

Unexpected Behavior

Teleoperation Stuck

During teleoperation, if you change your window focus, Omniverse may fail to register a button release, causing the robot to continuously move in one direction. To fix this, press the L key to reset the environment.

Attributions

All modifications to the listed assets, unless stated otherwise, involve non-destructive transformations, mesh simplification, conversion to the Universal Scene Description (USD) format, rigging, and application of USD APIs for integration purposes.

  1. Mars Perseverance Rover, 3D Model by NASA.
  2. Mars Ingenuity Helicopter, 3D Model by NASA.
  3. Mars 2020 Sample Tube, 3D Model by NASA. This mesh was modified to include a cap, and additional materials were added.
  4. Gateway Core, 3D Model by NASA. The model was separated into individual assets: Canadarm3 (small and large) and the Gateway Core itself.
  5. Low Lunar Orbit, HDR Image by NASA. This image was rotated by 90 degrees to better align with the implemented environment.
  6. Lunar Rover from the Movie Moon, 3D Model by Watndit, licensed under Creative Commons Attribution. The original model was heavily remodelled and retextured, with significant parts of the geometry removed. The wheels were replaced with a model from NASA's Curiosity Rover, 3D Model.

Contributors