## Setup
There are two setup approaches: docker-based and conda-based. We recommend the docker-based approach, as it packages everything together and is easier to use.
### Docker-based
**Prerequisite.** We recommend Ubuntu 18.04/20.04 and NVIDIA driver version 525.60.11. Other driver versions (e.g., 470.141.03, 535.113.01) are probably compatible but not guaranteed.
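You can check your installed driver before starting. The snippet below is a small sketch (the helper names are ours, not part of the repo): it queries the driver version via `nvidia-smi` and compares the major number against the oldest series mentioned above (470).

```shell
# Sketch only: query the installed NVIDIA driver version.
driver_version() {
  nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null | head -n 1
}

# Succeed if the major part of a version string like "525.60.11" is >= 470,
# the oldest series reported to work with this setup.
driver_ok() {
  major="${1%%.*}"
  [ "$major" -ge 470 ]
}

# usage: driver_ok "$(driver_version)" && echo "driver looks OK"
```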
1. Generate an NVIDIA NGC API key.
   - Log in to NVIDIA NGC. If you do not have an account, register one and log in.
   - Generate your NGC API key. You can refer to Generating API key.
2. Log into the NGC account on the instance:

   ```
   docker login nvcr.io
   ```

   Type `$oauthtoken` for `Username`, then paste your API key for `Password`. You should see `Login Succeeded`.

3. Make sure the NVIDIA Container Toolkit is properly installed. Check the Installation guide.
4. Build the docker image inside the workspace:

   ```
   git clone https://github.com/arnold-benchmark/arnold.git
   cd arnold/workspace
   docker build -f Dockerfile -t "arnold" .
   ```
5. Build vagrant in the workspace. This might take a long time if you are pulling the image from Docker Hub instead of building it locally. You can use a system monitor to keep an eye on the progress.
   It is also possible that the `Vagrantfile` contains wrong paths for your `nvidia_icd.json` and `nvidia_layers.json`. Make sure they are not empty. For example, you should check the two paths `/etc/vulkan/icd.d/nvidia_icd.json` and `/usr/share/vulkan/icd.d/nvidia_icd.json`, one of which should always exist.

   ```
   # if the jsons exist in /etc (default)
   '-v', '/etc/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json',
   '-v', '/etc/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json',

   # if the jsons exist in /usr, modify the two lines in the Vagrantfile as follows
   '-v', '/usr/share/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json',
   '-v', '/usr/share/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json',
   ```

   Check the above paths according to your system. After that, build vagrant:

   ```
   vagrant up
   ```
6. After `vagrant up` finishes, run `vagrant ssh` and you are ready to go. Enjoy the full GUI experience with docker and Isaac Sim. The docker environment also provides a wide range of development tools; for more details, check the readme. Notably, you need to use `/isaac-sim/python.sh` to run `python`.
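The Vulkan `nvidia_icd.json` / `nvidia_layers.json` check above can also be scripted. This is a sketch (the helper name is ours, not from the repo): it prints the first candidate path that exists and is non-empty, telling you which pair of `'-v'` lines the `Vagrantfile` should keep.

```shell
# Sketch only: pick the first existing, non-empty file from a list of candidates.
pick_json() {
  for p in "$@"; do
    if [ -s "$p" ]; then   # -s: file exists and has size > 0
      printf '%s\n' "$p"
      return 0
    fi
  done
  return 1                 # none of the candidates exist
}

# usage:
# pick_json /etc/vulkan/icd.d/nvidia_icd.json /usr/share/vulkan/icd.d/nvidia_icd.json
```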
### Conda-based
As a backup solution, we also introduce a conda-based setup.
1. Manually download NVIDIA Omniverse and install it.

2. Open the `NVIDIA Omniverse` platform and install `Isaac Sim 2022.1.1` (other versions are not guaranteed to work) from the `Library`.

3. Clone the code repo:

   ```
   git clone git@github.com:arnold-benchmark/arnold.git
   cd arnold
   ```
4. Create a conda environment:

   ```
   conda env create -f conda_env.yaml
   conda activate arnold
   ```
5. Install `clip`:

   ```
   pip install git+https://github.com/openai/CLIP.git
   ```
6. Install the point cloud engine:

   ```
   cd utils
   python setup.py build_ext --inplace
   cd ..
   ```
7. Link the libraries and toolkits of `Isaac Sim`:

   ```
   source ${Isaac_Sim_Root}/setup_conda_env.sh
   # e.g., source ~/.local/share/ov/pkg/isaac_sim-2022.1.1/setup_conda_env.sh
   ```
8. You are ready to run scripts. In the activated conda environment, you can use `python` directly, in contrast to `/isaac-sim/python.sh` in docker.
### Common
Data, assets, and checkpoints are common to both setup approaches.
1. Download the data and assets. If you are docker-based, put them in the `vagrant` workspace, e.g., `workspace/data/pour_water`, `workspace/materials`, `workspace/sample`. If you are conda-based, make sure `materials` and `sample` are in the same folder. After preparation, check `data_root` and `asset_root` in `configs/default.yaml` to ensure the paths are valid. For example, `data_root/pour_water`, `asset_root/materials`, and `asset_root/sample` should be valid paths.

2. (Optional) You can download pre-trained model checkpoints from here. Considering the performance, we provide two checkpoints of multi-task PerAct, with and without an additional state head, respectively. To evaluate a checkpoint, put it in the directory `${output_root}/${task}/train_${model}_${lang_encoder}_${state_head}`. For example, the checkpoint `peract_multi_clip_best.pth` should be put in `${output_root}/multi/train_peract_clip_0`. For the model with an additional state head, the last number (`state_head`) should be `1`.
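The checkpoint naming convention above can be captured in a one-line helper. This is our own sketch (not part of the repo); it only assembles the directory path from the convention `${output_root}/${task}/train_${model}_${lang_encoder}_${state_head}`.

```shell
# Sketch only: build the expected checkpoint directory from its parts.
# args: output_root task model lang_encoder state_head
ckpt_dir() {
  echo "${1}/${2}/train_${3}_${4}_${5}"
}

# e.g., the multi-task PerAct checkpoint without a state head:
# ckpt_dir outputs multi peract clip 0   ->  outputs/multi/train_peract_clip_0
```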
## Quickstart
### Sanity Check
After setup, you can run a toy example to check if Isaac Sim is working:
For the docker-based setup:

```
cd workspace
vagrant ssh
/isaac-sim/python.sh /isaac-sim/standalone_examples/api/omni.isaac.franka/pick_place.py
```
For the conda-based setup:

```
conda activate arnold
source ${Isaac_Sim_Root}/setup_conda_env.sh
# e.g., source ~/.local/share/ov/pkg/isaac_sim-2022.1.1/setup_conda_env.sh
python ${Isaac_Sim_Root}/standalone_examples/api/omni.isaac.franka/pick_place.py
# e.g., python ~/.local/share/ov/pkg/isaac_sim-2022.1.1/standalone_examples/api/omni.isaac.franka/pick_place.py
```
The first launch of Isaac Sim may be slow because of shader compilation.
### Visualization
You can replay the demonstrations and visualize them by running:
```
# docker-based
/isaac-sim/python.sh eval.py task=${TASK_NAME} mode=eval use_gt=[1,1] visualize=1

# conda-based
python eval.py task=${TASK_NAME} mode=eval use_gt=[1,1] visualize=1
```
With `use_gt=[1,1]`, running this does not require a pre-trained model checkpoint.
### Training
For example, train a single-task PerAct on `PickupObject`:

```
# docker-based
/isaac-sim/python.sh train_peract.py task=pickup_object model=peract lang_encoder=clip mode=train batch_size=8 steps=100000

# conda-based
python train_peract.py task=pickup_object model=peract lang_encoder=clip mode=train batch_size=8 steps=100000
```
Train a multi-task PerAct:

```
# docker-based
/isaac-sim/python.sh train_peract.py task=multi model=peract lang_encoder=clip mode=train batch_size=8 steps=200000

# conda-based
python train_peract.py task=multi model=peract lang_encoder=clip mode=train batch_size=8 steps=200000
```
For more details, see Train.
### Evaluation
Here we only show conda-based commands; for the docker-based setup, substitute `python` with `/isaac-sim/python.sh`.
```
# checkpoint selection
python ckpt_selection.py task=${TASK_NAME} model=peract lang_encoder=clip mode=eval visualize=0

# evaluation
python eval.py task=${TASK_NAME} model=peract lang_encoder=clip mode=eval visualize=0
```
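If you switch between the two setups often, one option (our own convention, not something the repo provides) is a tiny wrapper that picks the interpreter from an environment variable, so each command is written only once.

```shell
# Sketch only: run a script with whichever interpreter ARNOLD_PYTHON names,
# falling back to plain `python` when the variable is unset (conda setup).
pyrun() {
  "${ARNOLD_PYTHON:-python}" "$@"
}

# inside docker:  export ARNOLD_PYTHON=/isaac-sim/python.sh
# in conda:       leave ARNOLD_PYTHON unset
# then, e.g.:     pyrun eval.py task=${TASK_NAME} mode=eval visualize=0
```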
For more details, see Eval.