# HUGSIM
## Basic Information
- **Project Name**: HUGSIM
- **License**: MIT
- **Default Branch**: main
## README
> Hongyu Zhou¹, Longzhong Lin¹, Jiabao Wang¹, Yichong Lu¹, Dongfeng Bai², Bingbing Liu², Yue Wang¹, Andreas Geiger³,⁴, Yiyi Liao¹†
>
> ¹ Zhejiang University &nbsp; ² Huawei &nbsp; ³ University of Tübingen &nbsp; ⁴ Tübingen AI Center
>
> † Corresponding Author
This is the official project repository of the paper *HUGSIM: A Real-Time, Photo-Realistic and Closed-Loop Simulator for Autonomous Driving*.
---
# Installation
First, install [pixi](https://pixi.sh/latest/):
``` bash
curl -fsSL https://pixi.sh/install.sh | sh
```
The repository depends on some packages that can only be built from source and require PyTorch and CUDA to compile, so setting up the pixi environment takes **three steps**:
1. Comment out the packages below `# install from source code` in `pixi.toml`, then run `pixi install` to install the remaining packages from PyPI.
2. Uncomment the packages from the previous step, then run `pixi install` again to build them from source.
3. Install apex (required by InverseForm) by running `pixi run install-apex`.
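The three steps above amount to the following shell session (a sketch; it assumes you comment and uncomment `pixi.toml` by hand between the two installs):

```shell
# Pass 1: with the source-built packages commented out in pixi.toml,
# install everything that is available from PyPI.
pixi install

# Pass 2: uncomment the source-built packages in pixi.toml, then run the
# install again so they compile against the PyTorch/CUDA from pass 1.
pixi install

# Finally, build apex (required by InverseForm).
pixi run install-apex
```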
Enter the **pixi environment** with `pixi shell`, or use `pixi run <command>` to run a single command inside it.
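For example (the `train.py` invocation below is just an illustration):

```shell
# Drop into an interactive shell inside the pixi environment:
pixi shell

# ...or run a single command inside the environment without entering it:
pixi run python train.py --help
```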
# Data Preparation
Please refer to the [Data Preparation Document](data/README.md).
You can download sample data from [here](https://huggingface.co/datasets/hyzhou404/HUGSIM/tree/main/sample_data).
# Reconstruction
``` bash
seq=${seq_name}
input_path=${datadir}/${seq}
output_path=${modeldir}/${seq}
mkdir -p ${output_path}

# dataset_name is one of: kitti360, waymo, nusc, pandaset
# First fit the ground model, then train the full scene.
CUDA_VISIBLE_DEVICES=4 \
python -u train_ground.py --data_cfg ./configs/${dataset_name}.yaml \
    --source_path ${input_path} --model_path ${output_path}
CUDA_VISIBLE_DEVICES=4 \
python -u train.py --data_cfg ./configs/${dataset_name}.yaml \
    --source_path ${input_path} --model_path ${output_path}
```
# Scene Export
The reconstructed scene folders contain information that is not used during simulation. Scenes can be exported in a minimized format to make them easier to share and simulate.
```bash
python eval_render/export_scene.py --model_path ${recon_scene_path} --output_path ${export_path} --iteration 30000
```
We've made some changes to the scene capturing and reloading code. To convert scenes produced by a previous version of the code (before commit 1ca821a8), add `--ver0` to the command above.
# Vehicles, Scenes and Scenarios
We have released all 3DRealCar files, along with the complete set of scenes and scenarios, at [release link](https://huggingface.co/datasets/XDimLab/HUGSIM).
We are also holding a competition at [RealADSim @ ICCV 2025](https://huggingface.co/spaces/XDimLab/ICCV2025-RealADSim-ClosedLoop), so some scenes and scenarios are hosted privately. We welcome participants to join!
# Scenarios configuration with GUI
**Note that this GUI is only for configuring scenarios, not for simulation. The rendering quality in the GUI does not reflect the results obtained during simulation.**
First, convert the vehicles and scenes to splat and semantic formats.
``` bash
python eval_render/convert_vehicles.py --vehicle_path ${PATH_3DRealCar}
python eval_render/convert_scene.py --model_path ${PATH_Scene}
```
Then, you can run the GUI to configure the scenario.
**nuscenes_camera.yaml** in `gui/static/data` provides a camera configuration template; modify it to fit your needs.
``` bash
cd gui
python app.py --scene ${PATH_Scene} --car_folder ${PATH_3DRealCar/converted}
```
Configure the scenario in the GUI, then download the YAML file to use in simulation.
Here is a demonstration video of the GUI: [GUI Video](https://github.com/hyzhou404/HUGSIM/blob/main/assets/hugsim_gui.mp4)
# Simulation
**Before simulation, the [UniAD_SIM](https://github.com/hyzhou404/UniAD_SIM), [VAD_SIM](https://github.com/hyzhou404/VAD_SIM) and [NAVSIM](https://github.com/hyzhou404/NAVSIM) clients should be installed. The client environments may be kept separate from the HUGSIM environment.**
The dependencies for NAVSIM are already specified in the pixi environment file, so you don't need to install them manually.
**closed_loop.py** launches the autonomous driving algorithms automatically.
Update the paths in **configs/sim/\*\_base.yaml** to match those on your machine.
``` bash
# method_name is one of: uniad, vad, ltf
CUDA_VISIBLE_DEVICES=${sim_cuda} \
python closed_loop.py --scenario_path ./configs/benchmark/${dataset_name}/${scenario_name}.yaml \
    --base_path ./configs/sim/${dataset_name}_base.yaml \
    --camera_path ./configs/sim/${dataset_name}_camera.yaml \
    --kinematic_path ./configs/sim/kinematic.yaml \
    --ad ${method_name} \
    --ad_cuda ${ad_cuda}
```
For example, to run every scenario in a directory:
```bash
sim_cuda=0
ad_cuda=1
# set this to the scenario directory on your machine
scenario_dir=${SCENARIO_PATH}
for cfg in ${scenario_dir}/*.yaml; do
echo ${cfg}
CUDA_VISIBLE_DEVICES=${sim_cuda} \
python closed_loop.py --scenario_path ${cfg} \
--base_path ./configs/sim/nuscenes_base.yaml \
--camera_path ./configs/sim/nuscenes_camera.yaml \
--kinematic_path ./configs/sim/kinematic.yaml \
--ad uniad \
--ad_cuda ${ad_cuda}
done
```
In practice, you may encounter errors caused by an incorrect environment, wrong paths, etc. For debugging, you can modify the last part of the code as follows:
```python
# process = launch(ad_path, args.ad_cuda, output)
# try:
#     create_gym_env(cfg, output)
#     check_alive(process)
# except Exception as e:
#     print(e)
#     process.kill()

# For debugging, run the environment directly in the current process:
create_gym_env(cfg, output)
```
# TODO list
- [x] Release sample data and results
- [x] Release unicycle model part
- [x] Release GUI
- [x] Release more scenarios
# Citation
If you find our paper and code useful, please cite us via:
```bibtex
@article{zhou2024hugsim,
  title={HUGSIM: A Real-Time, Photo-Realistic and Closed-Loop Simulator for Autonomous Driving},
  author={Zhou, Hongyu and Lin, Longzhong and Wang, Jiabao and Lu, Yichong and Bai, Dongfeng and Liu, Bingbing and Wang, Yue and Geiger, Andreas and Liao, Yiyi},
  journal={arXiv preprint arXiv:2412.01718},
  year={2024}
}
```