# SplattingAvatar
**Repository Path**: raywit_JR/SplattingAvatar
## Basic Information
- **Project Name**: SplattingAvatar
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-11-25
- **Last Updated**: 2024-11-25
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# [CVPR2024] SplattingAvatar: Realistic Real-Time Human Avatars with Mesh-Embedded Gaussian Splatting
## [Paper Arxiv](https://arxiv.org/abs/2403.05087) | [Paper CVF](https://openaccess.thecvf.com/content/CVPR2024/html/Shao_SplattingAvatar_Realistic_Real-Time_Human_Avatars_with_Mesh-Embedded_Gaussian_Splatting_CVPR_2024_paper.html) | [Video Youtube](https://youtu.be/IzC-fLvdntA) | [Project Page](https://initialneil.github.io/SplattingAvatar)
Official Repository for CVPR 2024 paper [*SplattingAvatar: Realistic Real-Time Human Avatars with Mesh-Embedded Gaussian Splatting*](https://cvpr.thecvf.com/Conferences/2024/AcceptedPapers).
### Lifted optimization
The embedding points of the 3D Gaussians on the triangle mesh are updated with the *walking on triangle* scheme.
See the `phongsurface` module, implemented in C++ with pybind11.
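For intuition, the sketch below shows the core of one such update in plain numpy: each Gaussian's embedding is stored as a face id plus barycentric coordinates, and when an optimization step pushes a coordinate negative, the point steps across the opposite edge into the neighbouring face. This is only an illustrative sketch under assumed data structures; the actual implementation is the C++ `phongsurface` module, and all names here are hypothetical.
```
# Illustrative sketch of one "walking on triangles" update (hypothetical names,
# not the repo's C++ phongsurface API).
import numpy as np

def walk_one_step(face_id, bary, faces, face_neighbors):
    """faces[f]: the 3 vertex ids of face f.
    face_neighbors[f][k]: face across the edge opposite vertex k of f (-1 at a boundary)."""
    if np.all(bary >= 0.0):
        return face_id, bary / bary.sum()          # still inside the triangle
    k = int(np.argmin(bary))                       # most violated barycentric coordinate
    nbr = face_neighbors[face_id][k]
    if nbr < 0:                                    # mesh boundary: clamp instead of walking
        bary = np.clip(bary, 0.0, None)
        return face_id, bary / bary.sum()
    # re-express the point in the neighbouring face: keep the weights of the
    # two shared vertices and renormalize
    new_bary = np.zeros(3)
    for v, w in zip(faces[face_id], bary):
        for j, u in enumerate(faces[nbr]):
            if u == v:
                new_bary[j] = max(w, 0.0)
    s = new_bary.sum()
    return nbr, (new_bary / s if s > 0 else np.full(3, 1.0 / 3.0))
```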
## Getting Started
- Create a conda environment with PyTorch.
```
conda create -n splatting python=3.9
conda activate splatting
# pytorch 1.13.1+cu117 is tested
pip install torch==1.13.1 torchvision torchaudio functorch --extra-index-url https://download.pytorch.org/whl/cu117
# pytorch3d
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d
pip install -e .
# install other dependencies
pip install tqdm omegaconf opencv-python libigl
pip install trimesh plyfile imageio chumpy lpips
pip install packaging pybind11
pip install numpy==1.23.1
```
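Optionally, a quick sanity check (not part of the repo) confirms that PyTorch sees the GPU and that pytorch3d imports:
```
# optional environment check (not part of the repo)
import torch, pytorch3d
print(torch.__version__, torch.version.cuda)        # expect 1.13.1 and 11.7
print("CUDA available:", torch.cuda.is_available())
print("pytorch3d:", pytorch3d.__version__)
```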
- Clone this repo *recursively*. Install Gaussian Splatting's submodules.
```
git clone --recursive https://github.com/initialneil/SplattingAvatar.git
cd SplattingAvatar
cd submodules/diff-gaussian-rasterization
pip install .
cd ../simple-knn
pip install .
cd ../..
```
- Install `simple_phongsurf` for *walking on triangles*.
```
cd model/simple_phongsurf
pip install -e .
```
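If the packages above built correctly, they should import cleanly. The module names below are assumptions based on the package names and on how the original 3DGS code uses them:
```
# optional import check for the compiled extensions (module names assumed)
import diff_gaussian_rasterization            # Gaussian Splatting rasterizer
from simple_knn._C import distCUDA2           # as used by the original 3DGS code
import simple_phongsurf                       # walking-on-triangles module (name assumed)
print("compiled extensions import OK")
```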
- Download the [FLAME model](https://flame.is.tue.mpg.de/download.php): choose **FLAME 2020**, unzip it, and copy `generic_model.pkl` into `./model/imavatar/FLAME2020`.
- Download the [SMPL model](https://smpl.is.tue.mpg.de/download.php) (version 1.0.0 for Python 2.7, with 10 shape PCs) and move the files to the corresponding places:
```
mv /path/to/smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl model/smplx_utils/smplx_models/smpl/SMPL_FEMALE.pkl
mv /path/to/smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl model/smplx_utils/smplx_models/smpl/SMPL_MALE.pkl
```
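A small optional snippet (paths taken from the steps above) can verify that the body-model files ended up where the code expects them:
```
# optional check of the FLAME/SMPL file layout (run from the repo root)
from pathlib import Path

expected = [
    "model/imavatar/FLAME2020/generic_model.pkl",
    "model/smplx_utils/smplx_models/smpl/SMPL_FEMALE.pkl",
    "model/smplx_utils/smplx_models/smpl/SMPL_MALE.pkl",
]
for p in expected:
    print(("OK      " if Path(p).is_file() else "MISSING ") + p)
```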
## Preparing dataset
We provide the preprocessed data of the 10 subjects used in the paper.
- Our preprocessing followed [IMavatar](https://github.com/zhengyuf/IMavatar/tree/main/preprocess#preprocess) but replaced the *Segmentation* step with [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting).
- ~~Pre-trained checkpoints are provided together with the data.~~
- Please find the data at https://github.com/Zielon/INSTA. We will update the checkpoint link soon.
## Training
```
python train_splatting_avatar.py --config configs/splatting_avatar.yaml --dat_dir <path/to/subject>
# for example:
python train_splatting_avatar.py --config configs/splatting_avatar.yaml --dat_dir /path-to/bala
# you may specify the GPU id by adding CUDA_VISIBLE_DEVICES=x before calling python:
CUDA_VISIBLE_DEVICES=0 python train_splatting_avatar.py ...
# to disable network_gui, set ip to 'none'
CUDA_VISIBLE_DEVICES=0 python train_splatting_avatar.py ... --ip none
# use SIBR_remoteGaussian_app.exe from 3DGS to watch the training
SIBR_remoteGaussian_app.exe --path <path_to_a_standard_3dgs_output>
# <path_to_a_standard_3dgs_output> is generated by running the original 3dgs on any of its original datasets
# SIBR_remoteGaussian_app.exe somehow requires a standard 3dgs output to start
# it is recommended to change "FPS" to "Trackball" in the viewer
# you don't need to change the "path" every time
```
## Evaluation
```
python eval_splatting_avatar.py --config configs/splatting_avatar.yaml --dat_dir <path/to/output-splatting/last_checkpoint>
# for example:
python eval_splatting_avatar.py --config configs/splatting_avatar.yaml --dat_dir /path-to/bala/output-splatting/last_checkpoint
```
## Full-body Avatar
We conducted experiments on [PeopleSnapshot](https://graphics.tu-bs.de/people-snapshot).
- Please download the parameter files (the same as InstantAvatar's) from: [Baidu Disk](https://pan.baidu.com/s/1g4lSPAYfwbOadnnEDoWjzg?pwd=5gy5) or [Google Drive](https://drive.google.com/drive/folders/1r-fHq5Q_szFYD_Wz394Dnc5G79nG2WHw?usp=sharing).
- Download the 4 sequences from PeopleSnapshot (male/female-3/4-casual) and unzip `images` and `masks` into the corresponding folders from above.
- Use `scripts/preprocess_PeopleSnapshot.py` to preprocess the data.
- Training:
```
# override with instant_avatar.yaml for PeopleSnapshot in InstantAvatar's format
python train_splatting_avatar.py --config "configs/splatting_avatar.yaml;configs/instant_avatar.yaml" --dat_dir <path/to/subject>
# for example:
python train_splatting_avatar.py --config "configs/splatting_avatar.yaml;configs/instant_avatar.yaml" --dat_dir /path-to/female-3-casual
# pretrained checkpoints provided in `output-splatting/last_checkpoint` can be evaluated by `eval_splatting_avatar.py`
# for example:
python eval_splatting_avatar.py --config "configs/splatting_avatar.yaml;configs/instant_avatar.yaml" --dat_dir /path-to/female-3-casual --pc_dir /path-to/female-3-casual/output-splatting/last_checkpoint/point_cloud/iteration_30000
# to animate to a novel pose sequence `aist_demo.npz`
python eval_animate.py --config "configs/splatting_avatar.yaml;configs/instant_avatar.yaml" --dat_dir /path-to/female-3-casual --pc_dir /path-to/female-3-casual/output-splatting/last_checkpoint/point_cloud/iteration_30000 --anim_fn /path-to/aist_demo.npz
```
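The `--config` argument takes several yaml files separated by `;`, with later files overriding earlier ones. The sketch below shows how such a merge could be done with OmegaConf (which is in the dependency list); it is an assumption about the mechanism, not the repo's actual loading code:
```
# hedged sketch of ';'-separated config merging with OmegaConf (not the repo's actual code)
from omegaconf import OmegaConf

def load_configs(config_arg, overrides=None):
    """'a.yaml;b.yaml' -> later files override earlier ones;
    dot-list overrides like ['model.max_n_gauss=300000'] override everything."""
    cfg = OmegaConf.merge(*[OmegaConf.load(fn) for fn in config_arg.split(";")])
    if overrides:
        cfg = OmegaConf.merge(cfg, OmegaConf.from_dotlist(overrides))
    return cfg

cfg = load_configs("configs/splatting_avatar.yaml;configs/instant_avatar.yaml")
```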
## GPU requirement
We conducted our experiments on a single NVIDIA RTX 3090 with 24 GB of memory.
Training with less GPU memory can be achieved by setting a maximum number of Gaussians, either in the config file:
```
# in configs/splatting_avatar.yaml
model:
  max_n_gauss: 300000 # or less as needed
```
or on the command line:
```
python train_splatting_avatar.py --config configs/splatting_avatar.yaml --dat_dir <path/to/subject> model.max_n_gauss=300000
```
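Conceptually, capping the number of Gaussians bounds memory because densification can no longer grow the point set without limit. The snippet below illustrates one way such a budget could be enforced (keeping the most opaque Gaussians); it is a hypothetical sketch, not the repo's densification code.
```
# illustrative sketch of enforcing a max_n_gauss budget (hypothetical, not the repo's code)
import torch

def enforce_gaussian_budget(opacity, max_n_gauss):
    """Return a boolean keep-mask with at most `max_n_gauss` True entries,
    preferring the Gaussians with the highest opacity."""
    n = opacity.shape[0]
    keep = torch.ones(n, dtype=torch.bool, device=opacity.device)
    if n > max_n_gauss:
        keep_idx = torch.topk(opacity.reshape(-1), k=max_n_gauss).indices
        keep = torch.zeros(n, dtype=torch.bool, device=opacity.device)
        keep[keep_idx] = True
    return keep
```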
## Citation
If you find our code or paper useful, please cite as:
```
@inproceedings{shao2024splattingavatar,
title = {{SplattingAvatar: Realistic Real-Time Human Avatars with Mesh-Embedded Gaussian Splatting}},
author = {Shao, Zhijing and Wang, Zhaolong and Li, Zhuang and Wang, Duotun and Lin, Xiangru and Zhang, Yu and Fan, Mingming and Wang, Zeyu},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2024}
}
```
## Acknowledgement
We thank the following authors for their excellent work!
- [instant-nsr-pl](https://github.com/bennyguo/instant-nsr-pl)
- [Gaussian Splatting](https://github.com/graphdeco-inria/gaussian-splatting)
- [IMavatar](https://github.com/zhengyuf/IMavatar)
- [INSTA](https://github.com/Zielon/INSTA)
## License
SplattingAvatar
The code is released under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode) for Noncommercial use only. Any commercial use should get formal permission first.
[Gaussian Splatting](https://github.com/graphdeco-inria/gaussian-splatting/blob/main/LICENSE.md)
**Inria** and **the Max Planck Institut for Informatik (MPII)** hold all the ownership rights on the *Software* named **gaussian-splatting**. The *Software* is in the process of being registered with the Agence pour la Protection des Programmes (APP).