
humangaussian's People

Contributors

alvinliu0

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

humangaussian's Issues

CUDA issues

When running the script, the input is on CUDA but the model parameters are on CPU. How can I solve this problem?
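A common cause is that the model was never moved to the GPU before the forward pass. A minimal sketch (the torch.nn.Linear stand-in and variable names are illustrative, not from the repo):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2)          # stand-in for the module whose parameters sit on CPU
x = torch.randn(1, 4, device=device)   # the input that is already on CUDA

model = model.to(device)               # move the parameters to the input's device
y = model(x)                           # no cpu/cuda mismatch now
```

The same applies to any buffers or auxiliary tensors created on the default device.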

SyntaxError: invalid syntax, but there seems to be no problem with the code

When I run the command
python launch.py --config configs/test.yaml --train --gpu 0 system.prompt_processor.prompt="A boy with a beanie wearing a hoodie and joggers"
The following error is reported:
File "launch.py", line 38
record.levelname = f"{color_start}[{record.levelname}]"
^
SyntaxError: invalid syntax

I don't think there's anything wrong with the code, and I don't know where the problem lies.
Note that I installed xformers from a GitHub wheel.
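For what it's worth, f-strings require Python >= 3.6, so a SyntaxError on that exact line usually means launch.py was started with a different, older interpreter than the conda env's 3.10 (e.g. a bare python resolving to the system Python). A quick hedged check:

```python
import sys

# f"..." literals are a SyntaxError before Python 3.6, regardless of the code
# being correct; verify which interpreter actually runs launch.py.
print(sys.executable, sys.version.split()[0])
assert sys.version_info >= (3, 6), "interpreter too old for f-strings"
```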

My environment is as follows:

Name Version Build Channel

_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
absl-py 2.0.0 pypi_0 pypi
accelerate 0.25.0 pypi_0 pypi
aiofiles 23.2.1 pypi_0 pypi
aiohttp 3.9.1 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
altair 5.2.0 pypi_0 pypi
annotated-types 0.6.0 pypi_0 pypi
antlr4-python3-runtime 4.9.3 pypi_0 pypi
anyio 3.7.1 pypi_0 pypi
appdirs 1.4.4 pypi_0 pypi
async-timeout 4.0.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
bitsandbytes 0.38.1 pypi_0 pypi
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.08.22 h06a4308_0
cachetools 5.3.2 pypi_0 pypi
certifi 2022.12.7 pypi_0 pypi
chardet 5.2.0 pypi_0 pypi
charset-normalizer 2.1.1 pypi_0 pypi
click 8.1.7 pypi_0 pypi
clip 1.0 pypi_0 pypi
cmake 3.25.0 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
colorlog 6.8.0 pypi_0 pypi
contourpy 1.2.0 pypi_0 pypi
controlnet-aux 0.0.7 pypi_0 pypi
cycler 0.12.1 pypi_0 pypi
diff-gaussian-rasterization 0.0.0 pypi_0 pypi
diffusers 0.24.0 pypi_0 pypi
docker-pycreds 0.4.0 pypi_0 pypi
einops 0.7.0 pypi_0 pypi
envlight 0.1.0 pypi_0 pypi
exceptiongroup 1.2.0 pypi_0 pypi
fastapi 0.104.1 pypi_0 pypi
ffmpy 0.3.1 pypi_0 pypi
filelock 3.9.0 pypi_0 pypi
fonttools 4.46.0 pypi_0 pypi
frozenlist 1.4.0 pypi_0 pypi
fsspec 2023.12.0 pypi_0 pypi
ftfy 6.1.3 pypi_0 pypi
gitdb 4.0.11 pypi_0 pypi
gitpython 3.1.40 pypi_0 pypi
google-auth 2.24.0 pypi_0 pypi
google-auth-oauthlib 1.1.0 pypi_0 pypi
gradio 4.7.1 pypi_0 pypi
gradio-client 0.7.0 pypi_0 pypi
grpcio 1.59.3 pypi_0 pypi
h11 0.14.0 pypi_0 pypi
httpcore 1.0.2 pypi_0 pypi
httpx 0.25.2 pypi_0 pypi
huggingface-hub 0.19.4 pypi_0 pypi
idna 3.4 pypi_0 pypi
imageio 2.33.0 pypi_0 pypi
imageio-ffmpeg 0.4.9 pypi_0 pypi
importlib-metadata 7.0.0 pypi_0 pypi
importlib-resources 6.1.1 pypi_0 pypi
jaxtyping 0.2.24 pypi_0 pypi
jinja2 3.1.2 pypi_0 pypi
jsonschema 4.20.0 pypi_0 pypi
jsonschema-specifications 2023.11.2 pypi_0 pypi
kiwisolver 1.4.5 pypi_0 pypi
kornia 0.7.0 pypi_0 pypi
lazy-loader 0.3 pypi_0 pypi
ld_impl_linux-64 2.38 h1181459_1
libffi 3.4.4 h6a678d5_0
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libigl 2.5.0 pypi_0 pypi
libstdcxx-ng 11.2.0 h1234567_1
libuuid 1.41.5 h5eee18b_0
lightning 2.1.2 pypi_0 pypi
lightning-utilities 0.10.0 pypi_0 pypi
lit 15.0.7 pypi_0 pypi
lxml 4.9.3 pypi_0 pypi
mapbox-earcut 1.0.1 pypi_0 pypi
markdown 3.5.1 pypi_0 pypi
markdown-it-py 3.0.0 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
matplotlib 3.8.2 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
multidict 6.0.4 pypi_0 pypi
ncurses 6.4 h6a678d5_0
nerfacc 0.5.2 pypi_0 pypi
networkx 3.0 pypi_0 pypi
ninja 1.11.1.1 pypi_0 pypi
numpy 1.24.1 pypi_0 pypi
nvdiffrast 0.3.1 pypi_0 pypi
oauthlib 3.2.2 pypi_0 pypi
omegaconf 2.3.0 pypi_0 pypi
opencv-python 4.8.1.78 pypi_0 pypi
openssl 3.0.12 h7f8727e_0
orjson 3.9.10 pypi_0 pypi
packaging 23.2 pypi_0 pypi
pandas 2.1.3 pypi_0 pypi
pillow 9.3.0 pypi_0 pypi
pip 23.3.1 py310h06a4308_0
plyfile 1.0.2 pypi_0 pypi
protobuf 4.23.4 pypi_0 pypi
psutil 5.9.6 pypi_0 pypi
pyasn1 0.5.1 pypi_0 pypi
pyasn1-modules 0.3.0 pypi_0 pypi
pycollada 0.7.2 pypi_0 pypi
pydantic 2.5.2 pypi_0 pypi
pydantic-core 2.14.5 pypi_0 pypi
pydub 0.25.1 pypi_0 pypi
pygments 2.17.2 pypi_0 pypi
pymcubes 0.1.4 pypi_0 pypi
pyparsing 3.1.1 pypi_0 pypi
python 3.10.13 h955ad1f_0
python-dateutil 2.8.2 pypi_0 pypi
python-multipart 0.0.6 pypi_0 pypi
pytorch-lightning 2.1.2 pypi_0 pypi
pytz 2023.3.post1 pypi_0 pypi
pyyaml 6.0.1 pypi_0 pypi
readline 8.2 h5eee18b_0
referencing 0.31.1 pypi_0 pypi
regex 2023.10.3 pypi_0 pypi
requests 2.28.1 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rich 13.7.0 pypi_0 pypi
rpds-py 0.13.2 pypi_0 pypi
rsa 4.9 pypi_0 pypi
rtree 1.1.0 pypi_0 pypi
safetensors 0.4.1 pypi_0 pypi
scikit-image 0.22.0 pypi_0 pypi
scipy 1.11.4 pypi_0 pypi
seaborn 0.13.0 pypi_0 pypi
semantic-version 2.10.0 pypi_0 pypi
sentencepiece 0.1.99 pypi_0 pypi
sentry-sdk 1.38.0 pypi_0 pypi
setproctitle 1.3.3 pypi_0 pypi
setuptools 68.0.0 py310h06a4308_0
shapely 2.0.2 pypi_0 pypi
shellingham 1.5.4 pypi_0 pypi
simple-knn 0.0.0 pypi_0 pypi
six 1.16.0 pypi_0 pypi
smmap 5.0.1 pypi_0 pypi
smplx 0.1.28 pypi_0 pypi
sniffio 1.3.0 pypi_0 pypi
sqlite 3.41.2 h5eee18b_0
starlette 0.27.0 pypi_0 pypi
svg-path 6.3 pypi_0 pypi
sympy 1.12 pypi_0 pypi
taming-transformers-rom1504 0.0.6 pypi_0 pypi
tensorboard 2.15.1 pypi_0 pypi
tensorboard-data-server 0.7.2 pypi_0 pypi
tifffile 2023.9.26 pypi_0 pypi
timm 0.9.12 pypi_0 pypi
tinycudann 1.7 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tokenizers 0.15.0 pypi_0 pypi
tomlkit 0.12.0 pypi_0 pypi
toolz 0.12.0 pypi_0 pypi
torch 2.0.1+cu117 pypi_0 pypi
torchaudio 2.0.2+cu117 pypi_0 pypi
torchmetrics 1.2.1 pypi_0 pypi
torchvision 0.15.2+cu117 pypi_0 pypi
tqdm 4.66.1 pypi_0 pypi
transformers 4.35.2 pypi_0 pypi
trimesh 3.21.7 pypi_0 pypi
triton 2.0.0 pypi_0 pypi
typeguard 2.13.3 pypi_0 pypi
typer 0.9.0 pypi_0 pypi
typing-extensions 4.8.0 pypi_0 pypi
tzdata 2023.3 pypi_0 pypi
urllib3 1.26.13 pypi_0 pypi
uvicorn 0.24.0.post1 pypi_0 pypi
wandb 0.16.0 pypi_0 pypi
wcwidth 0.2.12 pypi_0 pypi
websockets 11.0.3 pypi_0 pypi
werkzeug 3.0.1 pypi_0 pypi
wheel 0.41.2 py310h06a4308_0
xatlas 0.0.8 pypi_0 pypi
xformers 0.0.23+246b45c.d20231204 pypi_0 pypi
xxhash 3.4.1 pypi_0 pypi
xz 5.4.2 h5eee18b_0
yarl 1.9.3 pypi_0 pypi
zipp 3.17.0 pypi_0 pypi
zlib 1.2.13 h5eee18b_0

Why are the avatar files almost the same size?

I made Lionel Messi, Lebron James, etc.,
but the .ply files are almost the same size. I also changed the betas (e.g., a larger height for Lebron James), but the size stayed the same. Is there a module to optimize the body shape for specific characters?
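One likely explanation: a 3DGS .ply stores a fixed set of per-point attributes, so the file size tracks the number of Gaussians, not the avatar's physical extent; changing betas rescales positions without changing the point count. A rough sketch (the 62-floats-per-point layout is the common 3DGS convention and is an assumption here, not taken from this repo):

```python
def approx_ply_payload_bytes(num_gaussians, floats_per_point=62):
    """Rough binary payload size of a 3DGS point-cloud .ply.

    Assumed layout (common 3DGS convention): xyz (3) + normals (3) +
    f_dc (3) + f_rest (45) + opacity (1) + scale (3) + rotation (4) = 62
    float32 values per point. Rescaling the avatar changes coordinate
    values, not this count, so the file size stays put.
    """
    return num_gaussians * floats_per_point * 4  # float32 = 4 bytes
```

So two avatars trained with the same Gaussian budget will produce near-identical file sizes regardless of body shape.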

Error at self.dataset = self.trainer.val_dataloaders.dataset: 'list' object has no attribute 'dataset'

Epoch 0: : 100it [05:26, 3.27s/it, loss=202] Traceback (most recent call last): | 0/4 [00:00<?, ?it/s]
File "/home/syy/HumanGaussian-main/launch.py", line 239, in
main(args, extras)
File "/home/syy/HumanGaussian-main/launch.py", line 182, in main
trainer.fit(system, datamodule=dm, ckpt_path=cfg.resume)
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit
call._call_and_handle_interrupt(
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _fit_impl
self._run(model, ckpt_path=self.ckpt_path)
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1103, in _run
results = self._run_stage()
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1182, in _run_stage
self._run_train()
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1205, in _run_train
self.fit_loop.run()
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 267, in advance
self._outputs = self.epoch_loop.run(self._data_fetcher)
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.on_advance_end()
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 250, in on_advance_end
self._run_validation()
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 308, in _run_validation
self.val_loop.run()
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 152, in advance
dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 132, in advance
self._on_evaluation_batch_start(**kwargs)
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 262, in _on_evaluation_batch_start
self.trainer._call_lightning_module_hook(hook_name, *kwargs.values())
File "/home/syy/.conda/envs/motionctrl/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1347, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/home/syy/HumanGaussian-main/threestudio/systems/base.py", line 156, in on_validation_batch_start
self.dataset = self.trainer.val_dataloaders.dataset
AttributeError: 'list' object has no attribute 'dataset'

The error occurs after the 100th iteration and seems related to val_dataloaders. What changes are needed to fix it?
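Depending on the PyTorch Lightning version, trainer.val_dataloaders can be either a single DataLoader or a list of them. A hedged sketch of a defensive accessor for on_validation_batch_start (the helper name is mine):

```python
def first_val_dataset(val_dataloaders):
    """Return the dataset behind trainer.val_dataloaders, whether Lightning
    exposes it as a single DataLoader or as a list of DataLoaders."""
    dl = val_dataloaders
    if isinstance(dl, (list, tuple)):
        dl = dl[0]  # assumes a single validation dataloader is configured
    return dl.dataset
```

In threestudio/systems/base.py the failing line would then read self.dataset = first_val_dataset(self.trainer.val_dataloaders).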

How to convert a .ply to .obj or .fbx?

Hello, when I run launch.py I get a .ply file, but I want to open it in Blender or Maya. Could you give me some advice?
Thanks!
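One caveat: a Gaussian-splat .ply is not a mesh, so Blender/Maya will at best show a point cloud. As a hedged illustration, the positions in an ASCII .ply can be dumped to OBJ vertex lines (binary .ply files would need plyfile or trimesh instead; this parser is a minimal sketch, not a full converter):

```python
def ply_vertices_to_obj(ply_text):
    """Extract vertex positions from an ASCII PLY and emit OBJ 'v' lines.

    A Gaussian-splat .ply carries extra per-point attributes (scales,
    rotations, SH coefficients); only xyz survives this export, so the
    result is a point cloud, not a surfaced mesh.
    """
    lines = ply_text.splitlines()
    end = lines.index("end_header")
    header = lines[:end]
    n = int(next(l for l in header if l.startswith("element vertex")).split()[-1])
    props = [l.split()[2] for l in header if l.startswith("property")]
    ix, iy, iz = props.index("x"), props.index("y"), props.index("z")
    out = []
    for row in lines[end + 1 : end + 1 + n]:
        vals = row.split()
        out.append(f"v {vals[ix]} {vals[iy]} {vals[iz]}")
    return "\n".join(out)
```

To get a textured mesh for FBX export you would need a surface-reconstruction step, which this sketch does not attempt.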

Weird behavior when running launch.py

When I followed the README instructions and ran python launch.py --config configs/test.yaml --train --gpu 0 system.prompt_processor.prompt="A boy with a beanie wearing a hoodie and joggers", folders and files were created endlessly:

"./HumanGaussian/experiments/e1/A_boy_with_a_beanie_wearing_a_hoodie_and_joggers@20231227-114217/code/experiments/e1/A_boy_with_a_beanie_wearing_a_hoodie_and_joggers@20231227-114217/code/experiments/e1/..."

until the run failed with "No space left on device".
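This pattern looks like the code-snapshot step copying the experiment output directory into itself, because the output root lives inside the source tree. A hedged sketch of a fix (the directory names and the snapshot_code helper are assumptions; the repo's actual snapshot logic may differ):

```python
import shutil

def snapshot_code(src, dst):
    # Exclude the output tree (and VCS dirs) so the snapshot cannot recurse
    # into its own destination when `experiments/` sits inside the repo.
    shutil.copytree(
        src, dst,
        ignore=shutil.ignore_patterns("experiments", "outputs", ".git"),
    )
```

Alternatively, point the experiment output directory somewhere outside the repository checkout.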

Render animation without GUI

Hi,

It seems dearpygui cannot be installed on my Linux machine. Is it possible to render the animation without the GUI?

Clarification on Size-Conditioned Gaussian Pruning

Hi @alvinliu0,

Firstly, I want to express my appreciation for sharing your exceptional work.

I have a question regarding the Size-Conditioned Gaussian Pruning technique outlined in your paper.
Specifically, I noted a statement mentioning that "such floating artifacts emerge as tiny size Gaussians during the 3DGS densification process of cloning and splitting."
However, upon reviewing the results of the ablation study, it appears that the pruning is primarily affecting large Gaussians rather than the smaller ones.

Moreover, upon examining the code here, it seems that the intention is to remove the larger Gaussians.

Could you please clarify this discrepancy?
I want to ensure I'm interpreting the methodology correctly.

Thank you.
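For reference, the pruning in question can be expressed as a mask over per-Gaussian scales. This hedged sketch removes Gaussians whose largest axis exceeds a threshold, i.e. the large ones (matching what the issue observes in the released code); the thresholds and the low-opacity prune are purely illustrative:

```python
import torch

def size_conditioned_keep_mask(scales, opacities,
                               max_scale=0.008, min_opacity=0.005):
    """Boolean mask of Gaussians to keep.

    scales:    (N, 3) per-axis Gaussian scales
    opacities: (N, 1) Gaussian opacities
    Drops Gaussians whose largest axis scale exceeds max_scale (the size
    condition) or whose opacity falls below min_opacity.
    """
    too_big = scales.max(dim=1).values > max_scale
    too_faint = opacities.squeeze(-1) < min_opacity
    return ~(too_big | too_faint)
```

Whether the paper's "tiny size Gaussians" wording and this large-Gaussian prune are two different mechanisms is exactly the question raised above; the sketch only formalizes what a scale-thresholded prune does.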

Error in running the launch.py

I get the error below when launching launch.py.

python launch.py --config configs/test.yaml --train --gpu 0 system.prompt_processor.prompt="A boy with a beanie wearing a hoodie and joggers"

WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0)
Python 3.10.13 (you have 3.10.13)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
Seed set to 0
[INFO] Using 16bit Automatic Mixed Precision (AMP)
[INFO] GPU available: True (cuda), used: True
[INFO] TPU available: False, using: 0 TPU cores
[INFO] IPU available: False, using: 0 IPUs
[INFO] HPU available: False, using: 0 HPUs
[INFO] You are using a CUDA device ('NVIDIA RTX A6000') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
[INFO] LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Number of points at initialisation : 100000
[INFO]
| Name | Type | Params


0 Trainable params
0 Non-trainable params
0 Total params
0.000 Total estimated model params size (MB)
[INFO] Validation results will be saved to /save/name-of-this-experiment-run/A_boy_with_a_beanie_wearing_a_hoodie_and_joggers@20231224-034127/save
[INFO] Using prompt [A boy with a beanie wearing a hoodie and joggers] and negative prompt [shadow, dark face, colorful hands, eyeglass, glasses, (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck]
[INFO] Using view-dependent prompts [side]:[A boy with a beanie wearing a hoodie and joggers, side view] [front]:[A boy with a beanie wearing a hoodie and joggers, front view] [back]:[A boy with a beanie wearing a hoodie and joggers, back view] [overhead]:[A boy with a beanie wearing a hoodie and joggers, overhead view]
[INFO] Loading Texture-Structure Joint Model ...
The config attributes {'decay': 0.9999, 'inv_gamma': 1.0, 'min_decay': 0.0, 'optimization_step': 152000, 'power': 0.6666666666666666, 'update_after_step': 0, 'use_ema_warmup': False} were passed to UNet2DConditionModel, but are not expected and will be ignored. Please verify your config.json configuration file.
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 111, in load_state_dict
return safetensors.torch.load_file(checkpoint_file, device="cpu")
File "/opt/conda/lib/python3.10/site-packages/safetensors/torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 122, in load_state_dict
raise ValueError(
ValueError: Unable to locate the file ./texture_structure_joint/unet_ema/config.json which is necessary to load this pretrained model. Make sure you have saved the model properly.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/HumanGaussian/launch.py", line 239, in
main(args, extras)
File "/HumanGaussian/launch.py", line 182, in main
trainer.fit(system, datamodule=dm, ckpt_path=cfg.resume)
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
call._call_and_handle_interrupt(
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 970, in _run
call._call_lightning_module_hook(self, "on_fit_start")
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 157, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/HumanGaussian/threestudio/systems/GaussianDreamer.py", line 314, in on_fit_start
self.guidance = threestudio.find(self.cfg.guidance_type)(self.cfg.guidance)
File "/HumanGaussian/threestudio/utils/base.py", line 63, in init
self.configure(*args, **kwargs)
File "/HumanGaussian/threestudio/models/guidance/dual_branch_guidance.py", line 97, in configure
unet = UNet2DConditionModel.from_pretrained(self.cfg.model_key, subfolder="unet_ema").to(self.weights_dtype)
File "/opt/conda/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 800, in from_pretrained
state_dict = load_state_dict(model_file, variant=variant)
File "/opt/conda/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 127, in load_state_dict
raise OSError(
OSError: Unable to load weights from checkpoint file for './texture_structure_joint/unet_ema/config.json' at './texture_structure_joint/unet_ema/config.json'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.

Kindly help me with this.
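A HeaderTooLarge error from safetensors often means the downloaded checkpoint is still a git-lfs pointer stub rather than the real weights (e.g. the model repo was cloned without running git lfs pull). A hedged heuristic check:

```python
def looks_like_lfs_pointer(path):
    """True if the file starts with a git-lfs pointer header instead of
    actual safetensors data; re-download with `git lfs pull` if so."""
    with open(path, "rb") as f:
        head = f.read(64)
    return head.startswith(b"version https://git-lfs.github.com/spec/")
```

A pointer stub is only a few hundred bytes, so checking the file size against the expected multi-GB weight file is an even quicker sanity test.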

An error is reported when training the model with two 3090s.

Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2
[INFO] ----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 2 processes

[INFO] LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
[INFO] LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]
Traceback (most recent call last):
File "/home/shengbo/HumanGaussian-main/launch.py", line 239, in
main(args, extras)
File "/home/shengbo/HumanGaussian-main/launch.py", line 182, in main
trainer.fit(system, datamodule=dm, ckpt_path=cfg.resume)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
Traceback (most recent call last):
File "/home/shengbo/HumanGaussian-main/launch.py", line 239, in
main(args, extras)
File "/home/shengbo/HumanGaussian-main/launch.py", line 182, in main
trainer.fit(system, datamodule=dm, ckpt_path=cfg.resume)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
call._call_and_handle_interrupt(
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 43, in _call_and_handle_interrupt
call._call_and_handle_interrupt(
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 43, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 105, in launch
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 105, in launch
return function(*args, **kwargs)
return function(*args, **kwargs)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
self._run(model, ckpt_path=ckpt_path)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 963, in _run
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 963, in _run
self.strategy.setup(self)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 171, in setup
self.strategy.setup(self)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 171, in setup
self.configure_ddp()
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 283, in configure_ddp
self.configure_ddp()
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 283, in configure_ddp
self.model = self._setup_model(self.model)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 195, in _setup_model
self.model = self._setup_model(self.model)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 195, in _setup_model
return DistributedDataParallel(module=model, device_ids=device_ids, **self._ddp_kwargs)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 678, in init
return DistributedDataParallel(module=model, device_ids=device_ids, **self._ddp_kwargs)
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 678, in init
self._log_and_throw(
self._log_and_throw(
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1037, in _log_and_throw
File "/home/shengbo/anaconda3/envs/humangs/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1037, in _log_and_throw
raise err_type(err_msg)
RuntimeError: DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient.
raise err_type(err_msg)
RuntimeError: DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient.

AMD GPU

Is there any way to run this using an AMD GPU?

What to do if HuggingFace cannot be reached?

requests.exceptions.ConnectionError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /stabilityai/stable-diffusion-2-base/resolve/main/tokenizer/config.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f80afce3400>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"), '(Request ID: 33cda6e5-6601-4ed8-abcc-296d8b9a2b32)')

huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

Is there any way to work around this?
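Two hedged workarounds when huggingface.co is unreachable: point huggingface_hub at a mirror via the HF_ENDPOINT environment variable (hf-mirror.com is a community mirror; its availability is not guaranteed), or pre-download the weights once and then load from the local cache only:

```python
import os

# Must be set before huggingface_hub / diffusers are imported,
# since the endpoint is read at import time.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

# Or, once the files are already in the local cache:
# from diffusers import DiffusionPipeline
# pipe = DiffusionPipeline.from_pretrained(
#     "stabilityai/stable-diffusion-2-base", local_files_only=True)
```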

CUDA out of memory

Following is the detailed error:
Epoch 0: | | 0/? [00:00<?, ?it/s]Traceback (most recent call last):
File "launch.py", line 239, in
main(args, extras)
File "launch.py", line 182, in main
trainer.fit(system, datamodule=dm, ckpt_path=cfg.resume)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
call._call_and_handle_interrupt(
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 987, in _run
results = self._run_stage()
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1033, in _run_stage
self.fit_loop.run()
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 205, in run
self.advance()
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 363, in advance
self.epoch_loop.run(self._data_fetcher)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 140, in run
self.advance(data_fetcher)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 250, in advance
batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 190, in run
self._optimizer_step(batch_idx, closure)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 268, in _optimizer_step
call._call_lightning_module_hook(
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 157, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1303, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 152, in step
step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 239, in optimizer_step
return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/amp.py", line 80, in optimizer_step
closure_result = closure()
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 144, in call
self._result = self.closure(*args, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 129, in closure
step_output = self._step_fn()
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 318, in _training_step
training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 309, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 391, in training_step
return self.lightning_module.training_step(*args, **kwargs)
File "/home/yejr/Digital_Avater/HumanGaussian-main/threestudio/systems/GaussianDreamer.py", line 340, in training_step
guidance_out = self.guidance(
File "/home/yejr/Digital_Avater/HumanGaussian-main/threestudio/models/guidance/dual_branch_guidance.py", line 770, in call
midas_depth_latents = self.encode_images(midas_depth_BCHW_512.to(self.weights_dtype))
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
return func(*args, **kwargs)
File "/home/yejr/Digital_Avater/HumanGaussian-main/threestudio/models/guidance/dual_branch_guidance.py", line 243, in encode_images
posterior = self.vae.encode(imgs.to(self.weights_dtype)).latent_dist
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/diffusers/models/autoencoders/autoencoder_kl.py", line 260, in encode
h = self.encoder(x)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/diffusers/models/autoencoders/vae.py", line 172, in forward
sample = down_block(sample)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 1465, in forward
hidden_states = resnet(hidden_states, temb=None)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/diffusers/models/resnet.py", line 371, in forward
hidden_states = self.conv2(hidden_states)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/media/data4/yejr/conda_env/HumanGaussian/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 23.70 GiB total capacity; 21.84 GiB already allocated; 6.56 MiB free; 22.12 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I use one RTX 3090 to train. What should I do?
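Some hedged mitigations for a 24 GB card, none guaranteed to make this config fit: reduce allocator fragmentation, lower the VAE's encode peak, and/or halve the rendered resolution in the YAML:

```python
import os

# Read when the CUDA caching allocator initializes, so set this before the
# first CUDA allocation (or export it in the shell before launching).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# If the guidance's self.vae is a diffusers AutoencoderKL, these trade a
# little speed for a lower peak during encode_images:
# self.vae.enable_slicing()
# self.vae.enable_tiling()
```

The OOM here happens inside the VAE encode of the depth branch, which is why the VAE knobs are the first thing to try.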

Why are the skeleton image and the Gaussian-rendered image in different camera views?

To be clear, when I use the code

viewpoint_cam = Camera(c2w=batch['c2w'][id], FoVy=batch['fovy'][id], height=batch['height'], width=batch['width'])
render_pkg = render(viewpoint_cam, self.gaussian, self.pipe, renderbackground)
image, viewspace_point_tensor, radii = render_pkg["render"], render_pkg["viewspace_points"], render_pkg["radii"]

I get an image like this:
it0_train_pose

However, when I use pyrender to generate the image using the code

self.nc.camera.yfov = batch['fovy'][id]
self.scene_smplx.set_pose(self.nc, pose=batch['c2w'][id].cpu().numpy())
depth_smplx = self.renderer_smplx.render(self.scene_smplx, flags=pyrender.RenderFlags.DEPTH_ONLY)

or project the 3D points onto the image directly with the MVP matrix using the code

points = self.points3D @ mvp.T  # [18, 4]
points = points[:, :3] / points[:, 3:]  # NDC in [-1, 1]
xs = (points[:, 0] + 1) / 2 * H  # [18]
ys = (points[:, 1] + 1) / 2 * W  # [18]

I get images like these:
it0_depth_smplx
it0_skeleton
It is clear that, unlike the first image, the second and third images appear shifted horizontally.
Do you have any idea what is happening? Thanks a lot!
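One common cause of exactly this symptom is a camera-convention mismatch: the Gaussian renderer and pyrender may interpret the same `c2w` matrix with different camera axes (OpenGL-style y-up/z-backward vs. OpenCV-style y-down/z-forward), which shows up as shifted or mirrored renders. A hedged sketch of the usual conversion (the helper name, and the assumption that this is the cause here, are mine, not from the repo):

```python
import numpy as np

def flip_camera_axes(c2w):
    """Flip the camera's y and z axes in a 4x4 camera-to-world matrix.

    This converts between the OpenGL convention (y up, z backward) and
    the OpenCV convention (y down, z forward). Try rendering with the
    converted matrix in pyrender and check whether the views align.
    """
    out = np.array(c2w, dtype=float).copy()
    out[:3, 1] *= -1.0  # flip the y axis column
    out[:3, 2] *= -1.0  # flip the z axis column
    return out
```

If the shift persists after the flip, the remaining offset is usually a translation-convention difference rather than a rotation one.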

The problem of view dependent prompt

It appears that the SMPLX coordinate system is not aligned with the threestudio coordinate system, resulting in incorrect view-dependent prompts during generation. For instance, an azimuth angle of around -180 should represent a side view, but it currently also maps to a back-view prompt.

problem
view
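A workaround, assuming the two frames differ only by a fixed rotation about the vertical axis, is to add a calibration offset to the azimuth before mapping it to a prompt suffix. The function below is a hypothetical sketch (the offset value and the threshold angles are assumptions to be tuned against actual renders, not values from the repo):

```python
def view_direction(azimuth_deg, offset_deg=90.0):
    """Map an azimuth angle to a view-dependent prompt suffix.

    offset_deg compensates for a fixed rotation between the SMPLX
    frame and the threestudio frame; tune it (e.g. 90 or 180) until
    azimuth 0 really corresponds to the front of the body.
    """
    a = (azimuth_deg + offset_deg) % 360.0
    if a < 45.0 or a >= 315.0:
        return "front view"
    if a < 135.0:
        return "side view"
    if a < 225.0:
        return "back view"
    return "side view"
```

With `offset_deg=90`, an azimuth of -180 maps to a side-view prompt, matching the behavior described above.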

There is a problem trying to resume the model

Dear author, thank you for your excellent work!
I noticed that this work is based on threestudio, so I tried to resume the model from a pre-trained checkpoint the way threestudio does, but got an error. Here are the command I ran and the results I got.
Input:

```
python launch.py --config configs/test.yaml --train --gpu 0 system.prompt_processor.prompt="A boy with a beanie wearing a hoodie and joggers" trainer.max_steps=7200
```
Output:
```
Traceback (most recent call last):
  File "launch.py", line 239, in <module>
    main(args, extras)
  File "launch.py", line 182, in main
    trainer.fit(system, datamodule=dm, ckpt_path=cfg.resume)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 989, in _run
    results = self._run_stage()
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1035, in _run_stage
    self.fit_loop.run()
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 202, in run
    self.advance()
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 359, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 136, in run
    self.advance(data_fetcher)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 240, in advance
    batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 187, in run
    self._optimizer_step(batch_idx, closure)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 265, in _optimizer_step
    call._call_lightning_module_hook(
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 157, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1291, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 151, in step
    step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 230, in optimizer_step
    return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/amp.py", line 93, in optimizer_step
    step_output = self.scaler.step(optimizer, **kwargs)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 370, in step
    retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 290, in _maybe_opt_step
    retval = optimizer.step(*args, **kwargs)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/torch/optim/optimizer.py", line 280, in wrapper
    out = func(*args, **kwargs)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/torch/optim/optimizer.py", line 33, in _use_grad
    ret = func(self, *args, **kwargs)
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/torch/optim/adam.py", line 141, in step
    adam(
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/torch/optim/adam.py", line 281, in adam
    func(params,
  File "/root/autodl-tmp/SunXiaoKun/Others/anaconda3/envs/cd118py38tc200/lib/python3.8/site-packages/torch/optim/adam.py", line 446, in _multi_tensor_adam
    torch._foreach_add_(device_exp_avgs, device_grads, alpha=1 - beta1)
RuntimeError: The size of tensor a (542541) must match the size of tensor b (100000) at non-singleton dimension 0
Epoch 0: |          | 0/? [00:03<?, ?it/s]
```

Is this because I am doing something wrong or is this feature not currently supported?
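The error message itself points at the likely cause: Adam's moment buffers were created for the initial 100000 Gaussians, while the checkpoint holds 542541 points after densification, so `torch._foreach_add_` fails on the size mismatch. One plausible fix (a toy sketch of the idea, not the repo's actual API) is to rebuild the optimizer state whenever the loaded point count no longer matches the buffers:

```python
class AdamMoments:
    """Minimal stand-in for Adam's per-parameter moment buffers."""
    def __init__(self, n_points):
        self.exp_avg = [0.0] * n_points

def moments_for_resume(old_moments, n_points_in_ckpt):
    """Return moment buffers that match the loaded Gaussian count.

    If the checkpoint was saved after densification, the stale buffers
    (sized for the initial point cloud) cannot be reused; allocate
    fresh ones instead of letting the optimizer step crash.
    """
    if len(old_moments.exp_avg) != n_points_in_ckpt:
        return AdamMoments(n_points_in_ckpt)
    return old_moments
```

In practice this would mean recreating the optimizer from scratch after loading the Gaussian parameters, rather than restoring its saved state.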
