
mvedit's People

Contributors

lakonik


mvedit's Issues

Top and bottom views

Due to the limited vertical angles when using Zero123, the top and bottom views can suffer in both the mesh and the texture.

Bad texture example:
(image attached)

Bad mesh example:
(image attached)

Would it be possible to have more control via the UI?

Thanks!

API is not working

Hello :)

Thanks for your work and efforts.

I'm trying to generate 3D models from 2D images using the API that you provide.
To this end, I launch the Gradio web UI following the instructions:

python app.py --empty-cache --share

Then I edit line 10 of the code you provide for running the API, replacing the URL with my own:

import os
import shutil
import tqdm
from gradio_client import Client

in_dir = 'MY_OWN_IMAGE_DIR'
out_dir = 'exp'
os.makedirs(out_dir, exist_ok=True)

client = Client('MY_OWN_URL')  # Use your own URL here

for img_name in tqdm.tqdm(os.listdir(in_dir)):
    img_path = os.path.join(in_dir, img_name)
    seed = 42

    seg_result = client.predict(
        img_path,
        api_name='/image_segmentation')

    zero123_result = client.predict(
        seed,
        seg_result,
        api_name='/img_to_3d_1_2_zero123plus')

    # output path to the .glb mesh
    mvedit_result = client.predict(
        seed,
        seg_result,
        '',  # 'Prompt' Textbox component
        '',  # 'Negative prompt' Textbox component
        'DPMSolverMultistep',  # 'Sampling method' Dropdown component
        24,  # 'Sampling steps' Slider component
        0.5,  # 'Denoising strength' Slider component
        False,  # 'Random initialization' Checkbox component
        7,  # 'CFG scale' Slider component
        True,  # 'Texture super-resolution' Checkbox component
        'DPMSolverSDEKarras',  # 'Sampling method' Dropdown component (texture super-resolution)
        24,  # 'Sampling steps' Slider component (texture super-resolution)
        0.4,  # 'Denoising strength' Slider component (texture super-resolution)
        False,  # 'Random initialization' Checkbox component (texture super-resolution)
        7,  # 'CFG scale' Slider component (texture super-resolution)
        *zero123_result,
        api_name='/img_to_3d_1_2_zero123plus_to_mesh')

    shutil.move(mvedit_result, os.path.join(out_dir, os.path.splitext(img_name)[0] + '.glb'))
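(For reference, the argument list that each endpoint expects can be printed from the same client; a minimal sketch using gradio_client's built-in view_api(), reusing the MY_OWN_URL placeholder from above:)

from gradio_client import Client

client = Client('MY_OWN_URL')  # Use your own URL here
# Prints every named endpoint with its expected inputs and outputs,
# so the positional arguments above can be checked against the live app.
client.view_api(all_endpoints=True)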

Finally, I run the API code.
While SAM and Zero123++ run well, I observe that zero123plus_to_mesh fails to run.

Below is the error log:

Running Zero123++ generation with seed 42...
100%|██████████| 40/40 [00:03<00:00, 12.50it/s]
(... 11 more 40/40 progress bars omitted ...)
Zero123++ generation finished.
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/opt/conda/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/opt/conda/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/opt/conda/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/opt/conda/lib/python3.10/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/opt/conda/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/opt/conda/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/opt/conda/lib/python3.10/site-packages/starlette/routing.py", line 72, in app
    response = await func(request)
  File "/opt/conda/lib/python3.10/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/opt/conda/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/opt/conda/lib/python3.10/site-packages/gradio/routes.py", line 781, in upload_file
    form = await multipart_parser.parse()
  File "/opt/conda/lib/python3.10/site-packages/gradio/route_utils.py", line 527, in parse
    async for chunk in self.stream:
  File "/opt/conda/lib/python3.10/site-packages/starlette/requests.py", line 238, in stream
    raise ClientDisconnect()
starlette.requests.ClientDisconnect

I think this error is caused by JavaScript. Unfortunately, I'm not familiar with it,
so I hope you can check the API code and fix the error.
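(In case the disconnect is transient, one workaround to try is simply retrying the failing call; a minimal sketch of my own, not from the repo:)

def predict_with_retry(client, *args, api_name, max_tries=3):
    # ClientDisconnect on the server side usually means the HTTP upload
    # was cut off mid-request, so a clean retry can succeed.
    last_err = None
    for attempt in range(max_tries):
        try:
            return client.predict(*args, api_name=api_name)
        except Exception as err:
            last_err = err
            print(f'attempt {attempt + 1} failed: {err}')
    raise last_err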

Thanks! :)

Collect all downloaded data

Hi! I'm trying to build a Docker image with all the necessary data, so that it can run without an Internet connection. Could you please provide a list of all the downloaded checkpoints and their paths, so I can copy them into the image?
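(In the meantime, a minimal sketch that enumerates whatever has already landed in the Hugging Face cache; this assumes most checkpoints come from the Hub, and anything stored elsewhere would still need to be listed by the authors:)

from huggingface_hub import scan_cache_dir

# Walks the local Hub cache (~/.cache/huggingface/hub by default) and
# reports every cached repo with its on-disk path, ready to copy into an image.
cache = scan_cache_dir()
for repo in sorted(cache.repos, key=lambda r: r.repo_id):
    print(f'{repo.repo_id}  {repo.size_on_disk_str}  {repo.repo_path}')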

Training

Excellent work! Can you tell me how much compute was spent on training the model you are using now?

License

Hi! This is great work.

Could you please add a license as well? I'm unable to figure out whether it is the MIT license or not. Thanks :D

CLI Runtime

Hey guys,

Really loving this project so far; lots of tools to play around with, which is always fun!

Any chance of a straightforward way to run some pipelines via the CLI? It would be greatly appreciated.

For Windows users

If you are on Windows, you may need to follow these steps to run it locally after creating the conda env (see the command sketch after this list):

  1. Remove triton from requirements.txt.
  2. Download a Triton wheel for Windows from https://huggingface.co/madbuda/triton-windows-builds; by default this repo depends on Python 3.10, so pick a cp310 build.
  3. Manually pip install that whl file.
  4. conda install cryptacular.
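For reference, steps 2-4 boil down to commands like the following (the wheel filename is a placeholder; use the actual name of the build you downloaded):

pip install triton-<version>-cp310-cp310-win_amd64.whl
conda install cryptacular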

For me, it runs successfully on Windows 10 with a 3090:
(screenshot attached)
Enjoy!

local variable 'num_keep_views' referenced before assignment

I'm trying to run MVEdit from Python and am getting this error on the last step, runner.run_zero123plus1_2_to_mesh(seed, img_segm, *args):


UnboundLocalError Traceback (most recent call last)
Cell In[15], line 1
----> 1 glb_path = runner.run_zero123plus1_2_to_mesh(42, img_segm, *args)

File ~/shares/SR004.nfs2/fominaav/3D/MVEdit/lib/apis/mvedit.py:49, in _api_wrapper.<locals>.wrapper(*args, **kwargs)
47 torch.set_grad_enabled(False)
48 torch.backends.cuda.matmul.allow_tf32 = True
---> 49 ret = func(*args, **kwargs)
50 gc.collect()
51 if self.empty_cache:

File ~/shares/SR004.nfs2/fominaav/3D/MVEdit/lib/apis/mvedit.py:841, in MVEditRunner.run_zero123plus1_2_to_mesh(self, seed, in_img, cache_dir, *args, **kwargs)
838 intrinsics = torch.cat([in_intrinsics[None, :], intrinsics[None, :].expand(camera_poses.size(0), -1)], dim=0)
839 camera_poses = torch.cat([in_pose[None, :3], camera_poses], dim=0)
--> 841 out_mesh, ingp_states = self.proc_nerf_mesh(
842 pipe, seed, nerf_mesh_kwargs, superres_kwargs, init_images=init_images, normals=init_normals,
843 camera_poses=camera_poses, intrinsics=intrinsics, intrinsics_size=intrinsics_size,
844 cam_weights=[2.0] + [1.1, 0.95, 0.9, 0.85, 1.0, 1.05] * 6, seg_padding=96,
845 keep_views=[0], ip_adapter=self.ip_adapter, use_reference=True, use_normal=True)
847 if superres_kwargs['do_superres']:
848 self.load_stable_diffusion(superres_kwargs['checkpoint'])

File ~/shares/SR004.nfs2/fominaav/3D/MVEdit/lib/apis/mvedit.py:447, in MVEditRunner.proc_nerf_mesh(self, pipe, seed, nerf_mesh_kwargs, superres_kwargs, front_azi, camera_poses, use_reference, use_normal, **kwargs)
443 set_random_seed(seed, deterministic=True)
444 prompts = nerf_mesh_kwargs['prompt'] if front_azi is None \
445     else [join_prompts(nerf_mesh_kwargs['prompt'], view_prompt)
446           for view_prompt in view_prompts(camera_poses, front_azi)]
--> 447 out_mesh, ingp_states = pipe(
448 prompt=prompts,
449 negative_prompt=nerf_mesh_kwargs['negative_prompt'],
450 camera_poses=camera_poses,
451 use_reference=use_reference,
452 use_normal=use_normal,
453 guidance_scale=nerf_mesh_kwargs['cfg_scale'],
454 num_inference_steps=nerf_mesh_kwargs['steps'],
455 denoising_strength=None if nerf_mesh_kwargs['random_init'] else nerf_mesh_kwargs['denoising_strength'],
456 patch_size=nerf_mesh_kwargs['patch_size'],
457 patch_bs=nerf_mesh_kwargs['patch_bs'],
458 diff_bs=nerf_mesh_kwargs['diff_bs'],
459 render_bs=nerf_mesh_kwargs['render_bs'],
460 n_inverse_rays=nerf_mesh_kwargs['patch_size'] ** 2 * nerf_mesh_kwargs['patch_bs_nerf'],
461 n_inverse_steps=nerf_mesh_kwargs['n_inverse_steps'],
462 init_inverse_steps=nerf_mesh_kwargs['init_inverse_steps'],
463 tet_init_inverse_steps=nerf_mesh_kwargs['tet_init_inverse_steps'],
464 default_prompt=nerf_mesh_kwargs['aux_prompt'],
465 default_neg_prompt=nerf_mesh_kwargs['aux_negative_prompt'],
466 alpha_soften=nerf_mesh_kwargs['alpha_soften'],
467 normal_reg_weight=lambda p: nerf_mesh_kwargs['normal_reg_weight'] * (1 - p),
468 entropy_weight=lambda p: nerf_mesh_kwargs['start_entropy_weight'] + (
469 nerf_mesh_kwargs['end_entropy_weight'] - nerf_mesh_kwargs['start_entropy_weight']) * p,
470 bg_width=nerf_mesh_kwargs['entropy_d'],
471 mesh_normal_reg_weight=nerf_mesh_kwargs['mesh_smoothness'],
472 lr_schedule=lambda p: nerf_mesh_kwargs['start_lr'] + (
473 nerf_mesh_kwargs['end_lr'] - nerf_mesh_kwargs['start_lr']) * p,
474 tet_resolution=nerf_mesh_kwargs['tet_resolution'],
475 bake_texture=not superres_kwargs['do_superres'],
476 prog_bar=gr.Progress().tqdm,
477 out_dir=self.out_dir_3d,
478 save_interval=self.save_interval,
479 save_all_interval=self.save_all_interval,
480 mesh_reduction=128 / nerf_mesh_kwargs['tet_resolution'],
481 max_num_views=partial(
482 default_max_num_views,
483 start_num=nerf_mesh_kwargs['max_num_views'],
484 mid_num=nerf_mesh_kwargs['max_num_views'] // 2),
485 debug=self.debug,
486 **kwargs
487 )
488 return out_mesh, ingp_states

File /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)

File ~/shares/SR004.nfs2/fominaav/3D/MVEdit/lib/pipelines/mvedit_3d_pipeline.py:1323, in MVEdit3DPipeline.__call__(self, prompt, negative_prompt, in_model, ingp_states, init_images, cond_images, extra_control_images, normals, nerf_code, density_grid, density_bitfield, camera_poses, intrinsics, intrinsics_size, use_reference, use_normal, cam_weights, keep_views, guidance_scale, num_inference_steps, denoising_strength, progress_to_dmtet, tet_resolution, patch_size, patch_bs, diff_bs, render_bs, n_inverse_rays, n_inverse_steps, init_inverse_steps, tet_init_inverse_steps, seg_padding, ip_adapter, tile_weight, depth_weight, blend_weight, lr_schedule, lr_multiplier, render_size_p, max_num_views, depth_p_weight, patch_rgb_weight, patch_normal_weight, entropy_weight, alpha_soften, normal_reg_weight, mesh_normal_reg_weight, ambient_light, mesh_reduction, mesh_simplify_texture_steps, dt_gamma_scale, testmode_dt_gamma_scale, bg_width, ablation_nodiff, debug, out_dir, save_interval, save_all_interval, default_prompt, default_neg_prompt, bake_texture, map_size, prog_bar)
1320 batch_scheduler = [deepcopy(self.scheduler) for _ in range(num_cameras)]
1322 else:
-> 1323 max_num_cameras = max(int(round(max_num_views(progress, progress_to_dmtet))), num_keep_views)
1324 if max_num_cameras < num_cameras:
1325 keep_ids = torch.arange(num_cameras, device=device)

UnboundLocalError: local variable 'num_keep_views' referenced before assignment
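(For context, this class of error appears when a function assigns a variable in only one branch and then reads it unconditionally; a tiny standalone repro of the pattern, not the pipeline's actual code:)

def pick_views(keep_views=None):
    if keep_views:
        # num_keep_views is only assigned when keep_views is non-empty
        num_keep_views = len(keep_views)
    # reading it unconditionally fails whenever the branch above was skipped
    return max(10, num_keep_views)

pick_views(None)  # raises UnboundLocalError, like the pipeline call above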

MY CODE

import os
import sys

sys.path.append(os.path.abspath(os.path.join(__file__, '../')))
if 'OMP_NUM_THREADS' not in os.environ:
    os.environ['OMP_NUM_THREADS'] = '16'

import shutil
import os.path as osp
import argparse
import torch
import gradio as gr
from functools import partial
from lib.core.mvedit_webui.shared_opts import send_to_click
from lib.core.mvedit_webui.tab_img_to_3d import create_interface_img_to_3d
from lib.core.mvedit_webui.tab_3d_to_3d import create_interface_3d_to_3d
from lib.core.mvedit_webui.tab_text_to_img_to_3d import create_interface_text_to_img_to_3d
from lib.core.mvedit_webui.tab_retexturing import create_interface_retexturing
from lib.core.mvedit_webui.tab_3d_to_video import create_interface_3d_to_video
from lib.core.mvedit_webui.tab_stablessdnerf_to_3d import create_interface_stablessdnerf_to_3d
from lib.apis.mvedit import MVEditRunner
from lib.version import __version__
from collections import OrderedDict
import random

DEBUG_SAVE_INTERVAL = {
    0: None,
    1: 4,
    2: 1}

torch.set_grad_enabled(False)
runner = MVEditRunner(
    device=torch.device('cuda'),
    local_files_only=False,
    unload_models=False,
    out_dir='viz',
    save_interval=DEBUG_SAVE_INTERVAL[0],
    save_all_interval=1 if DEBUG_SAVE_INTERVAL[0] == 2 else None,
    dtype=torch.float16,
    debug=False,
    no_safe=False
)

seed = random.randint(0, 2**31)

out_img = runner.run_text_to_img(seed, 512, 512, 'red car', '', 'DPMSolverMultistep', 32, 7, 
                                 'Lykon/dreamshaper-8', 'best quality, sharp focus, photorealistic, extremely detailed', 
                                 'worst quality, low quality, depth of field, blurry, out of focus, low-res, illustration, painting, drawing', {})

img_segm = runner.run_segmentation(out_img)
init_images = runner.run_zero123plus1_2(seed, img_segm)

nerf_mesh_args = OrderedDict([
    ('prompt', 'red car'),
    ('negative_prompt', ''),
    ('scheduler', 'DPMSolverMultistep'),
    ('steps', 24),
    ('denoising_strength', 0.5),
    ('random_init', False),
    ('cfg_scale', 7),
    ('checkpoint', 'runwayml/stable-diffusion-v1-5'),
    ('max_num_views', 32),
    ('aux_prompt', 'best quality, sharp focus, photorealistic, extremely detailed'),
    ('aux_negative_prompt', 'worst quality, low quality, depth of field, blurry, out of focus, low-res, '
                            'illustration, painting, drawing'),
    ('diff_bs', 4),
    ('patch_size', 128),
    ('patch_bs_nerf', 1),
    ('render_bs', 6),
    ('patch_bs', 8),
    ('alpha_soften', 0.02),
    ('normal_reg_weight', 4.0),
    ('start_entropy_weight', 0.0),
    ('end_entropy_weight', 4.0),
    ('entropy_d', 0.015),
    ('mesh_smoothness', 1.0),
    ('n_inverse_steps', 96),
    ('init_inverse_steps', 720),
    ('tet_init_inverse_steps', 120),
    ('start_lr', 0.01),
    ('end_lr', 0.005),
    ('tet_resolution', 128)])

superres_defaults = OrderedDict([
    ('do_superres', True),
    ('scheduler', 'DPMSolverSDEKarras'),
    ('steps', 24),
    ('denoising_strength', 0.4),
    ('random_init', False),
    ('cfg_scale', 7),
    ('checkpoint', 'runwayml/stable-diffusion-v1-5'),
    ('aux_prompt', 'best quality, sharp focus, photorealistic, extremely detailed'),
    ('aux_negative_prompt', 'worst quality, low quality, depth of field, blurry, out of focus, low-res, '
                            'illustration, painting, drawing'),
    ('patch_size', 512),
    ('patch_bs', 1),
    ('n_inverse_steps', 48),
    ('start_lr', 0.01),
    ('end_lr', 0.01)])

sr_args = list(superres_defaults.values())
nerf_mesh_args = list(nerf_mesh_args.values())
args = []
args.extend(nerf_mesh_args)
args.extend(sr_args)
args.extend(init_images)
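# NOTE: extending with an empty dict adds no elements, so the line below has
# no effect; if a trailing {} argument is intended here (as in the
# run_text_to_img call above), it would need args.append({}) instead.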
args.extend({})

glb_path = runner.run_zero123plus1_2_to_mesh(seed, img_segm, *args)

Can you please help me run it correctly?

The web demo isn't working

Hi there! Nice work, but the web demo is not working anymore.
I get this error:

upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection termination
Failed to load resource: the server responded with a status of 503 ()

Possible to provide multiview images as inputs?

Hi, can I supply the multi-view images myself and ask MVEdit to generate the 3D model from them, instead of relying on it to generate all the different views from a single image? Are there any specific input image angles required for this?
