
Comments (10)

wangrun20 avatar wangrun20 commented on July 28, 2024

In my series of experiments, the face selfie videos perform the best and most stable.

from hypernerf.

mengzhaoZ avatar mengzhaoZ commented on July 28, 2024

> In my series of experiments, the face selfie videos perform the best and most stable.

Thank you for your reply. My dataset is an endoscope dataset, and I don't know whether this method applies to it. Also, is there any requirement on FPS when extracting frames from the video? My camera moves slowly; will that affect the quality of training?

from hypernerf.

wangrun20 avatar wangrun20 commented on July 28, 2024

When preparing datasets, HyperNeRF samples a number of frames (default: 100) from the video at equal intervals, then reconstructs from those frames.
So I think the key to good reconstruction is not a high frame rate or camera speed, but having enough pictures from different viewing angles.

from hypernerf.
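As a rough illustration of that equal-interval sampling (not the actual HyperNeRF/Nerfies preprocessing code; the function name is mine), the index selection can be sketched like this:

```python
# Sketch of equal-interval frame sampling, similar in spirit to the
# HyperNeRF preprocessing step (default: ~100 frames per video).
# This helper only computes which frame indices to keep; actual frame
# extraction (e.g. with OpenCV's VideoCapture) would use these indices.

def evenly_spaced_indices(total_frames: int, num_samples: int) -> list[int]:
    """Pick num_samples frame indices at (near-)equal intervals."""
    if num_samples >= total_frames:
        return list(range(total_frames))
    step = (total_frames - 1) / (num_samples - 1)
    return [round(i * step) for i in range(num_samples)]

# Example: sample 5 frames out of a 100-frame video.
print(evenly_spaced_indices(100, 5))
```

Sampling by index rather than by timestamp means the result is independent of the video's FPS, which matches the point above: the frame rate itself matters less than how much viewpoint variety those sampled frames contain.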

mengzhaoZ avatar mengzhaoZ commented on July 28, 2024

Thank you for your answer. I see what you mean, and I think you're right. My data is different from the face data: it is not a relatively fixed rotating object, but video taken inside the abdominal cavity. Maybe my video is not suitable for this method, so I will look for another solution. Thanks again for your reply!

from hypernerf.

wangrun20 avatar wangrun20 commented on July 28, 2024

> Thank you for your answer. I see what you mean, and I think you're right. My data is different from the face data: it is not a relatively fixed rotating object, but video taken inside the abdominal cavity. Maybe my video is not suitable for this method, so I will look for another solution. Thanks again for your reply!

You're welcome. Wish you success!

from hypernerf.

Zvyozdo4ka avatar Zvyozdo4ka commented on July 28, 2024

> In my series of experiments, the face selfie videos perform the best and most stable.

I tried both selfie and back-camera experiments. In both cases the performance is disappointing.

Is there a specific requirement for how to take the video?

output_hypernerf4_v1.mp4
output.mp4

from hypernerf.

wangrun20 avatar wangrun20 commented on July 28, 2024

> I tried both selfie and back-camera experiments. In both cases the performance is disappointing.

I said "In my series of experiments, the face selfie videos perform the best and most stable". What I meant is that the results for portrait selfies are slightly better than for object motion (like pouring water or moving things), not that the portrait selfies are extremely realistic. My reconstruction quality for portrait selfies is about the same as in your two videos. I think that even now, reconstructing dynamic 3D structures from a monocular camera is still very challenging, and I suspect that the authors of the HyperNeRF paper used some tricks that were not explicitly mentioned in the paper.

I think it is still very difficult to achieve the performance claimed by the authors on our own datasets; it requires some tricks that I am not aware of.

from hypernerf.

Zvyozdo4ka avatar Zvyozdo4ka commented on July 28, 2024

@wangrun20 thank you very much for your explanation!

from hypernerf.

doubi-killer avatar doubi-killer commented on July 28, 2024

> In my series of experiments, the face selfie videos perform the best and most stable.
>
> I tried both selfie and back-camera experiments. In both cases the performance is disappointing.
>
> Is there a specific requirement for how to take the video?
>
> output_hypernerf4_v1.mp4
> output.mp4

How can I prepare a face selfie dataset? COLMAP didn't run properly for me.

from hypernerf.

Zvyozdo4ka avatar Zvyozdo4ka commented on July 28, 2024

> How can I prepare a face selfie dataset? COLMAP didn't run properly for me.

I tried their suggested Colab notebook for preprocessing the video, but there were some issues running it, so I changed it a little to make it work. Here is their notebook:
https://colab.research.google.com/github/google/nerfies/blob/main/notebooks/Nerfies_Capture_Processing.ipynb

Here is how I did it. It is almost the same, but as I remember there was a problem with COLMAP:
https://colab.research.google.com/drive/1wZolsOwdYyo1xVli1cCS1bkC0JeII-CF?usp=sharing

As I remember, the reconstruction step must use --output_path instead of --export_path.

from hypernerf.
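A minimal sketch of that fix, assuming you drive COLMAP from Python (the helper name and placeholder paths are mine; the rename of `--export_path` to `--output_path` in newer COLMAP releases of `colmap mapper` is the breakage being described):

```python
# Hypothetical helper: build the COLMAP sparse-reconstruction command,
# picking the output flag that matches the installed COLMAP version.
# Newer COLMAP releases renamed `--export_path` to `--output_path`,
# which is what broke the old notebook. With COLMAP installed, the
# result can be executed via subprocess.run(cmd, check=True).

def mapper_cmd(database_path: str, image_path: str, out_path: str,
               new_colmap: bool = True) -> list[str]:
    """Return the `colmap mapper` argument list for either flag name."""
    out_flag = "--output_path" if new_colmap else "--export_path"
    return [
        "colmap", "mapper",
        "--database_path", database_path,
        "--image_path", image_path,
        out_flag, out_path,
    ]

# Example with placeholder paths:
print(" ".join(mapper_cmd("colmap/database.db", "rgb/1x", "colmap/sparse")))
```

Building the command as a list (rather than one shell string) keeps paths with spaces safe and makes the flag swap a one-line change.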
