
metrical-tracker's People

Contributors

edhyah, zielon

metrical-tracker's Issues

how to get the best identity.npy

Thanks for your wonderful work. I have a question: how do I get the best identity.npy file from a single video?

Pyr Levels Question

If you change the pyr levels in the actor's YAML file, do they override the ones in config.py, or do you need to set both?
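
For reference, a minimal sketch of how a per-actor YAML typically overrides defaults in a yacs-style setup (an assumption about config.py; the key name and values below are illustrative, not the tracker's actual ones):

import yacs.config

cfg = yacs.config.CfgNode()
cfg.pyr_levels = [[1.0, 30], [0.5, 50]]   # hypothetical default in config.py

# merge_from_file overrides every key that also appears in the actor YAML;
# keys absent from the YAML keep their config.py defaults.
cfg.merge_from_file('./configs/actors/duda.yml')
print(cfg.pyr_levels)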

UV_texture

I want to know how to obtain the UV texture. I can only get the rendered image, and I don't know where to find the colored vertices. I am a beginner and look forward to your reply. Thank you.

About output jaw

I have read the MICA paper, but I am somewhat unclear about the meaning of the jaw coefficients in the output. Can I interpret them as representing lip movements? Are the exp coefficients standard FLAME coefficients? If so, do they not include lip/jaw motion?
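
For context, a conceptual sketch of how the jaw typically enters a FLAME pose vector (this assumes the layout of the standard FLAME PyTorch layer, which may differ from the code here): the jaw is a 3-DoF axis-angle rotation of the jaw joint, so it encodes rigid jaw opening rather than detailed lip shapes, while finer lip motion comes from the expression coefficients.

import torch

# Assumed layout (standard FLAME PyTorch layer): pose has 6 values,
# 3 for global head rotation + 3 for the jaw joint, both as axis-angle.
pose = torch.zeros(1, 6)
pose[:, 3] = 0.3           # rotate the jaw joint ~0.3 rad about its x-axis (mouth opens)
expr = torch.zeros(1, 50)  # expression blendshape coefficients are a separate vector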

zipfile.BadZipFile: Bad CRC-32 for file 'tex_dir.npy'

When I try to run python tracker.py --cfg ./configs/actors/duda.yml, I get the following error:
Traceback (most recent call last):
File "/root/autodl-fs/tracker_hjh/tracker.py", line 764, in <module>
ff = Tracker(config, device='cuda:0')
File "/root/autodl-fs/tracker_hjh/tracker.py", line 104, in __init__
self.setup_renderer()
File "/root/autodl-fs/tracker_hjh/tracker.py", line 121, in setup_renderer
self.flametex = FLAMETex(self.config).to(self.device)
File "/root/autodl-fs/tracker_hjh/flame/FLAME.py", line 294, in __init__
texture_basis = tex_space[pc_key].reshape(-1, n_pc)
File "/root/miniconda3/envs/tracker/lib/python3.9/site-packages/numpy/lib/npyio.py", line 253, in __getitem__
return format.read_array(bytes,
File "/root/miniconda3/envs/tracker/lib/python3.9/site-packages/numpy/lib/format.py", line 812, in read_array
data = _read_bytes(fp, read_size, "array data")
File "/root/miniconda3/envs/tracker/lib/python3.9/site-packages/numpy/lib/format.py", line 947, in _read_bytes
r = fp.read(size - len(data))
File "/root/miniconda3/envs/tracker/lib/python3.9/zipfile.py", line 924, in read
data = self._read1(n)
File "/root/miniconda3/envs/tracker/lib/python3.9/zipfile.py", line 1014, in _read1
self._update_crc(data)
File "/root/miniconda3/envs/tracker/lib/python3.9/zipfile.py", line 942, in _update_crc
raise BadZipFile("Bad CRC-32 for file %r" % self.name)
zipfile.BadZipFile: Bad CRC-32 for file 'tex_dir.npy'
How can I fix this error? And what is 'tex_dir.npy'? Thanks!
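
For what it's worth, the texture-space file that FLAMETex loads is a NumPy .npz, i.e. a zip archive, and tex_dir.npy is one of the arrays stored inside it (the array FLAME.py reads as texture_basis). A bad CRC usually means the download is truncated or corrupted, so re-downloading the file normally fixes it. A minimal integrity check, with an assumed path (use whatever tex_path your config points to):

import zipfile

path = 'data/FLAME_texture.npz'   # assumed path to the texture-space archive
with zipfile.ZipFile(path) as zf:
    print('first corrupted member:', zf.testzip())   # None means the archive is intact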

Speed > Quality Tuning

Aside from the pyr levels, are there other settings we can tweak to squeeze out more performance at the sacrifice of a perfect track?

BFM License

Is there any way to strip out any use of the Basel Face Model from Metrical Tracker and MICA? The cost of the BFM license is just too much, and I’d love to be able to use metrical tracker commercially.

kpt_dense

lmk_path_dense = imagepath.replace('images', 'kpt_dense').replace('.png', '.npy').replace('.jpg', '.npy'). How do I get this data file?

about running tracker.py

Hello, thanks for your awesome work.

I'm having trouble running tracker.py.

def parse_batch(self, batch):
    images = batch['image']
    landmarks = batch['lmk']
    landmarks_dense = batch['dense_lmk']

    lmk_dense_mask = ~(landmarks_dense.sum(2, keepdim=True) == 0)
    lmk_mask = ~(landmarks.sum(2, keepdim=True) == 0)

    left_iris = landmarks_dense[:, left_iris_mp, :]
    right_iris = landmarks_dense[:, right_iris_mp, :]
    mask_left_iris = lmk_dense_mask[:, left_iris_mp, :]
    mask_right_iris = lmk_dense_mask[:, right_iris_mp, :]

    batch['left_iris'] = left_iris
    batch['right_iris'] = right_iris
    batch['mask_left_iris'] = mask_left_iris
    batch['mask_right_iris'] = mask_right_iris

    return images, landmarks, landmarks_dense[:, mediapipe_idx, :2], lmk_dense_mask[:, mediapipe_idx, :], lmk_mask

landmarks_dense.shape = [1, 68, 2], which looks like 68 landmarks.
However,
left_iris_mp = [468, 469, 470, 471, 472]
right_iris_mp = [473, 474, 475, 476, 477] and
mediapipe_idx = [276, 282, ... ]

left_iris_mp, right_iris_mp, and mediapipe_idx exceed the indices available in landmarks_dense.
Am I doing something wrong? Please give me a suggestion if possible.

Thank you in advance.
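
For comparison, the dense landmarks this code indexes come from MediaPipe Face Mesh with iris refinement, which yields 478 points per face (468 mesh points plus 10 iris points), so left_iris_mp/right_iris_mp at indices 468-477 are valid there; a shape of [1, 68, 2] suggests the kpt_dense files may actually contain the sparse 68-point landmarks. A minimal sketch of producing 478-point dense landmarks with MediaPipe (an illustration under that assumption, not the repo's own preprocessing script; check whether the tracker expects pixel or normalized coordinates):

import cv2
import mediapipe as mp
import numpy as np

img = cv2.imread('input/duda/images/00001.png')   # hypothetical frame path
h, w = img.shape[:2]

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                     refine_landmarks=True,   # adds the 10 iris points
                                     max_num_faces=1) as fm:
    res = fm.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

lmk = np.array([[p.x * w, p.y * h] for p in res.multi_face_landmarks[0].landmark])
print(lmk.shape)   # (478, 2); indices 468-477 are the iris landmarks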

About flame parameter

Can we obtain the rotation and translation of the FLAME model, as well as the parameters of the neck?

Is optimize color function necessary?

Thanks for your great work!
I just want to try a rough tracking, but optimizing the color takes too long. Is this function necessary? Can I remove it when running tracker.py?

EyeLid BlendShapes

Thanks for sharing this code :) What kind of movement do the eyelid blendshapes encode? Is it eye closure for each eye?

Vertices coordinate

How can I get the vertex coordinates of the FLAME head model?
The vertices in tracker.py#L482 are normalized and cannot be mapped directly to the image.
I scaled these vertices by multiplying them by 512, which gives a result like this:
[image: test]

How can I get vertex coordinates aligned with the face, like this:
[image: 0002]
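
If the vertices at that line are in normalized device coordinates in [-1, 1] (an assumption based on the description), mapping them to pixels needs a shift as well as a scale, so multiplying by 512 alone only stretches them around the origin. A minimal sketch:

import numpy as np

def ndc_to_pixels(verts_ndc, width=512, height=512):
    # Assumption: x, y lie in [-1, 1]; depending on the renderer convention
    # you may additionally need to flip x and/or y.
    x = (verts_ndc[:, 0] + 1.0) * 0.5 * width
    y = (1.0 - (verts_ndc[:, 1] + 1.0) * 0.5) * height   # image origin is top-left
    return np.stack([x, y], axis=1)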

about pose

How can I control the pose so that the head is always in the center, and then revert it back to the original video pose?

cuda 11 support (?) _ZNK2at6Tensor7is_cudaEv

(tracker) yc@razer3080ti:~/testing3dai/mica/metrical-tracker/metrical-tracker$ python tracker.py --cfg ./configs/actors/duda.yml
Traceback (most recent call last):
File "tracker.py", line 31, in <module>
from pytorch3d.io import load_obj
File "/home/yc/anaconda3/lib/python3.7/site-packages/pytorch3d/io/__init__.py", line 4, in <module>
from .obj_io import load_obj, load_objs_as_meshes, save_obj
File "/home/yc/anaconda3/lib/python3.7/site-packages/pytorch3d/io/obj_io.py", line 14, in <module>
from pytorch3d.renderer import TexturesAtlas, TexturesUV
File "/home/yc/anaconda3/lib/python3.7/site-packages/pytorch3d/renderer/__init__.py", line 3, in <module>
from .blending import (
File "/home/yc/anaconda3/lib/python3.7/site-packages/pytorch3d/renderer/blending.py", line 9, in <module>
from pytorch3d import _C
ImportError: /home/yc/anaconda3/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK2at6Tensor7is_cudaEv
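
An undefined-symbol error from pytorch3d's _C extension usually means pytorch3d was built against a different PyTorch/CUDA combination than the one installed. A quick way to see what the environment actually has before installing a matching pytorch3d build (or compiling it from source):

import torch
print(torch.__version__)          # PyTorch version in this environment
print(torch.version.cuda)         # CUDA version PyTorch was built with
print(torch.cuda.is_available())

import pytorch3d
print(pytorch3d.__version__)      # must come from a build matching the versions above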

Multiple GPUs?

Would metrical tracker go faster if I had multiple GPUs installed?

Long time to generate for 1000 frames

optimize color is taking too much time; it took about 1 hour for a 5-second video (~100 frames) on a 6GB GPU.

I am planning to use the tracker for the INSTA repo. Generating 1000 frames would take me about 10 hours.

Any ideas on how to improve speed? @Zielon

from which frame to compute identity.npy

"identity.npy" needs to be computed with MICA, but MICA produces as many different "identity.npy" files as there are frames. Which one should I choose for the tracker? Thanks!

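One common workaround (an assumption, not an official recommendation) is to either pick a frontal, neutral frame or average the per-frame identity codes, since the shape code should be constant across a video. A minimal averaging sketch with hypothetical paths:

import numpy as np
from glob import glob

# Hypothetical layout: one identity.npy per frame in MICA's output folders.
codes = [np.load(p) for p in sorted(glob('mica_output/*/identity.npy'))]
identity = np.mean(np.stack(codes, axis=0), axis=0)
np.save('input/duda/identity.npy', identity)   # wherever the actor config expects it
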
UnpicklingError: the string opcode argument must be quoted

I'm on Windows 11, running Ubuntu on WSL. I installed using the shell script, then manually downloaded and placed the buffalo and antelope zip files.

ERROR MESSAGE

(MICA) eric@CPFX02:/mnt/c/Users/Eric/Documents/ML/MICA$ python demo.py
Traceback (most recent call last):
File "/mnt/c/Users/Eric/Documents/ML/MICA/demo.py", line 156, in <module>
main(cfg, args)
File "/mnt/c/Users/Eric/Documents/ML/MICA/demo.py", line 109, in main
mica = util.find_model_using_name(model_dir='micalib.models', model_name=cfg.model.name)(cfg, device)
File "/mnt/c/Users/Eric/Documents/ML/MICA/micalib/models/mica.py", line 35, in __init__
super(MICA, self).__init__(config, device, tag)
File "/mnt/c/Users/Eric/Documents/ML/MICA/micalib/base_model.py", line 40, in __init__
self.masking = Masking(config)
File "/mnt/c/Users/Eric/Documents/ML/MICA/utils/masking.py", line 47, in __init__
ss = pickle.load(f, encoding="latin1")
_pickle.UnpicklingError: the STRING opcode argument must be quoted
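
That pickle error typically means the file being opened is not the real binary pickle, for example a Git LFS pointer text file that was checked out instead of the actual data, or a binary file mangled by line-ending conversion. A quick check (the path is an assumption; use whatever file utils/masking.py opens on your machine):

path = 'data/FLAME_masks/FLAME_masks.pkl'   # assumed path; adjust to your setup
with open(path, 'rb') as f:
    head = f.read(80)
print(head)
# If it starts with b'version https://git-lfs.github.com/spec/v1', the real file
# was never fetched; run `git lfs pull` or download it manually.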

Question about Dense Landmark Indices in Metrical-Tracker with BFM Model

I'm currently working on adapting Metrical-Tracker for use with the BFM (Basel Face Model) and have a question about obtaining dense landmark indices. In tracker.py, I noticed the mediapipe_landmark_embedding.npz file is used to get the indices of the landmarks corresponding to the FLAME model.

Specifically, I'm trying to understand the steps involved in using MediaPipe to determine dense landmark indices on FLAME when integrating the tracker with the BFM model. During optimization, I'm having difficulty obtaining the vertex indices for lmkMP (dense landmarks) on the BFM mesh.

About Key Frame Selection and Identity Selection

Hi, thank you for your amazing work!
I have some questions, described concretely below.

  • Key frame selection: What is the importance of choosing a neutral face? Does the face have to keep both the pose and the expression neutral, or just the pose (as mentioned in readme.md)? What if I choose a frame with large expression or pose variation, e.g. laughing, as the key frame? Does it affect the accuracy of the extrinsics estimation or something else?
  • Is it important to choose a neutral face when extracting identity.npy with MICA?

Thanks in advance, looking forward to your reply !

Image num not_equal_to checkpoint/*.frame

Hello, thank you for sharing this code. I have a question: I found that the number of source images is not equal to the number of checkpoint/*.frame files. For example, the duda dataset has 255 images, but the output has only 254 .frame files. So which image corresponds to which .frame? Thank you.

rotation matrix

Why are some values in the returned camera parameters' rotation greater than 1?

{'R': array([[-1.4803067, -0.02659582, 0.13538209, -0.4449267, 1.4254882, -0.05706715]], dtype=float32), 't': array([[-0.00678921, 0.02268134, 1.114467]], dtype=float32), 'fl': array([[9.188737]], dtype=float32), 'pp': array([[-0.00433624, 0.02280937]], dtype=float32)}
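
The printed R holds six values per frame, which looks like the continuous 6D rotation representation rather than a 3x3 matrix or an axis-angle vector (an assumption based on its shape); its entries are not constrained to [-1, 1]. Under that assumption it can be converted to a proper rotation matrix, e.g. with pytorch3d:

import torch
from pytorch3d.transforms import rotation_6d_to_matrix

R6 = torch.tensor([[-1.4803067, -0.02659582, 0.13538209,
                    -0.4449267,  1.4254882, -0.05706715]])
R = rotation_6d_to_matrix(R6)   # (1, 3, 3) orthonormal rotation matrix
print(R)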

Missing landmark embedding & uv template files

Thanks for releasing the long-awaited face tracking.
I noticed the following files are missing:

'data/uv_template.obj'
'data/landmark_embedding.npy'
'data/uv_mask_eyes.png'

Can you kindly provide guidelines on how to generate or obtain these files?

Camera parameters

Dear author, I have three questions:
1. Are the returned camera parameters per-frame camera parameters?
2. What is the model obtained after training used for?
3. How can I obtain the corresponding camera parameters from photos?

Why apply gaussian blur to pyramid images?

I'm confused about the Gaussian blur applied in the get_gaussian_pyramid function; I would think adding blur only produces less accurate results. Could you please provide some explanation?
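
For context, a Gaussian pyramid blurs before downsampling precisely to remove high frequencies the coarser level cannot represent (anti-aliasing), and optimizing coarse-to-fine over such a pyramid smooths the energy landscape while the finest level still sees the (almost) unblurred image. A minimal sketch of the standard construction (not the repo's exact implementation):

import cv2

def gaussian_pyramid(img, levels=3):
    # Standard construction: Gaussian blur + 2x downsample at each level.
    pyr = [img]
    for _ in range(levels - 1):
        img = cv2.pyrDown(img)   # blur and halve the resolution in one step
        pyr.append(img)
    return pyr   # pyr[0] is full resolution, pyr[-1] the coarsest level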

lmk

Is lmk_path = imagepath.replace('images', 'kpt').replace('.png', '.npy').replace('.jpg', '.npy') the kpt7.npy file?
And is lmk_path_dense = imagepath.replace('images', 'kpt_dense').replace('.png', '.npy').replace('.jpg', '.npy') the kpt68.npy file?
