
multi-domain-learning-fas's People

Contributors

chelsea234

multi-domain-learning-fas's Issues

How to extract images from videos?

Hi,

thank you very much for your work and for releasing the code :) I have read the code carefully but still have some doubts. How do you prepare the datasets, and how can I extract frames from the videos?

Looking forward to your reply....
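In case it helps other readers, below is a minimal sketch of pulling frames out of a video with OpenCV. It is only an assumption about the preprocessing, not the authors' exact recipe; the sampling interval, output naming, and the example path are illustrative.

import os
import cv2

def extract_frames(video_path, out_dir, every_n=10):
    """Save every `every_n`-th frame of `video_path` as a PNG in `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of video
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"{saved:05d}.png"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Example (hypothetical path): extract_frames("videos/003-1-1-1-1.mov", "frames/003-1-1-1-1")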

construct the MD-FAS benchmark

When I constructed the FASMD dataset from the lists you provided (SiW E train and OULU E train), the E sub-dataset I obtained has 839 (from SiW) + 1620 (from OULU) videos, which is a lot more than the 1696 listed in the paper. Why is that?
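For anyone trying to reproduce the count, a quick sanity check is to load the two released lists and count unique entries, in case the same video appears in both. A rough sketch, with hypothetical file names:

def load_ids(path):
    # One video identifier per line is assumed for the released lists.
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

siw = load_ids("siw_E_train_list.txt")      # hypothetical file name
oulu = load_ids("oulu_E_train_list.txt")    # hypothetical file name
print(len(siw), len(oulu), len(siw | oulu))  # overlap would lower the union count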

Some questions about inference

Hi, in inference.py I see that you load all of the models into lists (model_list, model_p_list, model_d_list), but I couldn't find anywhere in inference.py where these lists are actually used. Could you clarify this point for me, please?

the requirements file is not clear

Hi,
Please add a requirements file that pins the versions of the libraries required for inference.
When I run the inference file I get this error:

face-alignment is not defined
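Until an official requirements file is added, one workaround is to capture the versions from an environment where inference.py runs and pin those. The package list below is an assumption based on the imports and errors visible in this repository (face-alignment, TensorFlow, OpenCV, NumPy):

from importlib.metadata import PackageNotFoundError, version

# Candidate packages are an assumption; extend the list as needed.
candidates = ["tensorflow", "face-alignment", "opencv-python", "numpy"]

for pkg in candidates:
    try:
        print(f"{pkg}=={version(pkg)}")   # paste this output into requirements.txt
    except PackageNotFoundError:
        print(f"# {pkg} is not installed in this environment")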

Help with downloading the dataset SIW

Hello,
I sent the request form and the signed DRA to ask for the download link of the SiW dataset a month ago, but I haven't received any reply so far. I also sent you an email a few days ago asking for help. If possible, could you check my email and help me with downloading the dataset?
Thank you very much.

How to use recon

Hi, I don't understand the purpose of the reconstruction (recon) of the liveness image. Can it be used to obtain some kind of numerical output that states whether an image is live or spoof?
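One possible (unconfirmed) way to get a number out of the reconstruction is to treat the magnitude of the estimated spoof trace as a score: for a live face the trace should be close to zero. A minimal sketch, with an illustrative threshold:

import numpy as np

def trace_score(trace):
    # `trace` is an H x W x C array holding the estimated spoof trace of one image.
    return float(np.mean(np.abs(trace)))

def is_live(trace, threshold=0.05):
    # The threshold is illustrative and should be tuned on a validation set.
    return trace_score(trace) < threshold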

Encounter "ZeroDivisionError: float division by zero" when running source_SiW_Mv2/preprocessing.py

Hello, first thank you for this great work. Below is the error I encounter when running source_SiW_Mv2/preprocessing.py:
[screenshot of the ZeroDivisionError traceback]
It seems that the "eye2eye_dis" of the current frame equals zero, which also makes "xr - xl" equal to zero. The x_scale then cannot be computed (it would be infinite). Does the same problem happen when you run this code? I wonder if it is OK to simply skip such frames. Hope to hear from you soon. Thank you very much.
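If skipping such frames turns out to be acceptable, a small guard around the scale computation avoids the division by zero. This is a sketch of the guard only, not the repository's actual preprocessing code:

EPS = 1e-6

def safe_scale(xr, xl, target_width=256):
    eye2eye_dis = xr - xl
    if abs(eye2eye_dis) < EPS:
        return None                 # degenerate landmarks: caller should skip this frame
    return target_width / eye2eye_dis

# In the frame loop (pseudocode of the guard only):
# scale = safe_scale(xr, xl)
# if scale is None:
#     continue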

running inference.py gives an error

Hi @CHELSEA234

When I run the inference.py file using this command:

python3 inference.py --cuda=0 --pro=1 --dir=./demo/live/ --overwrite --weight_dir=./resources/save_model_siwmv2_pro_1_unknown_Ob

it gives this error:

File "inference_ed.py", line 194, in test_step
    img, img_name = dataset_inference.nextit()
  File "project/Multi-domain-learning-FAS/source_SiW_Mv2/dataset.py", line 121, in nextit
    return next(self.feed)
  File "torch_env/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 816, in __next__
    raise StopIteration
StopIteration

How can I solve this issue?
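One guess (not confirmed) is that the tf.data iterator raises StopIteration because the input directory yields no, or no more, images. A quick check of the directory, plus a defensive loop around nextit(), looks roughly like this; the glob pattern is an assumption about how dataset.py collects files:

import glob

img_paths = glob.glob("./demo/live/*.png") + glob.glob("./demo/live/*.jpg")
print(f"{len(img_paths)} images found")   # 0 here would explain the StopIteration

# Defensive iteration instead of calling nextit() a fixed number of times:
# while True:
#     try:
#         img, img_name = dataset_inference.nextit()
#     except StopIteration:
#         break
#     ...run the model on (img, img_name)...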

About the metrics

In the paper you use TPR@FPR=0.5% as a metric, but this metric is not included in metric.py (I only find TPR@FPR=0.2%, 1%, and 5%). I am also curious about test_architecture.py, because it cannot work properly since the function test_update is never used.
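The missing operating point can be computed from raw scores in a few lines. This sketch assumes spoof is the positive class (swap the two arguments if the repository treats live as positive), and the existing TPR@FPR code in metric.py could presumably be adapted the same way:

import numpy as np

def tpr_at_fpr(scores_pos, scores_neg, fpr_target=0.005):
    # Pick the threshold so that roughly `fpr_target` of the negatives score above it,
    # then report the recall of the positives at that threshold.
    thr = np.quantile(np.asarray(scores_neg), 1.0 - fpr_target)
    return float(np.mean(np.asarray(scores_pos) >= thr))

# Example: tpr_at_fpr(spoof_scores, live_scores, fpr_target=0.005)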

How to get live/spoof?

Hi, I see the model outputs depth, region, content, and additive traces. How can these be transformed into a live/spoof decision to replicate the results in your paper?
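A hedged sketch of how such outputs are often collapsed into a decision in FAS work (the weighting and threshold here are assumptions, not the authors' scoring rule): live faces tend to give a larger predicted depth and smaller spoof traces, so the two can be combined into a single score.

import numpy as np

def live_score(depth_map, traces, alpha=1.0, beta=1.0):
    depth_term = np.mean(depth_map)        # typically higher for live faces
    trace_term = np.mean(np.abs(traces))   # typically higher for spoof faces
    return alpha * depth_term - beta * trace_term

def predict(depth_map, traces, threshold=0.5):
    # The threshold is illustrative and should be calibrated on validation data.
    return "live" if live_score(depth_map, traces) > threshold else "spoof"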

about data preprocessing

Thanks for sharing the code. I'm new to FAS and would be very grateful if you could give me some data preprocessing details.

  1. According to the paper, the FAS model and framework are frame-based, right? In preprocessing, is it necessary to extract frames from the video and then crop out the face from each frame? How many frames are extracted from each video?
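For what it is worth, the usual frame-based pipeline is: sample frames from each video, detect the face in each frame, and save an aligned crop. Below is a stand-in sketch of the cropping step using OpenCV's Haar cascade rather than the face-alignment landmarks this repository appears to use; it only illustrates the shape of the pipeline, not the authors' exact preprocessing.

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(frame, margin=0.2, size=256):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                      # no face found: skip the frame
    x, y, w, h = max(faces, key=lambda b: b[2] * b[3])   # keep the largest detection
    m = int(margin * max(w, h))
    crop = frame[max(0, y - m):y + h + m, max(0, x - m):x + w + m]
    return cv2.resize(crop, (size, size))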
