SSAN's People

Contributors: wangzhuo2019
SSAN's Issues

How to prepare the dataset?

Nice work!!!
I have read your code carefully, but I still have some doubts. How do you prepare the datasets (OULU, CASIA, MSU-MFSD, REPLAY)? Can I just put the video files in the folder? And could you provide the 'train_list_video.txt'?

Looking forward to your reply.

Evaluation on cross-domain intra-type

Hi, thanks for the great work!
Just want to make sure about the evaluation setting. The four mentioned datasets each have their own train/test split.
For the leave-one-out evaluation in your case, do you use the train splits of three datasets for training and the train split of the remaining one for testing? Or do you use the test split of the remaining one for testing?
Thanks

Validation and test sets seem to be the same?

Dear Authors,
Thanks for the great work.

I was going through your code to understand how you have used each of the four datasets for cross-dataset training and testing.

Observations:

  1. As per the code in the data_merge file, you are loading the target dataset as your test set (which seems fine).

  2. However, you seem to be using the target dataset as your validation set: you compute the HTER and AUC scores on it after every epoch and choose the epoch with the best score on the target dataset (see the training file for reference, and the sketch just below this list). Example: take the OCIM protocol. For every epoch, you seem to be training on OCI, testing on M, and choosing the epoch with the best score on M.
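A minimal sketch of the selection loop described in observation 2, with illustrative placeholder names (train_one_epoch, evaluate, and the loaders are stand-ins, not the repo's actual API):

def train_one_epoch(model, source_loader):
    pass  # placeholder for the actual optimization step

def evaluate(model, target_loader):
    return 0.5, 0.5  # placeholder (HTER, AUC) measured on the target set M

model, oci_loader, m_loader = None, None, None  # placeholders
best = {"hter": float("inf"), "auc": 0.0, "epoch": -1}
for epoch in range(300):
    train_one_epoch(model, oci_loader)       # train on O + C + I
    hter, auc = evaluate(model, m_loader)    # evaluate directly on the target M
    if hter < best["hter"]:                  # model selection also uses M
        best = {"hter": hter, "auc": auc, "epoch": epoch}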

Questions:

  • Is this approach valid? Shouldn't there be a validation set combining O-C-I that is used for evaluation during training, with the chosen model then tested on M to compute the final HTER and AUC scores?

Could you kindly clarify this?

Thanks

The implementation of GRL may not be correct?

The repo currently defines GRL as a plain module:

class GRL(nn.Module):

To override the backward pass, GRL needs to inherit from torch.autograd.Function, as in this example from the PyTorch docs:

https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function

A simple implementation might be:

from typing import Any

import torch
from torch.autograd import Function

class GRL(Function):
    """Gradient reversal layer: identity in the forward pass, negated gradient in the backward pass."""

    @staticmethod
    def forward(ctx: Any, input: torch.Tensor) -> torch.Tensor:
        return input * 1.0  # identity (the multiply avoids returning the input tensor itself)

    @staticmethod
    def backward(ctx: Any, grad_output: torch.Tensor) -> torch.Tensor:
        return grad_output.neg()  # reverse the gradient
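Note that a torch.autograd.Function is invoked via .apply rather than being instantiated. A minimal sketch of a module wrapper around it, so it can be dropped into a model (the GradientReversalLayer name is illustrative, not from the repo):

import torch
import torch.nn as nn

class GradientReversalLayer(nn.Module):
    # Thin nn.Module wrapper so the Function above can sit inside a model.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return GRL.apply(x)

# Usage: the domain discriminator then receives features whose gradients
# are reversed on the way back into the feature extractor, e.g.
# domain_logits = discriminator(GradientReversalLayer()(features))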

contrast_loss issue

Is it normal for the contrast loss to be negative? Even when it is negative, the loss can still be reduced further.
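One plausible explanation (an assumption about the implementation, not a confirmed answer): if the contrast term is a signed cosine similarity, its range is [-1, 1], so negative values are normal and minimizing it simply drives it toward -1. A minimal sketch:

import torch
import torch.nn.functional as F

def contrast_loss(f1: torch.Tensor, f2: torch.Tensor, same: bool) -> torch.Tensor:
    # Cosine similarity lies in [-1, 1], so this loss is bounded below by -1.
    sim = F.cosine_similarity(f1, f2, dim=-1).mean()
    return -sim if same else sim  # pull same-class pairs together, push others apart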

SCL

"To make a comparison between them, the experiment of w/ SCL is conducted by implementing contrastive learning on selfassembly
directly."
这个实验具体是想证明什么,能详细说一下吗?谢谢

about the data preprocess of oulu

Hi, according to the code and the paper, I have a question about the OULU-NPU dataset preprocessing: it seems that the crop_face_from_scene function only operates on the full image and never actually crops the face from the image?

https://github.com/wangzhuo2019/SSAN/blob/main/datasets/Load_OULUNPU_train.py#L14

import math

def crop_face_from_scene(image, scale):
    # The "face" box is initialized to the whole image, so no detected
    # face location is ever used; only a scaled central region is taken.
    y1, x1, w, h = 0, 0, image.shape[1], image.shape[0]
    y2 = y1 + w
    x2 = x1 + h
    y_mid = (y1 + y2) / 2.0
    x_mid = (x1 + x2) / 2.0
    w_scale = scale / 1.5 * w
    h_scale = scale / 1.5 * h
    h_img, w_img = image.shape[0], image.shape[1]
    # Re-center a (w_scale x h_scale) window on the image midpoint.
    y1 = y_mid - w_scale / 2.0
    x1 = x_mid - h_scale / 2.0
    y2 = y_mid + w_scale / 2.0
    x2 = x_mid + h_scale / 2.0
    # Clamp to the image bounds.
    y1 = max(math.floor(y1), 0)
    x1 = max(math.floor(x1), 0)
    y2 = min(math.floor(y2), w_img)
    x2 = min(math.floor(x2), h_img)
    region = image[x1:x2, y1:y2]
    return region
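For contrast, many FAS data loaders pass a detected face box into this kind of function instead of using the whole frame. A minimal sketch of that variant (the bounding-box file and its y1, x1, w, h line format are assumptions for illustration, not confirmed for this repo):

import math

def crop_face_from_scene_with_bbox(image, bbox_path, scale):
    # Read a detected face box (y1, x1, w, h) produced by a face detector.
    with open(bbox_path, 'r') as f:
        y1, x1, w, h = [float(v) for v in f.readline().split()]
    y2, x2 = y1 + w, x1 + h
    y_mid, x_mid = (y1 + y2) / 2.0, (x1 + x2) / 2.0
    w_scale, h_scale = scale * w, scale * h
    h_img, w_img = image.shape[0], image.shape[1]
    # Expand around the detected box and clamp to the image bounds.
    y1 = max(math.floor(y_mid - w_scale / 2.0), 0)
    x1 = max(math.floor(x_mid - h_scale / 2.0), 0)
    y2 = min(math.floor(y_mid + w_scale / 2.0), w_img)
    x2 = min(math.floor(x_mid + h_scale / 2.0), h_img)
    return image[x1:x2, y1:y2]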

Request for training and testing logs

Thank you for open-sourcing the code.
It would be great if you could share the training and testing logs so that we can compare our implementation with yours.

Could you share the pre-trained weights necessary for successfully running the inference process of your project?

Thank you for open-sourcing this wonderful work. We are researchers from different fields and would like to leverage your work for our own research. However, we are not well-versed in FAS, so it would be quite challenging for us to obtain the datasets and train the model from scratch.

We are not interested in obtaining all the pre-trained weights from the paper. Our goal is simply to acquire the specific weights necessary for successfully running the inference process of your project.

Could you share the pre-trained weights necessary for successfully running the inference process of your project?

Thank you very much.

A question about the loss after convergence

epoch:305, mini-batch:1800, lr=0.0001, binary_loss=0.0015, constra_loss=-0.9984, adv_loss=0.0002, Loss=-0.9967
The model's loss has basically converged, but why is adv_loss close to 0? My understanding: an adv_loss near 0 means the domain discriminator can still classify the domain labels of the content features, whereas we expect to extract domain-invariant features that confuse the discriminator, so adv_loss should not be 0. I don't know where the problem is. Any advice would be much appreciated, thank you.

Datasets sharing

Hi,
Thanks for sharing your great work!
Do you happen to have the datasets the authors used in the paper, particularly OULU-NPU and Idiap Replay-Attack, since the owners of these datasets may no longer maintain their websites? It would be nice if you could share them!

Thanks

I guess there are some mistakes in the pub_mod ResnetAdaINBlock code

[screenshot of the ResnetAdaINBlock forward function]
I think the forward function has some mistakes: the inputs to the norm1, relu1, conv2, and norm2 layers should be the variable out rather than x. Did you notice this point in your experiments?
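For reference, a minimal self-contained sketch of the corrected forward pass being described (the AdaIN signature and layer shapes here are illustrative assumptions, not the repo's exact code):

import torch
import torch.nn as nn

class AdaIN(nn.Module):
    # Adaptive instance normalization: normalize per channel, then apply a
    # style-derived scale (gamma) and shift (beta), each shaped (N, C, 1, 1).
    def forward(self, x, gamma, beta):
        mean = x.mean(dim=(2, 3), keepdim=True)
        std = x.std(dim=(2, 3), keepdim=True) + 1e-5
        return gamma * (x - mean) / std + beta

class ResnetAdaINBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm1 = AdaIN()
        self.relu1 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm2 = AdaIN()

    def forward(self, x, gamma, beta):
        out = self.conv1(x)
        out = self.norm1(out, gamma, beta)  # feed `out`, not `x`
        out = self.relu1(out)
        out = self.conv2(out)
        out = self.norm2(out, gamma, beta)  # feed `out`, not `x`
        return out + x  # residual connection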
