celeba-spoof's Issues

Per-attack-type sample counts don't match the paper

First of all, thank you for open-sourcing this. Using the provided JSON files, I counted the samples per attack type and got roughly 377k images in total, which falls well short of the ~440k described in the paper. So I'd like to ask: has the data not been fully released, or are the attribute-label JSON files incomplete? The total number of IDs I counted also doesn't reach 10,177.

Where to find the meaning of labels?

Hi, thank you for sharing such a great dataset.
However, I can't find the annotation mapping of label values.
I read the README file and learned the following:

  1. the first 40 values are about face attributes
  2. the last 3 values are about spoof type, illumination, and live/spoof

But I find there are 44 values in the label JSON, and this doesn't match the number described in the README.
Besides, I am not sure whether the order of the first 40 face attributes is the same as in the CelebA dataset.

One key-value pair is shown as follows. Could you provide the mapping of each value?

"Data/train/4980/spoof/000003.jpg": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 1, 2, 1]

Thanks for your help!
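A minimal sketch of slicing such a vector, assuming the first 40 entries follow the CelebA face-attribute order; the meaning of the extra value beyond the 43 documented in the README is exactly what this issue asks about and is not assumed here:

```python
# Hypothetical decoding of the 44-value label vector quoted above.
# Assumption: the first 40 entries are CelebA-style face-attribute flags;
# the remaining 4 are spoof-related (the README only documents 43 values,
# so the mapping of the tail is unconfirmed).
label = [0] * 40 + [5, 1, 2, 1]   # the example vector from this issue

face_attrs = label[:40]   # 40 face-attribute flags
tail = label[40:]         # spoof-related values (exact mapping unknown)

print(len(face_attrs), tail)  # 40 [5, 1, 2, 1]
```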

Webcam test classifies everything as live

Hi, I tested your released model with my laptop's webcam, and a printed black-and-white face is always classified as live. I use RetinaFace for face detection, convert the cropped face to RGB, resize it to (224, 224), and finally feed it to the network via ToTensor. Is there anything I'm doing wrong? @davidzhangyuanhan

metadata in json file doesn't match the description in README

Hi, one of the annotations in the test_label.json file is listed below, and it doesn't match the description in the README:
Data/train/7870/spoof/000097.jpg [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 4, 1, 1] (length of the list: 44)
README:
(screenshot)
What are the last four labels in the list?

Thanks

About the challenge solutions

Hi, the challenge ended a few days ago, and the top teams' scores are very high. I'm quite interested in their solutions. Is there any follow-up to the challenge? Will the participants' methods be published?

Question about validation data?

I have read the CelebA-Spoof paper, and it says the dataset is split into train, val, and test with an 8:1:1 ratio. But I only see train and test with an 8:1 ratio. So my question is: where is the validation data? Thank you so much.

A question about the crop area for AENET

Hello David,

I wanted to ask what the crop area should look like as AENet input. Should it be the direct output of the face detector, or are we supposed to crop so that the whole hair, neck area, etc. is included?
I ask because CelebA has face attributes such as "wearing necklace", "wearing necktie", and "gray hair", which cannot be seen if I crop only the face area.

What is the difference between the face region mask (paper cut) and the 3D mask?

Hi,

Most of the file names given in the JSON files are not found in the dataset: only around 161k of the listed file names exist, and the rest do not. Could you help me with this issue?

Secondly, what is the difference between a region mask and a 3D mask? Can we treat a face region mask as a 3D attack? (I just want to hear your opinion.)

Thank you.
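For anyone hitting the same problem, the set of missing paths can be listed with a small script. The annotation filename, layout, and contents below are made up for illustration; only the approach (check each JSON key against the files on disk) is the point:

```python
import json
import os
import tempfile

# Hypothetical helper: list annotated image paths that don't exist on disk.
def missing_files(json_path, root):
    with open(json_path) as f:
        keys = json.load(f)
    return [k for k in keys if not os.path.exists(os.path.join(root, k))]

# Tiny demo: one of two annotated images exists under a temporary root.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Data/train/1/live"))
open(os.path.join(root, "Data/train/1/live/000001.jpg"), "w").close()

ann = os.path.join(root, "labels.json")
with open(ann, "w") as f:
    json.dump({"Data/train/1/live/000001.jpg": [0],
               "Data/train/1/live/000002.jpg": [0]}, f)

print(missing_files(ann, root))  # ['Data/train/1/live/000002.jpg']
```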

About publishing the challenge solutions

Hi, the challenge ended a few days ago, and the top teams' scores are very high. I'm quite interested in their solutions. Is there any follow-up to the challenge? Will the participants' methods be published, and roughly when?

Dataset quality

Hello,

I'd like to thank you for the amazing work you have done. That said, we have some issues with your dataset:
1/ We have a tool that checks images at the pixel level to see whether they have been compressed more than once or recaptured. Many of the images you have shared as live are clearly not; for some of them you don't even need a tool to notice it.
2/ The CelebA dataset was obtained by web scraping, so there is no scientific basis for asserting that these images are live.
3/ Most of the spoof images are heavily distorted; the faces have exaggerated aspect ratios. This makes them very easy to spot.
4/ A good number of the live images are heavily edited (resized, histogram changes...). I understand this is not about image forensics, but I don't think these images should be part of the dataset.
5/ ...

We don't use your dataset in our product; we were just curious to check it. We have an online tester (https://www.doubango.org/webapps/face-liveness/) and an SDK in production. We have cleaned up your dataset and can share it here if you give us the authorization.

Question about the liveness data in CelebA_Spoof dataset.

The live face data in the original CelebA dataset is mostly crawled from the web, which inevitably includes many computer-enhanced (Photoshopped) pictures. Those faces don't look very real compared to pictures shot with a camera; visually, they look more like spoof faces than real faces.
So I doubt whether it is proper to treat such unreal faces as live in your dataset.
The pictures below are from the live data but look more like spoof faces visually.
505815

552109

508796

497919

Augmentations params

Hi!

Can you please share the augmentation parameters? According to the paper, color distortion was used. Do I understand correctly that the brightness of the image was not changed? Which color distortion method was used?

About the predict output

Of the two probabilities that predict outputs, which is the real (live) probability and which is the spoof probability? Thanks.
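For reference, a two-value output is typically turned into probabilities with a softmax. Which index corresponds to live and which to spoof is exactly what this issue asks and is not assumed here; this is just a sketch of the conversion:

```python
import math

# Softmax over two raw outputs (logits). The live/spoof index assignment
# is the open question in this issue, so no mapping is claimed here.
def softmax2(a, b):
    m = max(a, b)  # subtract the max for numerical stability
    ea, eb = math.exp(a - m), math.exp(b - m)
    return ea / (ea + eb), eb / (ea + eb)

p0, p1 = softmax2(2.0, 0.0)
print(round(p0, 3), round(p1, 3))  # 0.881 0.119
```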

Model output

Hello, can you please provide a description of the model output? I am trying to test with a single image, so it is kind of difficult.

Also, I saw that the output shape changes; can you please elaborate on why and how?

Thank you for the great work!

the length of attributes?

The paper says the length of the attribute vector is 43,
but in the dataset, the length of the attribute lists in label.json is 44.
(screenshot)

Can you share the training script?

Hi, thanks a lot for sharing your work, I really appreciate it.
Can you also share the training script that was used to achieve your results?
It's much appreciated.

Many labels are wrong

Many of the labels are incorrect: A4, PC, Phone, poster, and photo are all mixed up. Haven't you noticed this? How did you handle it?

Depth maps

Can you share the depth maps for the CelebA-Spoof dataset? We have been trying to calculate them using PRNet, but it is extremely slow and takes a lot of time. It would be a great help if you could share the depth maps, or faster code to calculate them from the images.
Thanks

Model generalization

Hi David,

Thanks for the effort and for releasing the code/model. I have been testing it with different sensors, and it seems that it does not generalize well to consumer laptop cameras and other, more sophisticated UVC cameras.

Any thoughts you could share on how I can debug the performance would be really great!

Best,
/M

How to extract files

Hi! Sorry, maybe it's a dumb question, but I'm having trouble.

After downloading the files from Google Drive, I run the command cat CelebA_Spoof.zip.0* > CelebA_Spoof.zip. Then I try to unzip the resulting file, but I get the following error:
(screenshot)
What am I doing wrong? What command should I execute to get a correct .zip file?
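For what it's worth, `cat` on a `*.0*` glob does reassemble numbered parts in the right order, which can be sanity-checked locally (the file names below are made up; if the concatenated archive still fails to unzip, one of the downloaded parts may be truncated or corrupted):

```shell
# Locally simulate the split archive: create a file, split it into
# numbered parts, reassemble with the same cat pattern, and compare.
printf 'dummy archive contents' > original.bin
split -b 8 -d original.bin part.zip.0   # part.zip.000, part.zip.001, ...
cat part.zip.0* > reassembled.bin
cmp original.bin reassembled.bin && echo "parts reassembled correctly"
```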

Annotation clarification

Some bbox annotations contain negative values, for instance 10, -5, 200, 170, .997. Could you please clarify why there are negative values in the box coordinates?
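One possible workaround, assuming the annotation format is x, y, width, height plus a confidence score as in the example above, is to clamp the box to the image bounds before cropping. The image dimensions below are hypothetical:

```python
# Clamp a possibly out-of-bounds box (x, y, w, h) to an img_w x img_h image.
def clamp_bbox(x, y, w, h, img_w, img_h):
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(img_w, x + w), min(img_h, y + h)
    return x0, y0, x1 - x0, y1 - y0

# The negative-y example from this issue, clamped to a hypothetical 178x218 image.
print(clamp_bbox(10, -5, 200, 170, 178, 218))  # (10, 0, 168, 165)
```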

Ground truth of the geometric information (depth and reflection)

  1. What is the ground truth for the geometric part (depth and reflection)? I couldn't find any information about these two.
  2. The inference code of AENet has been released; when will the training code also be released? Or how do you design the loss function used in training?

Can you share training hyperparameters?

I am trying to reimplement your paper and I'm having problems with convergence. The paper says you used lr=5e-3 and the SGD optimizer for 50 epochs. Did you change the learning rate after 50 epochs?
How many epochs did it take to converge?
Which batch size did you use?
I am training with semantic features and live/spoof classification (C, S; without the geometric part). I am using these loss weights: 1.0 for the classification task, lambda_f = 1.0 for Sf, lambda_s = 0.1 for Ss, lambda_i = 0.01 for Si.
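For concreteness, the weighted combination described above can be sketched as follows; the lambda values are the ones quoted in this issue (not official settings), and the per-task loss values are hypothetical:

```python
# Weighted multi-task loss: live/spoof classification plus the three
# semantic terms (Sf, Ss, Si). Lambda defaults are from this issue.
def total_loss(l_cls, l_sf, l_ss, l_si,
               lambda_f=1.0, lambda_s=0.1, lambda_i=0.01):
    return l_cls + lambda_f * l_sf + lambda_s * l_ss + lambda_i * l_si

# Hypothetical per-task loss values:
print(round(total_loss(0.5, 0.3, 0.2, 0.1), 3))  # 0.821
```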

Thanks for the paper contribution.

Prediction

How do I predict whether an image is real or fake?

Missing Attributes

@davidzhangyuanhan

Abstract: CelebA-Spoof is a large-scale face anti-spoofing dataset that has 625,537 images from 10,177 subjects, which includes 43 rich attributes on face, illumination, environment and spoof types. Live images are selected from the CelebA dataset.

But when I looked at all the JSON files in metas I found the following:

  • There are attributes for 531,511 images (protocol1 - train, test + protocol2 - train, test)
  • There are attributes for 561,575 images (intra_test - train, test)
  • There are attributes for 561,575 images (protocol1 - train, test + protocol2 - train, test + intra_test - train, test)

So essentially, out of 625,537 images, only 561,575 have those 43 attributes, i.e. 63,962 images are missing attributes.

Can you tell me if I'm missing something here, or do that many images actually not have attributes?
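The counting above presumably takes the union of keys across the metas JSON files, so an image listed in several protocols is counted once. A small sketch of that approach, with made-up file contents:

```python
import json
import os
import tempfile

# Count distinct annotated image paths across several annotation files.
def count_annotated_images(json_paths):
    keys = set()
    for path in json_paths:
        with open(path) as f:
            keys |= set(json.load(f))
    return len(keys)

# Tiny demo with two hypothetical annotation files that share one image.
tmp = tempfile.mkdtemp()
files = []
for name, data in [("a.json", {"img1.jpg": [0], "img2.jpg": [1]}),
                   ("b.json", {"img2.jpg": [1], "img3.jpg": [0]})]:
    path = os.path.join(tmp, name)
    with open(path, "w") as f:
        json.dump(data, f)
    files.append(path)

print(count_annotated_images(files))  # 3 (img2.jpg is counted once)
```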

Decode anti-spoof picture attribute

Hi,
In the JSON files, the picture attributes are encoded as categorical numbers.
Could you please provide a mapping for the spoof attributes (environment, illumination, and spoof type)?

APCER/BPCER definitions in paper

Hi!

I've read the appendix of the paper and the experiments section of OULU-NPU, which is referenced in the paper. However, in my understanding, the definitions of APCER in CelebA-Spoof and OULU-NPU are different (and that seems strange). Can you please clarify what exactly you mean by APCER and BPCER?

In OULU-NPU:

Screenshot from 2020-12-22 13-54-41

i.e. the maximum error rate across the different attack types.

In CelebA-Spoof:

Screenshot from 2020-12-22 13-56-53

Screenshot from 2020-12-22 13-57-51

which seems to be simply FAR and FRR.
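The gap between the two readings can be made concrete with toy numbers; the attack types, error rates, and counts below are hypothetical:

```python
# OULU-NPU-style APCER: the worst per-attack-type acceptance rate.
def apcer_max(per_attack_error):
    return max(per_attack_error.values())

# Plain FAR: spoof samples accepted as live, pooled over all attack types.
def apcer_pooled(per_attack_error, per_attack_count):
    accepted = sum(per_attack_error[a] * per_attack_count[a]
                   for a in per_attack_error)
    return accepted / sum(per_attack_count.values())

errors = {"print": 0.10, "replay": 0.02}  # fraction of each attack accepted
counts = {"print": 100, "replay": 100}

print(apcer_max(errors))                       # 0.1
print(round(apcer_pooled(errors, counts), 2))  # 0.06
```

With imbalanced attack types the two values diverge even further, which is why the OULU-NPU definition is the more conservative one.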

Thanks.

Unable to find test_label.json

Hi,
I have downloaded this repo and tried to run it, but I can't because I'm unable to find the test_label.json file in the repo.
Can you please help me find the test_label.json file?
Thanks.
