
emotion-fan's People

Contributors

open-debin


emotion-fan's Issues

Inference and visualization script

Hi,

I am wondering if there is a visualization script to produce the same output images as in your readme. If so, would you mind releasing the script or sending it via email?

Thanks.
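
Not the authors' script, but here is a minimal sketch of how such a figure might be reproduced, assuming you already have the per-frame attention weights (alphas) from the model's forward pass; everything below is illustrative, not taken from the repo:

import matplotlib.pyplot as plt

# frames: list of HxWx3 image arrays; alphas: per-frame attention weights
def show_attention(frames, alphas):
    fig, axes = plt.subplots(1, len(frames), figsize=(3 * len(frames), 3),
                             squeeze=False)
    for ax, img, a in zip(axes[0], frames, alphas):
        ax.imshow(img)
        ax.set_title('alpha = {:.2f}'.format(float(a)))
        ax.axis('off')
    plt.tight_layout()
    plt.show()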

Example Usage

Hi,

Do you have code showing an example of how to use this model to run detection on a short video or a set of images?

I've tried looking at the code, but I'm a little confused about how to use the model with AT_level to generate a softmax output that indicates the emotion.
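
In case it helps others, here is the kind of pipeline in question. The resnet18_at factory and the at_type names come from the repo's demo script, but the single-pass forward call and the pred_fc1 head are assumptions from reading these issues, so treat this as a sketch to verify against Model.py rather than working code:

import torch
from PIL import Image
from torchvision import transforms
from Code import networks  # import path assumed from the repo layout

model = networks.resnet18_at(at_type='self-attention')
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

frame_paths = ['face/out-001.jpg', 'face/out-002.jpg']  # aligned face crops
batch = torch.stack([preprocess(Image.open(p).convert('RGB'))
                     for p in frame_paths])              # (N, 3, 224, 224)

with torch.no_grad():
    # first pass: per-frame features f and self-attention weights alphas
    f, alphas = model(batch, phrase='eval')
    # aggregate frames into one video-level feature (attention-weighted mean)
    video_feat = (f * alphas).sum(0, keepdim=True) / alphas.sum()
    # softmax over the 7 emotion classes (head name pred_fc1 assumed)
    probs = torch.softmax(model.pred_fc1(video_feat), dim=1)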

Questions about validation process

Hi, I found that you split each video into 3 segments for training. However, for validation you only choose the first image of each video and validate on that. I'm just wondering why you don't also split the videos into 3 segments for validation. Thank you!
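
For reference, the 3-segment scheme described above is TSN-style sampling; a minimal reconstruction (an assumption, not the repo's actual loader) looks like this:

import random

def sample_three_segments(frame_paths, train=True):
    """Split a video's frame list into 3 equal segments, pick one per segment."""
    n = len(frame_paths)
    bounds = [0, n // 3, 2 * n // 3, n]
    picks = []
    for lo, hi in zip(bounds, bounds[1:]):
        if hi <= lo:                  # clip shorter than three frames
            picks.append(frame_paths[-1])
        elif train:                   # training: random frame per segment
            picks.append(frame_paths[random.randrange(lo, hi)])
        else:                         # validation: deterministic midpoint
            picks.append(frame_paths[(lo + hi) // 2])
    return picks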

Problem encountered with frame2face_afew.py

Hello. Following your commands, I first ran python video2frame_afew.py to extract the video frames, but running python frame2face_afew.py only produced empty folders under the face directory, with no aligned faces. I did not modify any code. What could be the reason?


FileNotFoundError: [Errno 2]

FileNotFoundError: [Errno 2] No such file or directory: './Data/Train/Fear/013736360.avi'. I noticed there is no folder named Train in the Data folder.

AFEW trainset accuracy calculation

Hello, I read your code and I'm confused about the accuracy-calculation process. It multiplies a matrix of shape (380, number of images), call it m1, by a matrix of shape (number of images, 7), m2, to get a (380, 7) matrix m3, which seems to be treated as the model's per-video prediction. I checked m1: it is a True/False index matrix that appears to encode which images belong to which video in the video list. I'm still confused about why the multiplication is needed and what it means.
Thanks very much.
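
If it helps, here is a toy version of what that product appears to compute (the semantics follow the description above, so treat them as assumed): m1[v, i] is 1 exactly when frame i belongs to video v, so m1 @ m2 sums each video's own 7-class frame scores into one row per video.

import numpy as np

num_videos, num_frames, num_classes = 3, 5, 7
m2 = np.random.rand(num_frames, num_classes)          # per-frame class scores
m1 = np.zeros((num_videos, num_frames))               # frame-to-video membership
m1[0, 0:2] = m1[1, 2:4] = m1[2, 4:5] = 1.0

m3 = m1 @ m2                                          # (3, 7) per-video scores
# dividing by each video's frame count turns the sum into a mean
video_pred = (m3 / m1.sum(axis=1, keepdims=True)).argmax(axis=1)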

CK+

Hello, I used the demo training script to train on CK+. The data were split into 10 folds, 9 for training and 1 for testing, and the accuracy was only 92.5% (with the learning rate set according to the paper, and the contempt data removed). Is this a problem with the demo training script?

Could you provide the final model for AFEW?

Hello, thanks for your excellent work! Could you share the final trained model for AFEW 8.0? We emailed [email protected] and the other addresses you mentioned in the issues, but did not get a reply.
We only want to use your final model to produce a demo for each video, and we will share the demo with you once it is finished. Could you help us?
Thanks so much!

NameError: name 'logger' is not defined

CUDA_VISIBLE_DEVICES=1 python baseline_ck_plus.py --fold 10
1.2.0
22-Apr-2021-10-17-35 --- args ---
baseline ck+ dataset, learning rate: 0.1
Traceback (most recent call last):
  File "baseline_ck_plus.py", line 155, in <module>
    main()
  File "baseline_ck_plus.py", line 45, in main
    train(train_loader, model, optimizer, epoch)
  File "baseline_ck_plus.py", line 91, in train
    logger.print('Epoch: [{:3d}][{:3d}/{:3d}]\t'
NameError: name 'logger' is not defined
Looking at line 22, I see: logger = util.Logger('./log/','baseline_ckplus')
What is this?
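
The likely cause (a guess, not a confirmed fix): that logger is created inside main(), so the name is not visible inside train(). Defining it at module scope, or passing it into train(), resolves the NameError. A sketch, assuming the repo's util.Logger:

import util

logger = util.Logger('./log/', 'baseline_ckplus')  # module scope: visible in train()

def train(train_loader, model, optimizer, epoch):
    logger.print('Epoch: [{:3d}]'.format(epoch))   # now resolves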

Pre-trained model provides random result

I downloaded AFEW and extracted the frames as below.

/Data/Val/Angry/000149120/out-001.jpg out-002.jpg.... out-069.jpg

But running CUDA_VISIBLE_DEVICES=0 python3 Demo_AFEW_Attention.py -e
gives dramatically different results on each run.
Should the AFEW files be preprocessed in some other way, or is a version mismatch the likely cause?

CK_face file not being generated

Hi

first of all, thanks for the code! I am going through it step by step.
Upon running frame2face_ckplus.py, no error comes up and it prints "all is over", indicating that the script executed properly; however, when I run the experiments the file is not found, because in fact it is never created in the folder after face alignment runs.
I also checked that the right dataset is present in the frame directory, as shown in the attachment.

Could you kindly advise?

Thanks for your help

[attachment: dataset directory screenshot]

face alignment problem

Hello. After I run python frame2face_ckplus.py, all of the S005 to S999 folders generated under data/face/ck_face are empty, so loading the data fails when I later run the baseline code. Could you explain what might be going on?

about self-attention and relation-attention

In your code Demo_AFEW_Attention.py, it seems that self-attention and relation-attention cannot be used simultaneously:

at_type = ['self-attention', 'relation-attention'][args.at_type]
print('The attention is ' + at_type)

This seems different from your paper:
[figure from the paper]

If so, why?

Pre-trained model loading problem

Hello! I have a few questions about the CK+ metrics:
1. I downloaded the pretrained models. When loading the FER+ model, the plain validation accuracy is very low, under 20%. Why is that?
2. I also ran your training code on top of the FER+ pretrained model, and the final validation accuracy is only 90.90%, not the 99.69% reported in the paper. Is there anything I should pay attention to during training?
3. I tried to load the MS1M pretrained model, but loading fails with: RuntimeError: Error(s) in loading state_dict for ResNet_AT:
Unexpected key(s) in state_dict: "feature.weight", "feature.bias".
How should I initialise the model?

Hello: I ran into the same problem. Could you share the pretrained model? Thanks a lot!

Hello:

Thanks for the reply! I have downloaded the pretrained model and can now load it normally.

Best wishes

---- Original message ----
| From | @.> |
| Date | 2023-08-17 10:55 |
| To | @.> |
| Cc | GaoJieXue @.>, Comment @.> |
| Subject | Re: [Open-Debin/Emotion-FAN] Pre-trained model loading problem (Issue #42) |

Hello, could you share the pretrained model you downloaded? I downloaded it from both Baidu Netdisk and OneDrive, but both archives show as corrupted when I try to extract them, and I don't know why.

@.*** I will reply to you

Originally posted by @GaoJieXue in #42 (comment)

Size of tensors doesn't match in CK+

When I run the model on CK+, I get the following error

Epoch: [0][0/13] Time 13.575 (13.575) Data 0.000 (0.000) Loss 1.9647 (1.9647) Prec@1 10.417 (10.417) *Prec@Video 15.152 *Prec@Frame 12.179
Traceback (most recent call last):
  File "Demo_AFEW_Attention.py", line 245, in <module>
    main()
  File "Demo_AFEW_Attention.py", line 90, in main
    prec1 = validate(val_loader, model)
  File "Demo_AFEW_Attention.py", line 233, in validate
    pred_score = model(vectors=output_store_fc, vm=weightmean_sourcefc, alphas_from1=output_alpha,
  File "/home/renjie/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/renjie/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/renjie/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/renjie/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
    output.reraise()
  File "/home/renjie/miniconda3/envs/dl/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/home/renjie/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
    output = module(*input, **kwargs)
  File "/home/renjie/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/renjie/code/Emotion-FAN/Code/Model.py", line 226, in forward
    vs_cate = torch.cat([vectors, vms], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 0. Got 672 and 224 (The offending index is 0)

Is any preprocessing needed before training on CK+?

input format error

Hi, I changed the input to two classes (angry or not) to run your network. However, I encounter: TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not str.
The error occurs here:
  File "Demo_AFEW_Attention.py", line 181, in validate
    f, alphas = model(input_var, phrase = 'eval')
  File "/Code/Model.py", line 206, in forward
    f = self.conv1(x)
Could you please explain why? Thank you. Also, why do you partition the input into three parts?

Model loading problem

Hello, when I run a test with the Resnet18_FER+_pytorch.pth.tar model you provided, I get the following error: RuntimeError: Error(s) in loading state_dict for ResNet_AT:
Missing key(s) in state_dict: "alpha.0.bias", "alpha.0.weight", "beta.0.bias", "beta.0.weight", "pred_fc1.bias", "pred_fc1.weight", "pred_fc2.bias", "pred_fc2.weight".
Unexpected key(s) in state_dict: "fc.weight", "fc.bias".
Is it because I am using the wrong network?
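
Probably not the network: the FER+ checkpoint is an image-classification backbone, so its fc head has no counterpart in ResNet_AT, and the attention heads (alpha, beta, pred_fc*) are not in the checkpoint. One workaround, an assumption rather than the authors' loader, is to copy only the shared backbone weights and leave the new heads randomly initialised:

import torch

# model = networks.resnet18_at(...)  # assumed already constructed
checkpoint = torch.load('Resnet18_FER+_pytorch.pth.tar', map_location='cpu')
state = checkpoint.get('state_dict', checkpoint)   # unwrap if nested
state = {k: v for k, v in state.items() if not k.startswith('fc.')}
model.load_state_dict(state, strict=False)         # tolerate missing head keys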

learning rate setting

Hello, I have noticed that the learning-rate settings differ a great deal across training sets. How should I choose the learning rate and other hyperparameters for a new training set?

Error(s) in loading state_dict for ResNet_AT

Hello! When I try to instantiate the trained model this way:

atType = ['self-attention', 'self_relation-attention'][0]    
model = networks.resnet18_at(at_type=atType)        
model.load_state_dict(torch.load('trained_model_debian.pth', map_location=get_default_device()))

I keep getting the error below. What should I do?

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
     23 model = networks.resnet18_at(at_type=atType)
     24 softmax = torch.nn.Softmax(dim=1)
---> 25 model.load_state_dict(torch.load('trained_model_debian.pt', map_location=get_default_device()))
     26
     27 def predict(x):

~/anaconda3/envs/emotion_fan/lib/python3.9/site-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
   1221
   1222     if len(error_msgs) > 0:
-> 1223         raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
   1224             self.__class__.__name__, "\n\t".join(error_msgs)))
   1225     return _IncompatibleKeys(missing_keys, unexpected_keys)

RuntimeError: Error(s) in loading state_dict for ResNet_AT:
Missing key(s) in state_dict: "conv1.weight", "bn1.weight", "bn1.bias", "bn1.running_mean", "bn1.running_var", "layer1.0.conv1.weight", "layer1.0.bn1.weight", "layer1.0.bn1.bias", "layer1.0.bn1.running_mean", "layer1.0.bn1.running_var", "layer1.0.conv2.weight", "layer1.0.bn2.weight", "layer1.0.bn2.bias", "layer1.0.bn2.running_mean", "layer1.0.bn2.running_var", "layer1.1.conv1.weight", "layer1.1.bn1.weight", "layer1.1.bn1.bias", "layer1.1.bn1.running_mean", "layer1.1.bn1.running_var", "layer1.1.conv2.weight", "layer1.1.bn2.weight", "layer1.1.bn2.bias", "layer1.1.bn2.running_mean", "layer1.1.bn2.running_var", "layer2.0.conv1.weight", "layer2.0.bn1.weight", "layer2.0.bn1.bias", "layer2.0.bn1.running_mean", "layer2.0.bn1.running_var", "layer2.0.conv2.weight", "layer2.0.bn2.weight", "layer2.0.bn2.bias", "layer2.0.bn2.running_mean", "layer2.0.bn2.running_var", "layer2.0.downsample.0.weight", "layer2.0.downsample.1.weight", "layer2.0.downsample.1.bias", "layer2.0.downsample.1.running_mean", "layer2.0.downsample.1.running_var", "layer2.1.conv1.weight", "layer2.1.bn1.weight", "layer2.1.bn1.bias", "layer2.1.bn1.running_mean", "layer2.1.bn1.running_var", "layer2.1.conv2.weight", "layer2.1.bn2.weight", "layer2.1.bn2.bias", "layer2.1.bn2.running_mean", "layer2.1.bn2.running_var", "layer3.0.conv1.weight", "layer3.0.bn1.weight", "layer3.0.bn1.bias", "layer3.0.bn1.running_mean", "layer3.0.bn1.running_var", "layer3.0.conv2.weight", "layer3.0.bn2.weight", "layer3.0.bn2.bias", "layer3.0.bn2.running_mean", "layer3.0.bn2.running_var", "layer3.0.downsample.0.weight", "layer3.0.downsample.1.weight", "layer3.0.downsample.1.bias", "layer3.0.downsample.1.running_mean", "layer3.0.downsample.1.running_var", "layer3.1.conv1.weight", "layer3.1.bn1.weight", "layer3.1.bn1.bias", "layer3.1.bn1.running_mean", "layer3.1.bn1.running_var", "layer3.1.conv2.weight", "layer3.1.bn2.weight", "layer3.1.bn2.bias", "layer3.1.bn2.running_mean", "layer3.1.bn2.running_var", "layer4.0.conv1.weight", "layer4.0.bn1.weight", "layer4.0.bn1.bias", "layer4.0.bn1.running_mean", "layer4.0.bn1.running_var", "layer4.0.conv2.weight", "layer4.0.bn2.weight", "layer4.0.bn2.bias", "layer4.0.bn2.running_mean", "layer4.0.bn2.running_var", "layer4.0.downsample.0.weight", "layer4.0.downsample.1.weight", "layer4.0.downsample.1.bias", "layer4.0.downsample.1.running_mean", "layer4.0.downsample.1.running_var", "layer4.1.conv1.weight", "layer4.1.bn1.weight", "layer4.1.bn1.bias", "layer4.1.bn1.running_mean", "layer4.1.bn1.running_var", "layer4.1.conv2.weight", "layer4.1.bn2.weight", "layer4.1.bn2.bias", "layer4.1.bn2.running_mean", "layer4.1.bn2.running_var", "alpha.0.weight", "alpha.0.bias", "beta.0.weight", "beta.0.bias", "pred_fc1.weight", "pred_fc1.bias", "pred_fc2.weight", "pred_fc2.bias".
Unexpected key(s) in state_dict: "module.conv1.weight", "module.bn1.weight", "module.bn1.bias", "module.bn1.running_mean", "module.bn1.running_var", "module.bn1.num_batches_tracked", "module.layer1.0.conv1.weight", "module.layer1.0.bn1.weight", "module.layer1.0.bn1.bias", "module.layer1.0.bn1.running_mean", "module.layer1.0.bn1.running_var", "module.layer1.0.bn1.num_batches_tracked", "module.layer1.0.conv2.weight", "module.layer1.0.bn2.weight", "module.layer1.0.bn2.bias", "module.layer1.0.bn2.running_mean", "module.layer1.0.bn2.running_var", "module.layer1.0.bn2.num_batches_tracked", "module.layer1.1.conv1.weight", "module.layer1.1.bn1.weight", "module.layer1.1.bn1.bias", "module.layer1.1.bn1.running_mean", "module.layer1.1.bn1.running_var", "module.layer1.1.bn1.num_batches_tracked", "module.layer1.1.conv2.weight", "module.layer1.1.bn2.weight", "module.layer1.1.bn2.bias", "module.layer1.1.bn2.running_mean", "module.layer1.1.bn2.running_var", "module.layer1.1.bn2.num_batches_tracked", "module.layer2.0.conv1.weight", "module.layer2.0.bn1.weight", "module.layer2.0.bn1.bias", "module.layer2.0.bn1.running_mean", "module.layer2.0.bn1.running_var", "module.layer2.0.bn1.num_batches_tracked", "module.layer2.0.conv2.weight", "module.layer2.0.bn2.weight", "module.layer2.0.bn2.bias", "module.layer2.0.bn2.running_mean", "module.layer2.0.bn2.running_var", "module.layer2.0.bn2.num_batches_tracked", "module.layer2.0.downsample.0.weight", "module.layer2.0.downsample.1.weight", "module.layer2.0.downsample.1.bias", "module.layer2.0.downsample.1.running_mean", "module.layer2.0.downsample.1.running_var", "module.layer2.0.downsample.1.num_batches_tracked", "module.layer2.1.conv1.weight", "module.layer2.1.bn1.weight", "module.layer2.1.bn1.bias", "module.layer2.1.bn1.running_mean", "module.layer2.1.bn1.running_var", "module.layer2.1.bn1.num_batches_tracked", "module.layer2.1.conv2.weight", "module.layer2.1.bn2.weight", "module.layer2.1.bn2.bias", "module.layer2.1.bn2.running_mean", "module.layer2.1.bn2.running_var", "module.layer2.1.bn2.num_batches_tracked", "module.layer3.0.conv1.weight", "module.layer3.0.bn1.weight", "module.layer3.0.bn1.bias", "module.layer3.0.bn1.running_mean", "module.layer3.0.bn1.running_var", "module.layer3.0.bn1.num_batches_tracked", "module.layer3.0.conv2.weight", "module.layer3.0.bn2.weight", "module.layer3.0.bn2.bias", "module.layer3.0.bn2.running_mean", "module.layer3.0.bn2.running_var", "module.layer3.0.bn2.num_batches_tracked", "module.layer3.0.downsample.0.weight", "module.layer3.0.downsample.1.weight", "module.layer3.0.downsample.1.bias", "module.layer3.0.downsample.1.running_mean", "module.layer3.0.downsample.1.running_var", "module.layer3.0.downsample.1.num_batches_tracked", "module.layer3.1.conv1.weight", "module.layer3.1.bn1.weight", "module.layer3.1.bn1.bias", "module.layer3.1.bn1.running_mean", "module.layer3.1.bn1.running_var", "module.layer3.1.bn1.num_batches_tracked", "module.layer3.1.conv2.weight", "module.layer3.1.bn2.weight", "module.layer3.1.bn2.bias", "module.layer3.1.bn2.running_mean", "module.layer3.1.bn2.running_var", "module.layer3.1.bn2.num_batches_tracked", "module.layer4.0.conv1.weight", "module.layer4.0.bn1.weight", "module.layer4.0.bn1.bias", "module.layer4.0.bn1.running_mean", "module.layer4.0.bn1.running_var", "module.layer4.0.bn1.num_batches_tracked", "module.layer4.0.conv2.weight", "module.layer4.0.bn2.weight", "module.layer4.0.bn2.bias", "module.layer4.0.bn2.running_mean", "module.layer4.0.bn2.running_var", "module.layer4.0.bn2.num_batches_tracked", 
"module.layer4.0.downsample.0.weight", "module.layer4.0.downsample.1.weight", "module.layer4.0.downsample.1.bias", "module.layer4.0.downsample.1.running_mean", "module.layer4.0.downsample.1.running_var", "module.layer4.0.downsample.1.num_batches_tracked", "module.layer4.1.conv1.weight", "module.layer4.1.bn1.weight", "module.layer4.1.bn1.bias", "module.layer4.1.bn1.running_mean", "module.layer4.1.bn1.running_var", "module.layer4.1.bn1.num_batches_tracked", "module.layer4.1.conv2.weight", "module.layer4.1.bn2.weight", "module.layer4.1.bn2.bias", "module.layer4.1.bn2.running_mean", "module.layer4.1.bn2.running_var", "module.layer4.1.bn2.num_batches_tracked", "module.fc.weight", "module.fc.bias". `

Validation accuracy

Hi,
I am running the code on frames cropped around the faces, as stated in the paper, without changing anything in the code, but I am only achieving accuracies around 30%. Is this the same network you used to reach 51% accuracy?
Thank you

Can't extract pre-trained models

Hi, I followed the instructions to download the two pre-trained models from OneDrive, but I can't extract them, and the Baidu link no longer exists. Could you re-upload the pre-trained models?

more than 1 GPU error

When I use more than one GPU, I run into the following problem:
  File "Demo_AFEW_Attention.py", line 181, in validate
    f, alphas = model(input_var, phrase = 'eval')
  File "/Code/Model.py", line 206, in forward
    f = self.conv1(x)
How can I resolve this, please?

Label consistency across datasets

Hello, if I train on a new dataset with the model you provided on Baidu, do I need to change the labels? The new dataset's expression categories do not match the labels of the dataset you trained on before.
If I just fine-tune, is it better to keep the labels consistent?

AFEW 8.0 dataset

Hello, could you send a copy of the dataset? It cannot be downloaded from the official site. Could you share a Baidu Cloud link? Thank you.

I hope you can provide the final model for AFEW

Hello, thanks for sharing updates on Emotion-FAN! I am studying FER, and I was impressed by your paper.

I am trying to build a demo system using your model, but I failed to find the final trained model for AFEW 8.0.

I only found the weights Resnet18_FER+_pytorch.pth.tar and Resnet18_MS1M_pytorch.pth.tar, which seem to be baseline pretrained models. I want to use your final model to produce a demo for my test. Can you provide it?
Thanks so much!

Required Preprocessing?

Could you please elaborate on the pre-processing used for the AFEW dataset? I am currently training on the originally provided aligned faces and only getting about 40% validation precision (after 60 epochs; progress slows considerably after that). I assume something is wrong with my pre-processing.

Also, could you provide the weights of the final trained model?

"AFEW dataset face alignment" and "accuracy on validation" problem.

Hello. I used ./data/face_alignment_code/frame2face_afew.py to process the frame files extracted from the AFEW videos. However, some faces could not be detected, which left some aligned-face folders empty.

I would like to know whether you used only ./data/face_alignment_code/frame2face_afew.py for this step, or whether you did something else.

What's more, I ran into overfitting after just a few epochs, and the validation accuracy is only around 41% (without relation-attention) or 16% (with relation-attention).

I am using the code without any changes.
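
For the empty-folder symptom here and in the frame2face issues above, one workaround (a suggestion, not something the repo does) is to fall back to a centre crop whenever the detector finds no face, so that no clip ends up with an empty folder:

from PIL import Image

def center_crop_fallback(img_path, size=224):
    """Square centre crop used when no face is detected in a frame."""
    img = Image.open(img_path).convert('RGB')
    w, h = img.size
    s = min(w, h)
    left, top = (w - s) // 2, (h - s) // 2
    return img.crop((left, top, left + s, top + s)).resize((size, size))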
