

au_r-cnn's People

Contributors

dependabot[bot], machanic


au_r-cnn's Issues

where could I find the mean_rgb.npy file

Hello,

  In the case that I do not have the BP4D dataset, is it possible for me to get this file 'mean_rgb.npy'? Could I run the demo without the BP4D or DISFA dataset, just using the pretrained model?

Thanks!
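
Note: if the only blocker is the missing mean file, it can in principle be regenerated from any set of aligned face crops. A minimal sketch, assuming mean_rgb.npy simply stores a per-channel RGB mean; the exact array shape the repo expects is not documented in this thread, so treat the output format as an assumption:

    import glob

    import numpy as np
    from PIL import Image

    def compute_mean_rgb(image_dir, out_path="mean_rgb.npy"):
        """Hypothetical helper: average the RGB channels over a folder of images."""
        acc = np.zeros(3, dtype=np.float64)
        count = 0
        for path in glob.glob(f"{image_dir}/*.jpg"):
            img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
            acc += img.reshape(-1, 3).mean(axis=0)   # mean R, G, B of this image
            count += 1
        mean = acc / max(count, 1)
        np.save(out_path, mean)                      # shape (3,), dtype float64
        return mean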

Acc

Hi
Do you evaluate the accuracy?

Data Partition List

First of all, thanks for sharing your code!!
I have a question about the data list you provided. For BP4D, is the subject partition the same as in DRML, as detailed here? You are not using the data list provided here, right? The latter one seems to be unbalanced across the different folds.
BTW, according to the results from your paper and this one, a pre-trained VGG or ResNet can actually achieve ~60 F1 on [BP4D, same partition, frame-based detection], which is consistent with my experiments. Originally I thought there were some issues with my implementation, since those baselines seem to be as good as some of the latest conference papers.

the package cannot be installed

I installed all the required python packages.
Then I tried to install AU R-CNN with 'python setup.py install' in the project folder, and it reported the following error:

cpdef double get_value(self, int y1, int y2, np.ndarray[DTYPE_float_t, ndim=1] weight) except? -1:
Invalid type 'DTYPE_float_t' at graph_learning/model/open_crf/cython/factor_graph.pyx:19:73

I did everything under Ubuntu 16.04. Thank you in advance!

pickled data error

Hi @machanic
I really like this code and hope it will get a PyTorch version.
Actually, I couldn't run it yet! I changed the paths in config.py, but I keep getting errors about the required files,
like this one that drove me crazy :))

  File "D:\XAI\Code\AU_R-CNN-master\AU_rcnn\train.py", line 459, in <module>
    main()

  File "D:\XAI\Code\AU_R-CNN-master\AU_rcnn\train.py", line 202, in main
    extract_len=args.extract_len)  # can be changed to /home/nco/face_expr/result/snapshot_model.npz

  File "D:/XAI/Code/AU_R-CNN-master\AU_rcnn\links\model\faster_rcnn\faster_rcnn_resnet101.py", line 119, in __init__
    mean_array = np.load(mean_file)

  File "C:\ProgramData\Anaconda3\envs\R-CNN\lib\site-packages\numpy\lib\npyio.py", line 444, in load
    raise ValueError("Cannot load file containing pickled data "

ValueError: Cannot load file containing pickled data when allow_pickle=False

Could you please guide me?

thanks
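
Note: this ValueError comes from NumPy >= 1.16.3, where np.load defaults to allow_pickle=False. If the distributed mean_rgb.npy happens to contain pickled/object data, a workaround sketch (only load files you trust this way; the path is a placeholder):

    import numpy as np

    mean_file = r"D:\XAI\Code\AU_R-CNN-master\mean_rgb.npy"  # placeholder path
    # Explicitly opt in to pickled content; plain float arrays load either way.
    mean_array = np.load(mean_file, allow_pickle=True)
    print(mean_array.shape, mean_array.dtype)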

cupy.cuda.cudnn.CuDNNError: CUDNN_STATUS_EXECUTION_FAILED

Will finalize trainer extensions and updater before reraising the exception.
Traceback (most recent call last):
File "./AU_rcnn/train.py", line 458, in
main()
File "./AU_rcnn/train.py", line 449, in main
trainer.run()
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/training/trainer.py", line 349, in run
six.reraise(*exc_info)
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/training/trainer.py", line 316, in run
update()
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/training/updaters/standard_updater.py", line 175, in update
self.update_core()
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/training/updaters/standard_updater.py", line 187, in update_core
optimizer.update(loss_func, *in_arrays)
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/optimizer.py", line 864, in update
loss = lossfun(*args, **kwds)
File "/cluster/home/chenjinjie/AU_RCNN_fuxian/AU_R-CNN_allproj/AU_rcnn/links/model/faster_rcnn/faster_rcnn_train_chain.py", line 78, in call
features = self.faster_rcnn.extractor(imgs)
File "/cluster/home/chenjinjie/AU_RCNN_fuxian/AU_R-CNN_allproj/AU_rcnn/links/model/faster_rcnn/faster_rcnn_resnet101.py", line 323, in call
h = self.bn1(self.conv1(x))
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/link.py", line 294, in call
out = forward(*args, **kwargs)
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/links/connection/convolution_2d.py", line 184, in forward
groups=self.groups)
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/functions/connection/convolution_2d.py", line 589, in convolution_2d
y, = fnode.apply(args)
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/function_node.py", line 321, in apply
outputs = self.forward(in_data)
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/function_node.py", line 512, in forward
return self.forward_gpu(inputs)
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/functions/connection/convolution_2d.py", line 189, in forward_gpu
return self._forward_cudnn(x, W, b, y)
File "/cluster/home/chenjinjie/.conda/envs/AUROI/lib/python3.6/site-packages/chainer/functions/connection/convolution_2d.py", line 250, in _forward_cudnn
auto_tune=auto_tune, tensor_core=tensor_core)
File "cupy/cudnn.pyx", line 1575, in cupy.cudnn.convolution_forward
File "cupy/cuda/cudnn.pyx", line 1208, in cupy.cuda.cudnn.convolutionForward
File "cupy/cuda/cudnn.pyx", line 712, in cupy.cuda.cudnn.check_status
cupy.cuda.cudnn.CuDNNError: CUDNN_STATUS_EXECUTION_FAILED

Hello, I ran into the bug above when running the training code, using a single GPU. The library versions are as follows:
chainer = 6.3.0
cuda = 9.0
cupy-cuda90 = 6.3.0

Have you ever encountered this problem? Thanks.
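
Note: CUDNN_STATUS_EXECUTION_FAILED is usually an environment problem (a cuDNN build vs. CUDA driver mismatch, or the chosen convolution algorithm running out of GPU memory) rather than a bug in the training code. A small diagnostic sketch, assuming a standard Chainer setup:

    import chainer

    # Print Chainer / NumPy / CuPy / cuDNN versions to check that they match.
    chainer.print_runtime_info()

    # Re-run the failing forward pass with cuDNN disabled; if it then works,
    # the installed cuDNN build (or its workspace size) is the likely culprit.
    with chainer.using_config('use_cudnn', 'never'):
        pass  # e.g. call trainer.run() or the failing forward pass here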

about mean pixel file

Hi, do the BP4D and DISFA datasets share the same mean pixel file as the one you provide in 'pixel's mean value file'?

How can I access your pre-trained model and image data?

I tested your model using the following command:

python train.py --eval_mode -bs 128 -e 1 --iteration 1

but encountered an error:

Traceback (most recent call last):
File "train.py", line 460, in
main()
File "train.py", line 267, in main
observation = evaluator.evaluate()
File "/home/data3/fanglin/AU_R-CNN/AU_rcnn/extensions/AU_evaluator.py", line 50, in evaluate
for idx, batch in enumerate(it):
File "/usr/share/anaconda3/envs/enc/lib/python3.6/site-packages/chainer/iterators/serial_iterator.py", line 70, in next
self._previous_epoch_detail = self.epoch_detail
File "/usr/share/anaconda3/envs/enc/lib/python3.6/site-packages/chainer/iterators/serial_iterator.py", line 96, in epoch_detail
return self.epoch + self.current_position / self._epoch_size
ZeroDivisionError: division by zero

All dependencies are installed and all the needed files are in the right place.

Using pretrained model

Hello,
It is not really clear how to use the pre-trained models: what are the features fed to the model, or are images fed to it directly?
Would it be possible to add a part to the readme file to clarify the usage?

I need to extract action units from video, so I wanted to implement this using your pretrained models from both datasets.

Some questions about lstm_end_to_end.py

Hi, I have recently been studying your spatio-temporal network and would like to reproduce it, but I can't figure out the model settings, especially how each mode in enum_type.py is set and what effect it has. For example, if I simply want to extract features with ResNet-101 and then handle the temporal dimension with an LSTM, how should the parameters be set? Could you give me some guidance? Thanks a lot!

The arguments are as follows:
parser.add_argument('--spatial_edge_mode', type=SpatialEdgeMode, choices=list(SpatialEdgeMode),
help='1:all_edge, 2:configure_edge, 3:no_edge')

parser.add_argument('--spatial_sequence_type', type=SpatialSequenceType, choices=list(SpatialSequenceType),
help='1:all_edge, 2:configure_edge, 3:no_edge')

parser.add_argument('--temporal_edge_mode', type=TemporalEdgeMode, choices=list(TemporalEdgeMode),
help='1:rnn, 2:attention_block, 3.point-wise feed forward(no temporal)')

parser.add_argument('--two_stream_mode', type=TwoStreamMode, choices=list(TwoStreamMode),
help='spatial/ temporal/ spatial_temporal')

parser.add_argument('--conv_rnn_type', type=ConvRNNType, choices=list(ConvRNNType),
help='conv_lstm or conv_sru')

Unable to run demo

Traceback (most recent call last):
File "demo_AU_rcnn.py", line 255, in
main()
File "demo_AU_rcnn.py", line 171, in main
roi_image = orig_face[:, y_min:y_max+1, x_min:x_max+1] # N, 3, roi_H, roi_W
TypeError: slice indices must be integers or None or have an index method
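
Note: this TypeError typically appears because the landmark-derived box coordinates are floats (or NumPy float scalars) rather than Python ints. A minimal sketch of the kind of cast that makes the quoted slice valid; the helper name and the rounding choice are assumptions, not the repo's actual code:

    import numpy as np

    def crop_roi(orig_face, y_min, y_max, x_min, x_max):
        """Hypothetical helper: cast ROI bounds to int before NumPy slicing.

        orig_face is assumed to be a (3, H, W) array, as in the quoted comment.
        """
        y_min, y_max, x_min, x_max = (int(round(float(v)))
                                      for v in (y_min, y_max, x_min, x_max))
        return orig_face[:, y_min:y_max + 1, x_min:x_max + 1]  # 3, roi_H, roi_W

    # Example: crop_roi(np.zeros((3, 224, 224)), 10.0, 59.9, 20.0, 79.9).shape == (3, 51, 61)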

Unable to run demo

Excuse me. Would you tell me how to run your test program? Thank you very much!
I tried to run 'AU_rcnn/train.py', but the program throws an error message like this:
Traceback (most recent call last):
File "/Users/xxx/Downloads/AU_R-CNN-master/AU_rcnn/train.py", line 6, in
from AU_rcnn.links.model.faster_rcnn.faster_rcnn_vgg19 import FasterRCNNVGG19
ModuleNotFoundError: No module named 'AU_rcnn'
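
Note: the import fails because Python cannot find the AU_rcnn package on its path; running the script from the repository root (or installing the package) avoids this. A workaround sketch for launching it from elsewhere; REPO_ROOT is a placeholder for your checkout:

    import sys

    REPO_ROOT = "/Users/xxx/Downloads/AU_R-CNN-master"  # placeholder checkout path
    sys.path.insert(0, REPO_ROOT)  # make 'import AU_rcnn' resolvable

    from AU_rcnn.links.model.faster_rcnn.faster_rcnn_vgg19 import FasterRCNNVGG19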

save and load all of the parameter

Dear sir,
I ran your code (AU R-CNN) step by step as described on GitHub, but after training I see that the loss function does not descend and behaves randomly. I only changed the batch size from 20 to 7 because of an out-of-memory error at runtime, and the accuracy was zero during the whole training.
Can you help me understand why this is happening?
And another thing:
this model is very big, and training takes about 20 days on a system with a GTX 1080 Ti GPU. I sometimes stop and resume training, but not all of the parameters are saved, for example the epoch: when I stop at epoch 6 and then run again, the project starts from epoch 1. Could you please change your code so that this problem is solved?
Thanks a lot
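
Note: resuming with the full training state (epoch, iteration, optimizer) is normally handled in Chainer by snapshotting the whole Trainer and reloading that snapshot with serializers.load_npz, rather than saving only the model. A self-contained sketch with a dummy model, just to illustrate the mechanism; it is not the repo's actual training setup, and the paths are hypothetical:

    import os

    import numpy as np
    import chainer
    from chainer import training
    from chainer.training import extensions

    # Dummy model and data, only to illustrate snapshot/resume of a Trainer.
    model = chainer.links.Classifier(chainer.links.Linear(10, 2))
    optimizer = chainer.optimizers.MomentumSGD()
    optimizer.setup(model)
    dataset = chainer.datasets.TupleDataset(
        np.random.rand(100, 10).astype(np.float32),
        np.random.randint(0, 2, size=100).astype(np.int32))
    iterator = chainer.iterators.SerialIterator(dataset, batch_size=20)
    updater = training.updaters.StandardUpdater(iterator, optimizer)
    trainer = training.Trainer(updater, (5, 'epoch'), out='result')

    # Save the *trainer* (updater, optimizer, model, epoch counter) every epoch.
    trainer.extend(extensions.snapshot(filename='snapshot_epoch_{.updater.epoch}'),
                   trigger=(1, 'epoch'))

    resume_path = 'result/snapshot_epoch_3'  # hypothetical existing snapshot
    if os.path.exists(resume_path):
        # Restores the epoch/iteration as well, so training does not restart at epoch 1.
        chainer.serializers.load_npz(resume_path, trainer)

    trainer.run()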

Unable to start train

Hi,

when i try to start training obtaining this error

Exception in main training loop: division by zero
Traceback (most recent call last):
File "/home/d.decicco/.conda/envs/mml_2/lib/python3.7/site-packages/chainer/training/trainer.py", line 313, in run
while not stop_trigger(self):

File "/home/d.decicco/.conda/envs/mml_2/lib/python3.7/site-packages/chainer/training/triggers/interval_trigger.py", line 54, in call
epoch_detail = updater.epoch_detail

File "/home/d.decicco/.conda/envs/mml_2/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 111, in epoch_detail
return self._iterators['main'].epoch_detail

File "/home/d.decicco/.conda/envs/mml_2/lib/python3.7/site-packages/chainer/iterators/serial_iterator.py", line 96, in epoch_detail
return self.epoch + self.current_position / self._epoch_size
Will finalize trainer extensions and updater before reraising the exception.
Traceback (most recent call last):

File "AU_rcnn/train.py", line 458, in
main()

File "AU_rcnn/train.py", line 449, in main
trainer.run()

File "/home/d.decicco/.conda/envs/mml_2/lib/python3.7/site-packages/chainer/training/trainer.py", line 349, in run
six.reraise(*exc_info)

File "/home/d.decicco/.conda/envs/mml_2/lib/python3.7/site-packages/six.py", line 719, in reraise
raise value

File "/home/d.decicco/.conda/envs/mml_2/lib/python3.7/site-packages/chainer/training/trainer.py", line 313, in run
while not stop_trigger(self):

File "/home/d.decicco/.conda/envs/mml_2/lib/python3.7/site-packages/chainer/training/triggers/interval_trigger.py", line 54, in call
epoch_detail = updater.epoch_detail

File "/home/d.decicco/.conda/envs/mml_2/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 111, in epoch_detail
return self._iterators['main'].epoch_detail

File "/home/d.decicco/.conda/envs/mml_2/lib/python3.7/site-packages/chainer/iterators/serial_iterator.py", line 96, in epoch_detail
return self.epoch + self.current_position / self._epoch_size

ZeroDivisionError: division by zero

I believe it happens because it doesn't see the dataset.

I have downloaded the full BP4D dataset and put it in ROOT_PATH + "/BP4D/".

Then, in ROOT_PATH + "/BP4D/" I have F001, F002... and so on.

What do you think about the error?

This is the output that I have obtained:

FaceLandMark init call! /home/d.decicco/dataset/landmark_model/shape_predictor_68_face_landmarks.dat
chainer cudnn enabled: True
GPU: 0
loading mean_file in: /home/d.decicco/datasetBP4D/idx/mean_rgb.npy done
load pretrained file: /home/d.decicco/dataset/caffe_model/ResNet-101-model.npz done
directory interessata =/home/d.decicco/dataset/BP4D/
idfile:/home/d.decicco/dataset/BP4D//idx/3_fold/id_trainval_1.txt
read id file done, all examples:0
only one GPU(0) updater
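
Note: the decisive line in this output is "read id file done, all examples:0": the id list produced zero usable examples, so the iterator's epoch size is 0 and epoch_detail divides by zero. A quick diagnostic sketch to check whether the id-file entries resolve to files under the data root; the assumption that the first whitespace-separated field of each line is a relative image path may not match the repo's exact format:

    import os

    DATA_ROOT = "/home/d.decicco/dataset/BP4D"  # paths taken from the quoted log
    ID_FILE = "/home/d.decicco/dataset/BP4D/idx/3_fold/id_trainval_1.txt"

    total = 0
    missing = 0
    with open(ID_FILE) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            total += 1
            rel_path = line.split()[0]  # assumed: first field is a relative image path
            if not os.path.exists(os.path.join(DATA_ROOT, rel_path)):
                missing += 1

    print(f"{total} lines in id file, {missing} paths missing under {DATA_ROOT}")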

what

Hi, what is stored in this path in the AU_rcnn/train.py file?

Problem in loading Pretrained Snapshot

Hi, I tried to execute demo_AU_rcnn.py, where there is code to load the pre-trained snapshot using Chainer, and I end up getting an error:

File "demo_AU_rcnn.py", line 28, in init
first_fc_weight = weight_param['head/fc/W'] # shape = (1000, 2048)
File "/home/wse1kor/anaconda3/envs/leni/lib/python3.7/site-packages/numpy/lib/npyio.py", line 266, in getitem
raise KeyError("%s is not a file in the archive" % key)
KeyError: 'head/fc/W is not a file in the archive'

please help me out.
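
Note: the KeyError means the loaded archive has no parameter named 'head/fc/W', which usually indicates the file is a different snapshot (or a plain model file) from the one the script expects. A quick way to inspect what the archive actually contains; the path is a placeholder:

    import numpy as np

    snapshot_path = "snapshot_model.npz"  # placeholder path to the downloaded file
    with np.load(snapshot_path) as archive:
        # List every stored parameter name; a Chainer model snapshot contains
        # entries like 'head/fc/W', while a trainer snapshot typically prefixes
        # them with 'updater/model:main/...' instead.
        for name in sorted(archive.files):
            print(name)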

The demo cannot run

The places in the code that need to be changed are not explained in the readme, so I don't know how to modify them.

Modifying optical flow for two-stream network

Hello, thanks a lot for uploading your paper implementation!

I was wondering if it's possible for me to modify the source code such that I can provide my own optical flow data for the BP4D dataset, and if so where I can do so?

Thanks a lot for your help!
