
2s-agcn's People

Contributors

lshiwjx


2s-agcn's Issues

The dataset link

Hi,
I am trying to download the data. Could you tell me which data to download in order to run your code? The hyperlinks given in the README do not work.

ensemble problem

Hello, I would like to ask why my ensemble result is completely wrong. Training on the joint and bone streams separately each reaches over 93% accuracy, but after fusion the result is 1.7%. I did not modify ensemble.py and don't know where the problem is. I hope to hear back from you, thank you very much!
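
For reference, a minimal sketch of score-level fusion that matches what ensemble.py is meant to do, fusing by sample name rather than by file position. The file paths and the score-file layout (a pickled mapping from sample name to class-score vector) are assumptions, not guarantees about this repo:

    import pickle
    import numpy as np

    # Fuse per-stream scores by sample name and measure top-1 accuracy.
    with open('work_dir/joint/epoch1_test_score.pkl', 'rb') as f:
        joint_scores = dict(pickle.load(f))
    with open('work_dir/bone/epoch1_test_score.pkl', 'rb') as f:
        bone_scores = dict(pickle.load(f))
    with open('data/ntu/xview/val_label.pkl', 'rb') as f:
        names, labels = pickle.load(f)

    correct = 0
    for name, label in zip(names, labels):
        fused = np.array(joint_scores[name]) + np.array(bone_scores[name])
        correct += int(np.argmax(fused) == int(label))
    print('fused accuracy:', correct / len(labels))

If the two score files are instead fused by index, any difference in sample ordering between the joint and bone runs would yield near-random fused accuracy, which matches the 1.7% symptom.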

OSError: [Errno 22] Invalid argument

When I run python main.py --config ./config/nturgbd-cross-view/train_joint.yaml, I get the error: OSError: [Errno 22] Invalid argument.
Data processing has already completed.
I would appreciate it if you could help me!

problem with gen_bone_data.py

Hello, when I run gen_bone_data.py I get an error that looks like a matrix-dimension mismatch. How can I solve it? Thanks.
lin@lin:~/2s-AGCN-master/data_gen$ python gen_bone_data.py
ntu/xsub train
4%|█▋ | 1/25 [06:49<2:43:40, 409.20s/it]
Traceback (most recent call last):
  File "gen_bone_data.py", line 50, in <module>
    fp_sp[:, :, :, v1, :] = data[:, :, :, v1, :] - data[:, :, :, v2, :]
IndexError: index 20 is out of bounds for axis 3 with size 18
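
The size-18 axis is the giveaway: Kinetics skeletons have 18 joints, NTU has 25, and the NTU bone-pair list references joints past index 17. A hedged sketch of the bone computation with a dataset-appropriate pair list (the exact pair lists live in gen_bone_data.py and should be taken from there):

    import numpy as np

    # Each bone is the vector from a joint to its paired (parent) joint.
    # `pairs` must match the skeleton layout of `data`, or the indexing
    # fails exactly as in the traceback above.
    def gen_bones(data, pairs):
        # data: (N, C, T, V, M) joint coordinates; pairs: 0-based (child, parent)
        bones = np.zeros_like(data)
        for v1, v2 in pairs:
            bones[:, :, :, v1, :] = data[:, :, :, v1, :] - data[:, :, :, v2, :]
        return bones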

Difference in reported accuracy!

I ran your code with the following setting:
./config/nturgbd-cross-view/train_joint.yaml
But I only got 90.3% for cross-view evaluation, whereas your paper reports 93.7%.

trained model

Dear author,
Could anyone share a trained model? My PC is weak and has no GPU.
Thank you so much!

ModuleNotFoundError: No module named 'data_gen'

python ./data_gen/ntu_gen_preprocess.py
Traceback (most recent call last):
  File "./data_gen/ntu_gen_preprocess.py", line 7, in <module>
    from data_gen.preprocess import pre_normalization
ModuleNotFoundError: No module named 'data_gen'
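
The script imports the data_gen package, but when a file is run directly, Python puts the script's own directory (data_gen/) on sys.path instead of the repository root, so the package cannot be resolved. A likely fix, assuming the standard layout, is to run it as a module from the repository root:

python -m data_gen.ntu_gen_preprocess

Alternatively, set PYTHONPATH to the repository root before running the script.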

Segmentation Fault (core dumped)

When I extract the bone data file for Kinetics, I get a segmentation fault (core dumped) error for kinetics_train.
It works for kinetics_val. How can I fix this?

Several places in the code do not seem to match the paper

  1. Initialization
    The paper says the convolution parameters of the two embeddings are zero-initialized, but the code does not do this.
  2. Computation of the attention adjacency matrix C
    The paper says softmax is used, but the code replaces it with tanh (the previous version of the code still used softmax); see the sketch at the end of this issue.
  3. How the STC modules are combined
    There are two options, add and concatenate. According to the paper's results, concatenate is better than add, but concatenation necessarily adds an FC or conv layer to adjust the channel count, and the code appears to contain no concatenation at all.
  4. Compared with the previous version of the code, warm-up epochs were added, and their contribution is not explained either.

In my view, any one of these details could affect the final result, and the code contains too many changes the paper does not mention, so it is hard to verify whether the paper's ablation experiments are correct.
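
Regarding point 2, a minimal sketch of the two normalization variants for the data-dependent adjacency C, where theta and phi are the two embedded views of the input feature map (the names are assumptions; the released computation lives in model/agcn.py):

    import torch
    import torch.nn.functional as F

    # theta: (N, V, C'*T), phi: (N, C'*T, V) -> C: (N, V, V)
    def adjacency_c(theta, phi, variant='softmax'):
        scores = torch.matmul(theta, phi) / theta.size(-1)
        if variant == 'softmax':   # normalization described in the paper
            return F.softmax(scores, dim=-2)
        return torch.tanh(scores)  # what the current code reportedly uses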

Accuracy of aagcn

I ran your implementation with J-AAGCN on the NTU-RGBD cross-view dataset.
But the accuracy is 94.64%, not the 95.1% in your paper.
What is the difference?
The batch size was 32 instead of 64 because of resource limits.
Is there anything else to be aware of? I used your implementation as-is.

What's the point of adding y = None in the agcn.py file?

    y = None  # accumulator for the outputs of the num_subset graph branches
    for i in range(self.num_subset):
        A1 = self.conv_a[i](x).permute(0, 3, 1, 2).contiguous().view(N, V, self.inter_c * T)
        A2 = self.conv_b[i](x).view(N, self.inter_c * T, V)
        A1 = self.soft(torch.matmul(A1, A2) / A1.size(-1))  # N V V, data-dependent adjacency
        A1 = A1 + A[i]  # add the fixed graph for this subset
        A2 = x.view(N, C * T, V)
        z = self.conv_d[i](torch.matmul(A2, A1).view(N, C, T, V))
        # Starting from None lets the first branch assign z directly,
        # avoiding the construction of an explicit zero tensor.
        y = z + y if y is not None else z

Error Loading pretrained weights

Hi. Thank you for posting the code and weights. I get errors loading your weights:
Setup:
Python: 3.6
PyTorch: 0.3.1 on CUDA 9.0

I have downloaded the NTU-RGBD dataset from the official site and preprocessed it using your scripts, ntu_gendata.py followed by gen_bone_data.py.

When I run the test script as follows:
python main.py --config ./config/nturgbd-cross-view/test_bone.yaml --weights pretrained_weights/ntu_cv_agcn_bone-49-29400.pt --save-score 1 --device 0 1

I get the following error:
KeyError: 'unexpected key "l1.gcn1.conv_res.0.weight" in state_dict'

Similarly, on running:
python main.py --config ./config/nturgbd-cross-subject/test_bone.yaml --weights pretrained_weights/ntu_cs_agcn_bone-49-31300.pt --save-score 1 --device 0 1 2

I get the following error:
KeyError: 'unexpected key "l1a.PA" in state_dict'

Could you tell me whether this is a problem with the weights or with my setup?

Thank you
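
These unexpected-key errors typically mean the checkpoint was saved from a slightly different model definition than the one being instantiated. A hedged workaround sketch that drops checkpoint keys the local model does not define (it silently skips mismatched parameters, so it is only safe if the remaining architecture really matches the checkpoint):

    import torch

    def load_filtered_weights(model, path):
        # Keep only checkpoint entries that exist in the local state dict.
        weights = torch.load(path)
        model_state = model.state_dict()
        model_state.update({k: v for k, v in weights.items() if k in model_state})
        model.load_state_dict(model_state)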

data loading error

Thanks for your source code, but when I run it, the following error occurs:
ValueError: num_samples should be a positive integer value, but got num_samples=0

I ran 'python data_gen/ntu_gendata.py' beforehand, and some files were generated:
train_data_joint.npy
train_label.pkl
val_data_joint.npy
val_label.pkl

but they are all only 1 KB in size.

How should I deal with this? Any guidance would be appreciated.

thanks

Train_data_bone file for Kinetics?

When I extract the train_data_bone.npy file for Kinetics, I get a segmentation fault error. Could you provide this file in this GitHub repo or via some link?

augmentation in feeder

Hi,
I would like to know whether the data augmentation in the feeder improves results. Does the length of the input have a big influence?
Also, have you trained the model on the NTU RGB+D 120 dataset? What accuracy does it reach?

How were Figures 8 and 9 in the paper drawn?

Figures 8 and 9 draw the skeleton and use circles of different sizes to represent the strength of each joint's connections, which looks very intuitive. I could not find any plotting code in the code you released on GitHub, so may I ask how you drew them? Is there code you could share? I would be very grateful. Thanks!
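
A minimal sketch of this kind of plot (not the authors' code): draw the bones, then scale each joint's marker by its learned connection strength, e.g. the column sum of the adaptive adjacency. coords, edges, and A are assumed inputs, not names from this repo:

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_joint_strength(coords, edges, A):
        # coords: (V, 2) joint positions; edges: list of (i, j) bones;
        # A: (V, V) learned adjacency, reduced to one strength per joint.
        strength = A.sum(axis=0)
        strength = strength / strength.max()
        for i, j in edges:
            plt.plot(coords[[i, j], 0], coords[[i, j], 1], 'k-', lw=1)
        plt.scatter(coords[:, 0], coords[:, 1],
                    s=30 + 300 * strength, c='r', alpha=0.6)
        plt.axis('equal')
        plt.axis('off')
        plt.show()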

Ablation results for the three adjacency matrices on NTU CS

"Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition" (CVPR 2019) does not report an ablation of the three adjacency matrices A, B, C on NTU RGB+D cross-subject.
"Skeleton-Based Action Recognition with Multi-Stream Adaptive Graph Convolutional Networks" (arXiv 2019) adds the corresponding ablation, but its results do not seem to be reproducible.

Using the agcn file to reproduce the joint-stream result with all three adjacency matrices (A, B, C), I get 86.17%, which falls short of the reported 86.6%.

Problems with data pre-processing

Dear Colleagues,

For data preparation, when I run "python data_gen/ntu_gendata.py", the code runs preprocess.py and calls the function pre_normalization(data, zaxis=[0, 1], xaxis=[8, 4]).

It fails with the following error:

pad the null frames with the previous frames
0%| | 0/40091 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "ntu_gendata.py", line 166, in <module>
    part=p)
  File "ntu_gendata.py", line 140, in gendata
    fp = pre_normalization(fp)
  File "../data_gen/preprocess.py", line 30, in pre_normalization
    s[i_s, i_p, i_f:] = pad
ValueError: could not broadcast input array from shape (186,25,3) into shape (238,25,3)

What is the reason, and how can it be fixed?

Thanks
LCW
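
The broadcast error says the padding block (186 frames) is shorter than the slice it must fill (238 frames). A shape-safe sketch of the null-frame padding step (an assumption about the intended behavior, not the repo's exact fix) repeats the valid prefix until it exactly fills the remainder:

    import numpy as np

    def pad_null_frames(person, first_null):
        # person: (T, V, C); frames from `first_null` onward are all zeros.
        rest = len(person) - first_null
        reps = int(np.ceil(rest / first_null))
        person[first_null:] = np.tile(person[:first_null], (reps, 1, 1))[:rest]
        return person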

When will the pretrained models be released?

Hello, thank you very much for open-sourcing this work.
As the title says, I found during testing that no pretrained model is provided, so I have to retrain the model myself.
May I ask when the pretrained models will be released?
Thanks again!

I have a few questions.

1. Why are training and testing split into the following two steps and then fused? As I understand it, train_bone should contain only the bone data.

python main.py --config ./config/nturgbd-cross-view/train_joint.yaml
python main.py --config ./config/nturgbd-cross-view/train_bone.yaml

2. Can you provide the pretrained models?

3. How long does training take?

4. My training accuracy on the Kinetics-Skeleton dataset is only 26%. How can I improve it?

I got some errors when training the net

First I got one error. After commenting out the offending parameter, I got the error below, which I don't know how to solve. Could you give me some advice?

Traceback (most recent call last):
  File "/home/sues/Desktop/2s-AGCN-master/main.py", line 550, in <module>
    processor.start()
  File "/home/sues/Desktop/2s-AGCN-master/main.py", line 491, in start
    self.train(epoch, save_model=save_model)
  File "/home/sues/Desktop/2s-AGCN-master/main.py", line 372, in train
    loss.backward()
  File "/home/sues/anaconda3/envs/2sAGCN/lib/python3.5/site-packages/torch/autograd/variable.py", line 167, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/sues/anaconda3/envs/2sAGCN/lib/python3.5/site-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)
RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/torch/lib/THC/generic/THCTensorMath.cu:26
/pytorch/torch/lib/THCUNN/ClassNLLCriterion.cu:101: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
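This device-side assert almost always means a target label falls outside [0, num_class) for the classifier. A quick sanity check, assuming the generated label file pickles a (names, labels) pair as the file names elsewhere in this page suggest:

    import pickle

    with open('data/ntu/xsub/train_label.pkl', 'rb') as f:
        names, labels = pickle.load(f)
    # Expect 0 .. num_class - 1; anything outside triggers the CUDA assert.
    print('label range:', min(labels), max(labels))

If the range is off, check that num_class in the model config matches the dataset (60 for NTU RGB+D, 400 for Kinetics-Skeleton).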

ValueError: invalid literal for int() with base 10: ''

Thank you for sharing the code.

When I run the command python ntu_gendata.py, I get:
xsub train
96%|██████████████████████████████████▍ | 24466/25614 [03:52<00:11, 104.06it/s]
Traceback (most recent call last):
  File "ntu_gendata.py", line 166, in <module>
    part=p)
  File "ntu_gendata.py", line 137, in gendata
    data = read_xyz(os.path.join(data_path, s), max_body=max_body_kinect, num_joint=num_joint)
  File "ntu_gendata.py", line 74, in read_xyz
    seq_info = read_skeleton_filter(file)
  File "ntu_gendata.py", line 25, in read_skeleton_filter
    skeleton_sequence['numFrame'] = int(f.readline())
ValueError: invalid literal for int() with base 10: ''

Environment: PyTorch 0.4.0, CUDA 8, cuDNN 5.1, Ubuntu 16.04.

I need your help!
Thanks a lot.
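
The empty string comes from f.readline() on a file whose first line should be the frame count, so one of the raw .skeleton files is likely empty or truncated. A hedged debugging sketch to find it (the raw-data path is an assumption):

    import os

    data_path = 'data/nturgbd_raw/nturgb+d_skeletons'
    for name in sorted(os.listdir(data_path)):
        with open(os.path.join(data_path, name)) as f:
            first_line = f.readline().strip()
        if not first_line.isdigit():
            print('suspect file:', name)

Re-downloading any file this flags, or adding it to the ignored-samples list, should let the generation finish.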

Error when running the code on my machine: OverflowError: cannot serialize a bytes object larger than 4 GiB

I want to run this code on my machine with PyTorch 1.2.0. I got this:

File "C:\Program Files\JetBrains\PyCharm 2019.1.3\helpers\pydev\pydevd.py", line 1758, in <module>
    main()
File "C:\Program Files\JetBrains\PyCharm 2019.1.3\helpers\pydev\pydevd.py", line 1752, in main
    globals = debugger.run(setup['file'], None, None, is_module)
File "C:\Program Files\JetBrains\PyCharm 2019.1.3\helpers\pydev\pydevd.py", line 1147, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
File "C:\Program Files\JetBrains\PyCharm 2019.1.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "D:/Documents/PycharmProjects/2s-AGCN-master/main.py", line 549, in <module>
    processor.start()
File "D:/Documents/PycharmProjects/2s-AGCN-master/main.py", line 490, in start
    self.train(epoch, save_model=save_model)
File "D:/Documents/PycharmProjects/2s-AGCN-master/main.py", line 351, in train
    for batch_idx, (data, label, index) in enumerate(process):
File "D:\ProgramData\Anaconda3\lib\site-packages\tqdm\std.py", line 1081, in __iter__
    for obj in iterable:
File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
    return _MultiProcessingDataLoaderIter(self)
File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
    w.start()
File "D:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
File "D:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
File "D:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
File "D:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
File "D:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
OverflowError: cannot serialize a bytes object larger than 4 GiB

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "D:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Training strategy of graph topology

Hi @lshiwjx,
In the MS-AAGCN paper, it is mentioned that the training strategy for learning the graph topology is to initialize B_k with A_k and to block the propagation of gradients to B_k in the early stage of training, until training stabilizes.
Is this implemented in the currently released code? If not, could you please guide me on how to do it? Thanks!
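
For what it's worth, a minimal sketch of that strategy, assuming the learnable adjacency is the PA parameter (the name that appears in the checkpoint keys quoted earlier on this page) and freeze_epochs is a hypothetical hyperparameter:

    def set_topology_trainable(model, trainable):
        # Block or allow gradient flow into every learnable adjacency.
        for module in model.modules():
            if hasattr(module, 'PA'):
                module.PA.requires_grad = trainable

    # In the training loop:
    #     set_topology_trainable(model, epoch >= freeze_epochs)

Initializing B_k with A_k would then just be copying the fixed graph into PA at construction time.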

Memory overloading issue

First of all, thanks a lot for making your code public. I am trying to run an experiment on the NTU RGB+D 120 dataset, with the data split into training and testing for the cross-subject protocol as given in the NTU RGB+D 120 paper: 63026 training samples and 54702 testing samples. I am trying to train the model on a GPU cluster, but after running for one epoch the job exceeds its memory limit.
I try to clear the cache explicitly using gc.collect(), but memory use still keeps growing.
It will be great if you can help regarding this.

Update PyTorch

Dear author,
Can you update the code to PyTorch 1.0?
I cannot install pytorch=0.3 with CUDA 10.1.
