vmz's Issues

Error while extracting features

WARNING:caffe2.python.workspace:Original python traceback for operator 1 in network Extract Features_init in exception above (most recent call last):
WARNING:caffe2.python.workspace:  File "tools/extract_features.py", line 292, in <module>
WARNING:caffe2.python.workspace:  File "tools/extract_features.py", line 287, in main
WARNING:caffe2.python.workspace:  File "tools/extract_features.py", line 118, in ExtractFeatures
WARNING:caffe2.python.workspace:  File "/home/ubuntu/pytorch/build/caffe2/python/data_parallel_model.py", line 32, in Parallelize_GPU
WARNING:caffe2.python.workspace:  File "/home/ubuntu/pytorch/build/caffe2/python/data_parallel_model.py", line 209, in Parallelize
WARNING:caffe2.python.workspace:  File "tools/extract_features.py", line 110, in create_model_ops
WARNING:caffe2.python.workspace:  File "/home/ubuntu/dedup/R2Plus1D/lib/models/model_builder.py", line 128, in build_model
WARNING:caffe2.python.workspace:  File "/home/ubuntu/dedup/R2Plus1D/lib/models/r3d_model.py", line 99, in create_model
WARNING:caffe2.python.workspace:  File "/home/ubuntu/dedup/R2Plus1D/lib/models/r3d_model.py", line 145, in create_r3d
WARNING:caffe2.python.workspace:  File "/home/ubuntu/pytorch/build/caffe2/python/cnn.py", line 86, in ConvNd
WARNING:caffe2.python.workspace:  File "/home/ubuntu/pytorch/build/caffe2/python/brew.py", line 107, in scope_wrapper
WARNING:caffe2.python.workspace:  File "/home/ubuntu/pytorch/build/caffe2/python/helpers/conv.py", line 164, in conv_nd
WARNING:caffe2.python.workspace:  File "/home/ubuntu/pytorch/build/caffe2/python/helpers/conv.py", line 88, in _ConvBase
WARNING:caffe2.python.workspace:  File "/home/ubuntu/pytorch/build/caffe2/python/model_helper.py", line 216, in create_param
WARNING:caffe2.python.workspace:  File "/home/ubuntu/pytorch/build/caffe2/python/modeling/initializers.py", line 30, in create_param
Traceback (most recent call last):
  File "tools/extract_features.py", line 292, in <module>
    main()
  File "tools/extract_features.py", line 287, in main
    ExtractFeatures(args)
  File "tools/extract_features.py", line 121, in ExtractFeatures
    workspace.RunNetOnce(model.param_init_net)
  File "/home/ubuntu/pytorch/build/caffe2/python/workspace.py", line 201, in RunNetOnce
    StringifyProto(net),
  File "/home/ubuntu/pytorch/build/caffe2/python/workspace.py", line 180, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at operator.cc:209] op. Cannot create operator of type 'MSRAFill' on the device 'CUDA'. Verify that implementation for the corresponding device exist. It might also happen if the binary is not linked with the operator implementation code. If Python frontend is used it might happen if dyndep.InitOpsLibrary call is missing. Operator def: output: "gpu_0/conv1_middle_w" name: "" type: "MSRAFill" arg { name: "shape" ints: 45 ints: 3 ints: 1 ints: 7 ints: 7 } device_option { device_type: 1 cuda_gpu_id: 0 }

I'm on Ubuntu 16.04. This happens when I run the following:

python tools/extract_features.py --test_data=../dups_lmdb_data --model_name=r2plus1d --model_depth=34 --clip_length_rgb=32 --batch_size=4 --load_model_path=./trained/r2.5d_d34_l32_ft_sports1m.pkl --output_path=dups_features.pkl --features=softmax,final_avg,video_id --sanity_check=0 --get_video_id=1 --use_local_file=1 --num_labels=400

I'm on an AWS general-purpose compute machine (it doesn't have a GPU).

Error when finetuning HMDB51

Hi,
When I tried to finetune the pre-trained model on HMDB51 by running the script tools/train_net.py, I got this error over and over again, and it never continued to the training part. It seems like there is something wrong with the video processing or the videos themselves. I'm confused and don't know what to do. Did I miss something? Should I pre-process the videos like we do with Kinetics?

INFO:data_parallel_model:Creating checkpoint synchronization net
INFO:data_parallel_model:Run checkpoint net
INFO:train_net:Starting epoch 0/8
[mpeg4 @ 0x7d39cc080240] Video uses a non-standard and wasteful way to store B-frames ('packed B-frames'). Consider using the mpeg4_unpack_bframes bitstream filter without encoding but stream copy to fix it.
[mpeg4 @ 0x7d39c00aaca0] Video uses a non-standard and wasteful way to store B-frames ('packed B-frames'). Consider using the mpeg4_unpack_bframes bitstream filter without encoding but stream copy to fix it.
[swscaler @ 0x7d39cc0a3e40] Warning: data is not aligned! This can lead to a speedloss
(the same packed-B-frames warning repeats for many more decoder instances)
........

What happens when I pass a different crop size during feature extraction than the crop size used in training?

Hi,

I am using the pre-trained models provided in this repo, to extract features. I have a couple of questions regarding the crop-size argument:

1) If I let the crop-size be the default (i.e. 112), then it crops the video to that size instead of resizing, right?
2) What if I don't want to crop the video because my video's dimensions are much larger? What happens if I pass a different value for crop-size during feature extraction?

TIA,
Rahul

Error when finetuning UCF101

Hi,

I'm trying to finetune using the script "finetune_ucf101.sh", following the tutorial https://github.com/facebookresearch/R2Plus1D/blob/master/tutorials/hmdb51_finetune.md.

The error follows:
Traceback (most recent call last):
  File "tools/train_net.py", line 501, in <module>
    main()
  File "tools/train_net.py", line 496, in main
    Train(args)
  File "tools/train_net.py", line 280, in Train
    net_type=('prof_dag' if args.profiling == 1 else 'dag'),
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/data_parallel_model.py", line 31, in Parallelize_GPU
    Parallelize(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/data_parallel_model.py", line 207, in Parallelize
    input_builder_fun(model_helper_obj)
  File "tools/train_net.py", line 268, in add_video_input
    use_local_file=args.use_local_file,
  File "/home/murilo/R2Plus1D/lib/utils/model_helper.py", line 127, in AddVideoInput
    data, label = model.net.VideoInput(
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/core.py", line 2054, in __getattr__
    ",".join(workspace.C.nearby_opnames(op_type)) + ']'
AttributeError: Method VideoInput is not a registered operator. Did you mean: []

Kinetics preprocessing

In the Kinetics training tutorial, we need to pre-process the videos into clips (of 10 seconds). Since the number of videos is huge and the length of each video varies, the preprocessing may take a long time.
Can you provide your script for this? (A hedged sketch of one possible approach is shown below.)
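Not the authors' script, but a minimal sketch (assuming ffmpeg is installed) of one way to do this preprocessing: cut a 10-second clip from each video at an annotated start time. The annotation list and paths below are hypothetical.

# Minimal sketch (not the authors' script): trim each video to a 10-second clip
# with ffmpeg, given a (filename, start_sec) annotation list.
import os
import subprocess

def trim_clip(src_path, dst_path, start_sec, duration_sec=10):
    # Seek to start_sec and keep duration_sec seconds; stream copy avoids re-encoding
    # (re-encode instead if the cuts must be frame-accurate).
    cmd = ["ffmpeg", "-y", "-ss", str(start_sec), "-i", src_path,
           "-t", str(duration_sec), "-c", "copy", dst_path]
    subprocess.check_call(cmd)

if __name__ == "__main__":
    annotations = [("abc123.mp4", 4.0), ("def456.mp4", 12.5)]  # hypothetical examples
    if not os.path.isdir("clips"):
        os.makedirs("clips")
    for name, start in annotations:
        trim_clip(os.path.join("videos", name), os.path.join("clips", name), start)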

Error occurred when trying to finetune the pre-trained model on HMDB51

Hey,
I am trying to finetune the model on HMDB51, but when I run the script tools/train_net.py, an error occurs. Do you know how to fix it? Which version of Caffe2 do you use, the official Caffe2 or your personal fork?

Traceback (most recent call last):
  File "train_net.py", line 501, in <module>
    main()
  File "train_net.py", line 496, in main
    Train(args)
  File "train_net.py", line 280, in Train
    net_type=('prof_dag' if args.profiling == 1 else 'dag'),
  File "/home/wenjie/anaconda2/envs/caffe2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 32, in Parallelize_GPU
    Parallelize(*args, **kwargs)
  File "/home/wenjie/anaconda2/envs/caffe2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 208, in Parallelize
    input_builder_fun(model_helper_obj)
  File "train_net.py", line 268, in add_video_input
    use_local_file=args.use_local_file,
  File "/home/wenjie/R2Plus1D/utils/model_helper.py", line 127, in AddVideoInput
    data, label = model.net.VideoInput(
  File "/home/wenjie/anaconda2/envs/caffe2/lib/python2.7/site-packages/caffe2/python/core.py", line 2067, in __getattr__
    ",".join(workspace.C.nearby_opnames(op_type)) + ']'
AttributeError: Method VideoInput is not a registered operator. Did you mean: []

Detection of events in sport games

Hello everybody, and thanks for this nice work!

I plan on implementing an event detection system capable of detecting events in basketball. Do you think this work could perform such detection on high-resolution video?

Thanks for any suggestions!

Best
Alexander

AttributeError: Method VideoInput is not a registered operator. Did you mean: []

Hi,
Thank you for providing the open-source code! I installed GPU support as per the tutorial, but I get an error when I execute the feature-extraction script.

AttributeError: Method VideoInput is not a registered operator. Did you mean: []

I tried to debug it and searched for reports of the same error. I hope to get your help!

Unclear structure of the list file that is required for feature extraction

The structure of the list file required for feature extraction is:
org_video,label,start_frm,video_id

I am confused about the video_id here. According to the feature-extraction tutorial, each clip has a different video_id, but according to the list files available on the Dropbox here, all the clips from the same video should have the same video_id. I personally think the latter is correct.

Can you help me here? Also, why do the files at that Dropbox link have a start_frm difference of 1 frame? That's as if each frame were treated as the start of a clip, right?

Thanks

Caffe equivalent of iter_size in Caffe2?

The iter_size parameter in Caffe's solver defers the parameter update until gradients from iter_size forward/backward passes have been accumulated, effectively allowing a larger batch size without hitting the GPU memory limit.
Is there an equivalent in Caffe2 that can be used with optimizer.build_sgd? (A conceptual sketch of what iter_size does follows below.)
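For reference, a framework-agnostic sketch of what iter_size does (plain gradient accumulation); this is not Caffe2 API code, and whether optimizer.build_sgd exposes an equivalent knob is exactly the open question here.

# Conceptual sketch of Caffe's iter_size: sum gradients over iter_size small
# forward/backward passes, then apply a single parameter update, emulating a
# batch that is iter_size times larger.
import numpy as np

def sgd_with_iter_size(params, batches, grad_fn, lr=0.01, iter_size=4):
    # params: dict name -> np.ndarray; grad_fn(params, batch) -> dict of gradients.
    accum = {name: np.zeros_like(p) for name, p in params.items()}
    for i, batch in enumerate(batches, start=1):
        grads = grad_fn(params, batch)        # forward + backward on a small batch
        for name in params:
            accum[name] += grads[name]
        if i % iter_size == 0:                # update only every iter_size passes
            for name in params:
                params[name] -= lr * accum[name] / iter_size
                accum[name][:] = 0.0
    return params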

Feature vector dimensions

Hi everyone,
The paper says that the dimension of the fc layer is 400 for Kinetics and 512 for the pooling layer. Is that right? Does any model offer a 4096-dimensional feature vector? Can we perform feature extraction on our own videos with this network for use in other applications?
Thanks

What exactly is top5 accuracy?

Hi,

I couldn't find a clear definition of top-5 accuracy in the paper. Based on my understanding of the code, it looks like a prediction is considered correct if any of the top 5 softmax values corresponds to the correct label. Can someone help me verify this and give me a clear definition of top-5 accuracy? (A small sketch of this reading is shown below.)

Thanks,
Rahul Bhojwani
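For what it's worth, the usual definition matches that reading: a clip counts as correct if its true label is among the k=5 highest-scoring classes. A small self-contained sketch:

# Top-k accuracy over softmax outputs: a clip is correct if its true label is
# among the k classes with the highest scores.
import numpy as np

def top_k_accuracy(softmax_scores, labels, k=5):
    # softmax_scores: (N, num_classes) array; labels: (N,) int array.
    top_k = np.argsort(softmax_scores, axis=1)[:, -k:]   # indices of the k largest scores
    hits = np.any(top_k == labels[:, None], axis=1)
    return hits.mean()

# Example with 3 clips and 4 classes, using k=2 for brevity.
scores = np.array([[0.1, 0.2, 0.3, 0.4],
                   [0.7, 0.1, 0.1, 0.1],
                   [0.25, 0.25, 0.25, 0.25]])
labels = np.array([0, 2, 3])
print(top_k_accuracy(scores, labels, k=2))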

Download the models

Thank you for sharing the project with us, but I am sorry to say I couldn't download the pre-trained models. Could you give me another way to download them? Thanks again.

Warnings/errors when finetuning UCF101

Hi,

I'm trying to reproduce the UCF101 results, but when I fine-tune on the UCF101 dataset, the messages below are displayed and the accuracy is very low.

Displayed Messages:
1. Invalid return value 0 for stream protocol
2. [NULL @ 0x7dcac81069c0] Failed to parse extradata
3. [mpeg4 @ 0x7dcacc0a3600] Video uses a non-standard and wasteful way to store B-frames ('packed B-frames'). Consider using the mpeg4_unpack_bframes bitstream filter without encoding but stream copy to fix it.

File output:
https://pastebin.com/rqhZtxSD

Cannot create operator of type 'MSRAFill' on the device 'CUDA'

I noticed there are two similar issues, but the difference is that I have a GPU. I installed my Caffe2 following https://github.com/facebookresearch/R2Plus1D/blob/master/tutorials/Installation_guide.md, and my error message is as follows.

Traceback (most recent call last):
  File "/data/xiongcx/R2Plus1D/R2Plus1D-master/tools/train_net.py", line 509, in <module>
    main()
  File "/data/xiongcx/R2Plus1D/R2Plus1D-master/tools/train_net.py", line 504, in main
    Train(args)
  File "/data/xiongcx/R2Plus1D/R2Plus1D-master/tools/train_net.py", line 344, in Train
    workspace.RunNetOnce(test_model.param_init_net)
  File "/home/xiongcx/pytorch/build/caffe2/python/workspace.py", line 201, in RunNetOnce
    StringifyProto(net),
  File "/home/xiongcx/pytorch/build/caffe2/python/workspace.py", line 180, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at operator.cc:212] op. Cannot create operator of type 'MSRAFill' on the device 'CUDA'. Verify that implementation for the corresponding device exist. It might also happen if the binary is not linked with the operator implementation code. If Python frontend is used it might happen if dyndep.InitOpsLibrary call is missing. Operator def: output: "gpu_0/conv1_middle_w" name: "" type: "MSRAFill" arg { name: "shape" ints: 45 ints: 3 ints: 1 ints: 7 ints: 7 } device_option { device_type: 1 cuda_gpu_id: 0 }

Do you have any ideas?

(2+1)D Convolution implementation

I want to use (2+1)D convolution in my work, but I cannot find how to implement it. Can you please guide me to the right material to learn it?
Thanks
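Not this repo's Caffe2 implementation, but a minimal PyTorch sketch of the factorization described in the R(2+1)D paper: a full t x d x d 3D convolution is replaced by a spatial 1 x d x d convolution into M_i "middle" channels, a batch norm and ReLU, and then a temporal t x 1 x 1 convolution, with M_i chosen so the parameter count roughly matches the original 3D kernel.

# Minimal PyTorch sketch of a (2+1)D convolution block: spatial conv -> BN -> ReLU -> temporal conv.
import torch
import torch.nn as nn

class Conv2Plus1D(nn.Module):
    def __init__(self, in_channels, out_channels, t=3, d=3, stride=(1, 1, 1)):
        super(Conv2Plus1D, self).__init__()
        # Middle channel count so parameters roughly match a full t x d x d convolution.
        mid = (t * d * d * in_channels * out_channels) // (d * d * in_channels + t * out_channels)
        self.spatial = nn.Conv3d(in_channels, mid, kernel_size=(1, d, d),
                                 stride=(1, stride[1], stride[2]),
                                 padding=(0, d // 2, d // 2), bias=False)
        self.bn = nn.BatchNorm3d(mid)
        self.relu = nn.ReLU(inplace=True)
        self.temporal = nn.Conv3d(mid, out_channels, kernel_size=(t, 1, 1),
                                  stride=(stride[0], 1, 1),
                                  padding=(t // 2, 0, 0), bias=False)

    def forward(self, x):  # x: (N, C, T, H, W)
        return self.temporal(self.relu(self.bn(self.spatial(x))))

# Example: 64 -> 64 channels with t=3, d=3 gives 144 middle filters, matching the
# "Number of middle filters: 144" lines in the logs elsewhere on this page.
block = Conv2Plus1D(64, 64)
out = block(torch.randn(1, 64, 8, 56, 56))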

extract_features does not produce features for all videos due to error in calculation of num_iterations

I was trying to extract video descriptors from my videos using tools/extract_features.py.

I found the number of features in the output pickle file to be less than the number of videos in the lmdb file created using data/create_video_db.py.

After a bit of debugging, I suspect there is an error in the way num_iterations is calculated here in the ExtractFeatures function.

examples_per_iteration = args.batch_size * num_gpus
num_iterations = int(num_examples / examples_per_iteration)

For example let us say there are 84 videos in the lmdb file, and we use a batch_size=4 and num_gpus=2 then the current code shown above will set num_iterations = int(84 / (4 * 2)) = 10 and the output pickle file will contain only num_iterations * examples_per_iteration = 10 * 8 = 80 feature vectors instead of 84.

To rectify this, num_iterations should be calculated as follows:

num_iterations = int(np.ceil(float(num_examples) / examples_per_iteration))

This will set num_iterations to 11 instead of 10 and produce 88 feature vectors. I am trying to figure out how to discard the 4 extra/dummy feature vectors it produces, potentially based on the video_id (a hedged sketch of one approach is shown below).
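A hedged sketch of that deduplication, assuming (hypothetically) that the output pickle holds a dict of parallel arrays keyed by the names passed to --features, including video_id; the actual layout of extract_features.py's output may differ.

# Keep only the first row for each video_id, dropping padded/duplicate rows.
import pickle
import numpy as np

with open("dups_features.pkl", "rb") as f:
    feats = pickle.load(f)

video_ids = np.asarray(feats["video_id"]).ravel()
_, keep = np.unique(video_ids, return_index=True)   # first index of each video_id
keep = np.sort(keep)

deduped = {name: np.asarray(arr)[keep] for name, arr in feats.items()}
print("kept %d of %d rows" % (len(keep), len(video_ids)))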

build error

I have followed the exact steps in the installation guide, but when executing sudo make -j8 install, an error occurs.

[ 11%] Built target libprotobuf
Scanning dependencies of target common
Scanning dependencies of target mkrename
Traceback (most recent call last):
  File "/home/hope/pytorch/cmake/../aten/src/ATen/gen.py", line 5, in <module>
    import yaml
ImportError: No module named yaml
caffe2/CMakeFiles/ATEN_CPU_FILES_GEN_TARGET.dir/build.make:132: recipe for target 'aten/src/ATen/CPUByteType.cpp' failed
make[2]: *** [aten/src/ATen/CPUByteType.cpp] Error 1
CMakeFiles/Makefile2:1606: recipe for target 'caffe2/CMakeFiles/ATEN_CPU_FILES_GEN_TARGET.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/ATEN_CPU_FILES_GEN_TARGET.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 11%] Building C object sleef/src/common/CMakeFiles/common.dir/common.c.o
[ 11%] Building C object sleef/src/libm/CMakeFiles/mkrename.dir/mkrename.c.o
[ 11%] Built target common
[ 11%] Linking C executable ../../bin/mkrename
[ 11%] Built target mkrename
[ 11%] Built target python_copy_files
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2

Ubuntu 16.04
CUDA 8.0
CUDNN 6

ImportError: No module named models.model_builder

Hi,
I have built the environment according to the tutorial. I want to do a test with two small videos. I created the LMDB database through the script, but when I executed the script to extract the features, I got an error. I tried to modify the path and it still failed. I hope to get your help, thanks.

ThinkStation:~/R2Plus1D-master$ sh scripts/extract_feature_myvideos.sh
Traceback (most recent call last):
  File "tools/extract_features.py", line 21, in <module>
    import models.model_builder as model_builder
ImportError: No module named models.model_builder

SIGABRT while running extract features

ubuntu@ip-172-31-14-53:~/R2Plus1D$ python tools/extract_features.py --test_data=dupes_data --model_name=r2plus1d --model_depth=34 --clip_length_rgb=32 --gpus=0,1 --batch_size=4 --load_model_path=./trained/r2.5d_d34_l32_ft_sports1m.pkl --output_path=my_features.pkl --features=softmax,final_avg,video_id --sanity_check=0 --get_video_id=1 --use_local_file=1 --num_labels=400
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/sparse/lil.py:19: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from . import _csparsetools
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/sparse/csgraph/__init__.py:165: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._shortest_path import shortest_path, floyd_warshall, dijkstra,\
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/sparse/csgraph/_validation.py:5: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._tools import csgraph_to_dense, csgraph_from_dense,\
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/sparse/csgraph/__init__.py:167: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._traversal import breadth_first_order, depth_first_order, \
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/sparse/csgraph/__init__.py:169: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._min_spanning_tree import minimum_spanning_tree
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/sparse/csgraph/__init__.py:170: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._reordering import reverse_cuthill_mckee, maximum_bipartite_matching, \
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/spatial/__init__.py:95: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from .ckdtree import *
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/spatial/__init__.py:96: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from .qhull import *
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/spatial/_spherical_voronoi.py:18: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from . import _voronoi
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/spatial/distance.py:122: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from . import _hausdorff
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/linalg/basic.py:17: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._solve_toeplitz import levinson
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/linalg/__init__.py:207: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._decomp_update import *
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/special/__init__.py:640: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._ufuncs import *
/home/ubuntu/.local/lib/python2.7/site-packages/scipy/special/_ellip_harm.py:7: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._ellip_harm_2 import _ellipsoid, _ellipsoid_norm
Ignoring @/caffe2/caffe2/contrib/nccl:nccl_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops_gpu as it is not a valid file.
E0813 11:33:53.224505 16124 init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0813 11:33:53.224771 16124 init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0813 11:33:53.224789 16124 init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
Namespace(batch_size=4, clip_length_of=8, clip_length_rgb=32, clip_per_video=1, crop_size=112, db_type='pickle', decode_type=2, do_flow_aggregation=0, features='softmax,final_avg,video_id', flow_data_type=0, frame_gap_of=2, get_video_id=1, gpus='0,1', input_type=0, load_model_path='./trained/r2.5d_d34_l32_ft_sports1m.pkl', model_depth=34, model_name='r2plus1d', num_channels=3, num_decode_threads=4, num_iterations=-1, num_labels=400, output_path='my_features.pkl', sampling_rate_of=2, sampling_rate_rgb=1, sanity_check=0, scale_h=128, scale_w=171, test_data='dupes_data', use_cudnn=1, use_local_file=1)
INFO:feature_extractor:Namespace(batch_size=4, clip_length_of=8, clip_length_rgb=32, clip_per_video=1, crop_size=112, db_type='pickle', decode_type=2, do_flow_aggregation=0, features='softmax,final_avg,video_id', flow_data_type=0, frame_gap_of=2, get_video_id=1, gpus='0,1', input_type=0, load_model_path='./trained/r2.5d_d34_l32_ft_sports1m.pkl', model_depth=34, model_name='r2plus1d', num_channels=3, num_decode_threads=4, num_iterations=-1, num_labels=400, output_path='my_features.pkl', sampling_rate_of=2, sampling_rate_rgb=1, sanity_check=0, scale_h=128, scale_w=171, test_data='dupes_data', use_cudnn=1, use_local_file=1)
INFO:model_builder:Validated: r2plus1d with 34 layers
INFO:model_builder:with input 32x112x112
Running on GPUs: [0, 1]
INFO:feature_extractor:Running on GPUs: [0, 1]
WARNING:root:[====DEPRECATE WARNING====]: you are creating an object from CNNModelHelper class which will be deprecated soon. Please use ModelHelper object with brew module. For more information, please refer to caffe2.ai and python/brew.py, python/brew_test.py for more information.
WARNING:data_parallel_model:** Only 1 GPUs available, GPUs [0, 1] requested
INFO:data_parallel_model:Parallelizing model for devices: [0, 1]
INFO:data_parallel_model:Create input and model training operators
WARNING:data_parallel_model:
WARNING:data_parallel_model:############# WARNING #############
WARNING:data_parallel_model:Model Extract Features/<caffe2.python.cnn.CNNModelHelper object at 0x7fded641ff90> is used for testing/validation but
WARNING:data_parallel_model:has init_params=True!
WARNING:data_parallel_model:This can conflict with model training.
WARNING:data_parallel_model:Please ensure model = ModelHelper(init_params=False)
WARNING:data_parallel_model:####################################
WARNING:data_parallel_model:
INFO:data_parallel_model:Model for GPU : 0
INFO:model_helper:outputing rgb data
INFO:model_builder:creating r2plus1d, depth=34...
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 230
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 460
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 921
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:data_parallel_model:Model for GPU : 1
INFO:model_helper:outputing rgb data
INFO:model_builder:creating r2plus1d, depth=34...
(the same "Number of middle filters" lines repeat for GPU 1)
INFO:data_parallel_model:Parameter update function not defined --> only forward
terminate called without an active exception
*** Aborted at 1534160033 (unix time) try "date -d @1534160033" if you are using GNU date ***
PC: @     0x7fdf2acba428 gsignal
*** SIGABRT (@0x3e800003efc) received by PID 16124 (TID 0x7fdf2b473700) from PID 16124; stack trace: ***
    @     0x7fdf2b060390 (unknown)
    @     0x7fdf2acba428 gsignal
    @     0x7fdf2acbc02a abort
    @     0x7fdf1cb4684d __gnu_cxx::__verbose_terminate_handler()
    @     0x7fdf1cb446b6 (unknown)
    @     0x7fdf1cb44701 std::terminate()
    @     0x7fdf19267b00 caffe2::CUDAContext::~CUDAContext()
    @     0x7fdf19724c6e caffe2::FillerOp<>::~FillerOp()
    @     0x7fdf19770227 caffe2::MSRAFillOp<>::~MSRAFillOp()
    @     0x7fdf1b41df99 std::vector<>::~vector()
    @     0x7fdf1b43053f caffe2::SimpleNet::SimpleNet()
    @     0x7fdf1b431c8e _ZN6caffe210RegistererINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt10unique_ptrINS_7NetBaseESt14default_deleteIS8_EEJRKSt10shared_ptrIKNS_6NetDefEEPNS_9WorkspaceEEE14DefaultCreatorINS_9SimpleNetEEESB_SH_SJ_
    @     0x7fdf1b3a9203 std::_Function_handler<>::_M_invoke()
    @     0x7fdf1b3d9bf2 caffe2::CreateNet()
    @     0x7fdf1b3da61d caffe2::CreateNet()
    @     0x7fdf1b3f9f12 caffe2::Workspace::RunNetOnce()
    @     0x7fdf1c0d4d08 _ZZN6caffe26python16addGlobalMethodsERN8pybind116moduleEENKUlRKNS1_5bytesEE28_clES6_.isra.3053.constprop.3166
    @     0x7fdf1c0d4eb4 _ZZN8pybind1112cpp_function10initializeIZN6caffe26python16addGlobalMethodsERNS_6moduleEEUlRKNS_5bytesEE28_bJS8_EJNS_4nameENS_5scopeENS_7siblingEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNESQ_
    @     0x7fdf1c10781e pybind11::cpp_function::dispatcher()
    @           0x4c30ce PyEval_EvalFrameEx
    @           0x4b9ab6 PyEval_EvalCodeEx
    @           0x4c1e6f PyEval_EvalFrameEx
    @           0x4b9ab6 PyEval_EvalCodeEx
    @           0x4c1e6f PyEval_EvalFrameEx
    @           0x4b9ab6 PyEval_EvalCodeEx
    @           0x4c1e6f PyEval_EvalFrameEx
    @           0x4b9ab6 PyEval_EvalCodeEx
    @           0x4c1e6f PyEval_EvalFrameEx
    @           0x4b9ab6 PyEval_EvalCodeEx
    @           0x4eb30f (unknown)
    @           0x4e5422 PyRun_FileExFlags
    @           0x4e3cd6 PyRun_SimpleFileExFlags
Aborted (core dumped)

Has anyone else seen this issue? CC @dutran

FPS of video input

I was wondering: is it correct that you do not re-encode your video input at a specific FPS, but just keep the FPS that comes with each video? Does this mean that the models should work on video input at any FPS (within reasonable bounds, of course)?

Thanks!

CMakeFiles/Makefile2:1401: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/all' failed make[1]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2 Makefile:138: recipe for target 'all' failed make: *** [all] Error 2

Hi all,
I am trying to set up the conda environment for the project as described in the installation guide. I have been able to set up everything up to the Build Caffe2 step.
After cloning pytorch, when I run cmake in build I get the summary below. It shows CUDA 9.2, cuDNN 7.1.4, and other details such as OpenCV 3.4.0.
But further down it shows the following error:

Could not find a package configuration file provided by "Eigen3" with any
of the following names:
  Eigen3Config.cmake
  eigen3-config.cmake
Add the installation prefix of "Eigen3" to CMAKE_PREFIX_PATH or set
"Eigen3_DIR" to a directory containing one of the above files. If "Eigen3"
provides a separate development package or SDK, be sure it has been
installed.
Here is the summary of cmake ..

-- ******** Summary ********
-- General:
-- CMake version : 3.5.1
-- CMake command : /usr/bin/cmake
-- Git version : v0.1.11-9471-g0262fd0-dirty
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 5.4.0
-- BLAS : Eigen
-- CXX flags : -fvisibility-inlines-hidden -DONNX_NAMESPACE=onnx_c2 -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations
-- Build type : Release
-- Compile definitions :
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/local

-- BUILD_CAFFE2 : ON
-- BUILD_ATEN : OFF
-- BUILD_BINARY : ON
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : ON
-- Python version : 2.7.13
-- Python executable : /home/aafaq/yes/envs/pythorch1/bin/python
-- Pythonlibs version : 2.7.13
-- Python library : /home/aafaq/yes/envs/pythorch1/lib/python2.7
-- Python includes : /home/aafaq/yes/envs/pythorch1/include/python2.7
-- Python site-packages: lib/python2.7/site-packages
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : OFF
-- USE_ASAN : OFF
-- USE_ATEN : OFF
-- USE_CUDA : ON
-- CUDA static link : OFF
-- USE_CUDNN : ON
-- CUDA version : 9.2
-- cuDNN version : 7.1.4
-- CUDA root directory : /usr/local/cuda-9.2
-- CUDA library : /usr/local/cuda-9.2/lib64/stubs/libcuda.so
-- cudart library : /usr/local/cuda-9.2/lib64/libcudart_static.a;-pthread;dl;/usr/lib/x86_64-linux-gnu/librt.so
-- cublas library : /usr/local/cuda-9.2/lib64/libcublas.so;/usr/local/cuda-9.2/lib64/libcublas_device.a
-- cufft library : /usr/local/cuda-9.2/lib64/libcufft.so
-- curand library : /usr/local/cuda-9.2/lib64/libcurand.so
-- cuDNN library : /usr/local/cuda-9.2/lib64/libcudnn.so
-- nvrtc : /usr/local/cuda-9.2/lib64/libnvrtc.so
-- CUDA include path : /usr/local/cuda-9.2/include
-- NVCC executable : /usr/local/cuda-9.2/bin/nvcc
-- CUDA host compiler : /usr/bin/cc
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FFMPEG : ON
-- USE_GFLAGS : ON
-- USE_GLOG : ON
-- USE_GLOO : ON
-- USE_GLOO_IBVERBS : OFF
-- USE_LEVELDB : ON
-- LevelDB version : 1.18
-- Snappy version : 1.1.3
-- USE_LITE_PROTO : OFF
-- USE_LMDB : ON
-- LMDB version : 0.9.17
-- USE_METAL : OFF
-- USE_MKL :
-- USE_MOBILE_OPENGL : OFF
-- USE_MPI : ON
-- USE_NCCL : ON
-- USE_SYSTEM_NCCL : OFF
-- USE_NERVANA_GPU : OFF
-- USE_NNPACK : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : ON
-- OpenCV version : 3.4.0
-- USE_OPENMP : OFF
-- USE_PROF : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- Public Dependencies : Threads::Threads;gflags;glog::glog
-- Private Dependencies : nnpack;cpuinfo;/usr/lib/x86_64-linux-gnu/liblmdb.so;/usr/lib/x86_64-linux-gnu/libleveldb.so;/usr/lib/x86_64-linux-gnu/libsnappy.so;/usr/lib/x86_64-linux-gnu/libnuma.so;opencv_core;opencv_highgui;opencv_imgproc;opencv_imgcodecs;opencv_videoio;opencv_video;/home/aafaq/yes/lib/libavcodec.so;/home/aafaq/yes/lib/libavformat.so;/home/aafaq/yes/lib/libavutil.so;/home/aafaq/yes/lib/libswscale.so;/usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so;gloo;onnxifi_loader;gcc_s;gcc;dl
-- Configuring done
-- Generating done
-- Build files have been written to: /home/aafaq/yes/envs/pythorch1/pytorch/build

But later on, while installing, it shows me this warning:

/home/aafaq/yes/envs/pythorch1/pytorch/caffe2/core/common_cudnn.h:25:17: note: #pragma message: We strongly encourage you to move to 6.0 and above.
#pragma message "We strongly encourage you to move to 6.0 and above."
^
/home/aafaq/yes/envs/pythorch1/pytorch/caffe2/core/common_cudnn.h:26:17: note: #pragma message: This message is intended to annoy you enough to update.
#pragma message "This message is intended to annoy you enough to update."

However, the summary showed that the cuDNN version is 7.1.4.

Lastly, it ends with this error:
CMakeFiles/Makefile2:1401: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2

Desperately looking for a solution.

Thanks

Regards

Parameter epoch_size greater than dataset size in train phase

Hi,

In the paper, you said "Although Kinetics has only about 240k training videos, we set epoch size to be 1M for temporal jittering".

I would like to know what happens in this case: does the video reader start again from the beginning of the dataset? If so, it would process the dataset about 4 times per epoch, each time getting a different clip from each video via temporal jittering, correct?

I tried to find this information by reading the source code in the folder pytorch/caffe2/video/, but I still have doubts.

Thanks, Murilo.

Different test accuracy when testing the same dataset

Hi,
@dutran, thank you for your great work.
I finetuned the R(2+1)D model on my own dataset using train_net.py and got a best test accuracy of 0.72, with the corresponding model r2plus1d_3.mdl. However, when I use the same test dataset and r2plus1d_3.mdl to run test_net.py, the test accuracy is low, about 0.2. I also tried extracting features with extract_features.py and then computing the test accuracy with dense_prediction_aggregation.py; that accuracy is even lower, at most 0.11.
This confuses me. Why is the test accuracy so different?
I know the value of decode_type may influence the test accuracy, but are there any other reasons that could affect it? Could you give me some advice? Thank you.

Training .mdl to Predictor

Hey team,

Thanks for all your awesome work in Spatio-Temporal CNNs! I had a couple of quick questions about going from a trained model (.mdl format) to a standalone predictor.

I was able to successfully build the environment and train the R(2+1)D model on my own dataset, but now that I've got my trained .mdl file, I've been struggling to find documentation on how to go from the trained/training model that takes in LMDBs to a 'predictor' model that can take plain np array inputs.

I've currently got my .mdl loaded into an init_net and predict_net as suggested by much of the documentation I was able to find:

init_net proto:

input: "!!PREDICTOR_DBREADER"
output: "gpu_0/conv1_middle_w"
output: "gpu_0/conv1_middle_spatbn_relu_s"
output: "gpu_0/conv1_middle_spatbn_relu_b"
output: "gpu_0/conv1_w"
output: "gpu_0/conv1_spatbn_relu_s"
output: "gpu_0/conv1_spatbn_relu_b"
output: "gpu_0/comp_0_conv_1_middle_w"
output: "gpu_0/comp_0_spatbn_1_middle_s"
...
name: ""
type: "Load"

first two layers in predict_net proto:

input: "r2plus1d_train_init/CreateDB"
output: "gpu_0/data"
output: "gpu_0/label"
name: "data"
type: "VideoInput"
arg {
... bunch of VideoInput args ...
}
device_option {
  device_type: 1
  cuda_gpu_id: 0
}
input: "gpu_0/data"
output: "gpu_0/data"
name: ""
type: "StopGradient"
device_option {
  device_type: 1
  cuda_gpu_id: 0
}
{convolutional layers, etc...}

My questions are:
a) how would I go about modifying the net(s) to allow for individual predictions from raw np arrays? Ex:

data = np.random.rand(N, C, H, W)  # aka np arrays of video frames
workspace.FeedBlob("gpu0_data", data)

and b) how would I switch from GPU to CPU execution? Is there something like workspace.RunAllOnCPU() for example?

Thanks!
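Not an official recipe, but a hedged sketch of the general Caffe2 pattern for (a) and (b): feed a numpy clip into the input blob and run the predict net, and rewrite every op's device_option so everything runs on the CPU. The blob name gpu_0/data comes from the protos above; the output blob name and the assumption that the VideoInput op has been removed from predict_net (since data is fed manually) are mine.

# Hedged sketch (not the repo's documented workflow). Assumes the weights have
# already been loaded into the workspace (e.g. by running the init net) and that
# the VideoInput op has been stripped from predict_net.
import numpy as np
from caffe2.proto import caffe2_pb2
from caffe2.python import core, workspace

def force_cpu(net_def):
    # (b) point the net and every op in it at the CPU device.
    net_def.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CPU))
    for op in net_def.op:
        op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CPU))
    return net_def

def predict(predict_net, clip):
    # (a) run one forward pass on a numpy clip of shape (N, C, T, H, W).
    workspace.FeedBlob("gpu_0/data", clip.astype(np.float32))
    workspace.RunNetOnce(predict_net)
    # Output blob name is an assumption; check predict_net.external_output.
    return workspace.FetchBlob("gpu_0/softmax")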

What could be the reason of the decreasing training accuracy?

Hey,

I finetuned the pretrained Sports-1M R2Plus1D model on a dataset. Below is what the training and testing accuracy look like (taken from the log that tools/train_net.py generates on completion).

(plot: train vs. test accuracy)

Can you help me understand what could be the reason for the training accuracy to decrease? Logically, training accuracy should never decrease with increasing epochs, right?

Segmentation fault while running the UCF101 finetuning

Finetuning on HMDB51 works fine for me, but once I tried UCF101 the model gives me a segmentation fault at the very beginning of the training process, right after model loading.
I've downloaded the dataset and extracted it to the correct path,
then used scripts/create_ucf101_lmdb.sh to create the LMDBs, and then I ran scripts/finetune_ucf101.sh,
and I get this:

Ignoring @/caffe2/caffe2/contrib/nccl:nccl_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops_gpu as it is not a valid file.
E0610 17:44:44.454202 260 init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0610 17:44:44.455108 260 init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0610 17:44:44.455152 260 init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
INFO:train_net:Namespace(base_learning_rate=0.0001, batch_size=1, clip_length_of=8, clip_length_rgb=32, crop_size=112, cudnn_workspace_limit_mb=64, db_type='pickle', display_iter=10, do_flow_aggregation=0, epoch_size=50000, file_store_path='.', flow_data_type=0, frame_gap_of=2, gamma=0.1, get_video_id=0, gpus='0,1,2,3', input_type=0, is_checkpoint=0, model_depth=34, model_name='r2plus1d', num_channels=3, num_decode_threads=4, num_epochs=16, num_gpus=1, num_labels=101, pred_layer_name=None, pretrained_model='/mnt/homedir/trandu/video_models/kinetics/l32/r2.5d_d34_l32.pkl', profiling=0, sampling_rate_of=2, sampling_rate_rgb=1, scale_h=128, scale_w=171, step_epoch=2, test_data='/data/users/trandu/datasets/ucf101_test01', train_data='/data/users/trandu/datasets/ucf101_train01', use_cudnn=1, use_dropout=0, use_local_file=0, weight_decay=0.005)
INFO:model_builder:Validated: r2plus1d with 34 layers
INFO:model_builder:with input 32x112x112
INFO:train_net:Running on GPUs: [0, 1, 2, 3]
INFO:train_net:Using epoch size: 50000
WARNING:root:[====DEPRECATE WARNING====]: you are creating an object from CNNModelHelper class which will be deprecated soon. Please use ModelHelper object with brew module. For more information, please refer to caffe2.ai and python/brew.py, python/brew_test.py for more information.
INFO:train_net:train set has 0 examples
INFO:data_parallel_model:Parallelizing model for devices: [0, 1, 2, 3]
INFO:data_parallel_model:Create input and model training operators
INFO:data_parallel_model:Model for GPU : 0
INFO:model_helper:outputing rgb data
INFO:model_builder:creating r2plus1d, depth=34...
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 230
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 460
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 921
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
(the same "Model for GPU", "outputing rgb data", "creating r2plus1d" and "Number of middle filters" lines repeat for GPUs 1, 2, and 3)
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Add initial parameter sync
WARNING:data_parallel_model:------- DEPRECATED API, please use data_parallel_model.OptimizeGradientMemory() -----
WARNING:memonger:NOTE: Executing memonger to optimize gradient memory
INFO:memonger:Memonger memory optimization took 0.120590925217 secs
WARNING:memonger:NOTE: Executing memonger to optimize gradient memory
INFO:memonger:Memonger memory optimization took 0.118865966797 secs
WARNING:memonger:NOTE: Executing memonger to optimize gradient memory
INFO:memonger:Memonger memory optimization took 0.114645004272 secs
WARNING:memonger:NOTE: Executing memonger to optimize gradient memory
INFO:memonger:Memonger memory optimization took 0.114379882812 secs
INFO:train_net:----- Create test net ----
WARNING:root:[====DEPRECATE WARNING====]: you are creating an object from CNNModelHelper class which will be deprecated soon. Please use ModelHelper object with brew module. For more information, please refer to caffe2.ai and python/brew.py, python/brew_test.py for more information.
INFO:train_net:test set has 3783 examples
INFO:data_parallel_model:Parallelizing model for devices: [0, 1, 2, 3]
INFO:data_parallel_model:Create input and model training operators
WARNING:data_parallel_model:
WARNING:data_parallel_model:############# WARNING #############
WARNING:data_parallel_model:Model r2plus1d_test/<caffe2.python.cnn.CNNModelHelper object at 0x7fb7426d3a90> is used for testing/validation but
WARNING:data_parallel_model:has init_params=True!
WARNING:data_parallel_model:This can conflict with model training.
WARNING:data_parallel_model:Please ensure model = ModelHelper(init_params=False)
WARNING:data_parallel_model:####################################
WARNING:data_parallel_model:
INFO:data_parallel_model:Model for GPU : 0
INFO:model_helper:outputing rgb data
INFO:model_builder:creating r2plus1d, depth=34...
(the same "Number of middle filters" lines repeat for the test net on GPU 0, and the whole block repeats for the remaining test-net GPUs)
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 921
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:data_parallel_model:Model for GPU : 3
INFO:model_helper:outputing rgb data
INFO:model_builder:creating r2plus1d, depth=34...
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 144
INFO:video_model:Number of middle filters: 230
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 288
INFO:video_model:Number of middle filters: 460
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 921
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:data_parallel_model:Parameter update function not defined --> only forward
INFO:model_loader:copying conv1_middle_w to gpu_0/conv1_middle_w
INFO:model_loader:copying conv1_middle_spatbn_relu_s to gpu_0/conv1_middle_spatbn_relu_s
INFO:model_loader:copying conv1_middle_spatbn_relu_b to gpu_0/conv1_middle_spatbn_relu_b
INFO:model_loader:copying conv1_w to gpu_0/conv1_w
INFO:model_loader:copying conv1_spatbn_relu_s to gpu_0/conv1_spatbn_relu_s
INFO:model_loader:copying conv1_spatbn_relu_b to gpu_0/conv1_spatbn_relu_b
INFO:model_loader:copying comp_0_conv_1_middle_w to gpu_0/comp_0_conv_1_middle_w
INFO:model_loader:copying comp_0_spatbn_1_middle_s to gpu_0/comp_0_spatbn_1_middle_s
INFO:model_loader:copying comp_0_spatbn_1_middle_b to gpu_0/comp_0_spatbn_1_middle_b
INFO:model_loader:copying comp_0_conv_1_w to gpu_0/comp_0_conv_1_w
INFO:model_loader:copying comp_0_spatbn_1_s to gpu_0/comp_0_spatbn_1_s
INFO:model_loader:copying comp_0_spatbn_1_b to gpu_0/comp_0_spatbn_1_b
INFO:model_loader:copying comp_0_conv_2_middle_w to gpu_0/comp_0_conv_2_middle_w
INFO:model_loader:copying comp_0_spatbn_2_middle_s to gpu_0/comp_0_spatbn_2_middle_s
INFO:model_loader:copying comp_0_spatbn_2_middle_b to gpu_0/comp_0_spatbn_2_middle_b
INFO:model_loader:copying comp_0_conv_2_w to gpu_0/comp_0_conv_2_w
INFO:model_loader:copying comp_0_spatbn_2_s to gpu_0/comp_0_spatbn_2_s
INFO:model_loader:copying comp_0_spatbn_2_b to gpu_0/comp_0_spatbn_2_b
INFO:model_loader:copying comp_1_conv_1_middle_w to gpu_0/comp_1_conv_1_middle_w
INFO:model_loader:copying comp_1_spatbn_1_middle_s to gpu_0/comp_1_spatbn_1_middle_s
INFO:model_loader:copying comp_1_spatbn_1_middle_b to gpu_0/comp_1_spatbn_1_middle_b
INFO:model_loader:copying comp_1_conv_1_w to gpu_0/comp_1_conv_1_w
INFO:model_loader:copying comp_1_spatbn_1_s to gpu_0/comp_1_spatbn_1_s
INFO:model_loader:copying comp_1_spatbn_1_b to gpu_0/comp_1_spatbn_1_b
INFO:model_loader:copying comp_1_conv_2_middle_w to gpu_0/comp_1_conv_2_middle_w
INFO:model_loader:copying comp_1_spatbn_2_middle_s to gpu_0/comp_1_spatbn_2_middle_s
INFO:model_loader:copying comp_1_spatbn_2_middle_b to gpu_0/comp_1_spatbn_2_middle_b
INFO:model_loader:copying comp_1_conv_2_w to gpu_0/comp_1_conv_2_w
INFO:model_loader:copying comp_1_spatbn_2_s to gpu_0/comp_1_spatbn_2_s
INFO:model_loader:copying comp_1_spatbn_2_b to gpu_0/comp_1_spatbn_2_b
INFO:model_loader:copying comp_2_conv_1_middle_w to gpu_0/comp_2_conv_1_middle_w
INFO:model_loader:copying comp_2_spatbn_1_middle_s to gpu_0/comp_2_spatbn_1_middle_s
INFO:model_loader:copying comp_2_spatbn_1_middle_b to gpu_0/comp_2_spatbn_1_middle_b
INFO:model_loader:copying comp_2_conv_1_w to gpu_0/comp_2_conv_1_w
INFO:model_loader:copying comp_2_spatbn_1_s to gpu_0/comp_2_spatbn_1_s
INFO:model_loader:copying comp_2_spatbn_1_b to gpu_0/comp_2_spatbn_1_b
INFO:model_loader:copying comp_2_conv_2_middle_w to gpu_0/comp_2_conv_2_middle_w
INFO:model_loader:copying comp_2_spatbn_2_middle_s to gpu_0/comp_2_spatbn_2_middle_s
INFO:model_loader:copying comp_2_spatbn_2_middle_b to gpu_0/comp_2_spatbn_2_middle_b
INFO:model_loader:copying comp_2_conv_2_w to gpu_0/comp_2_conv_2_w
INFO:model_loader:copying comp_2_spatbn_2_s to gpu_0/comp_2_spatbn_2_s
INFO:model_loader:copying comp_2_spatbn_2_b to gpu_0/comp_2_spatbn_2_b
INFO:model_loader:copying comp_3_conv_1_middle_w to gpu_0/comp_3_conv_1_middle_w
INFO:model_loader:copying comp_3_spatbn_1_middle_s to gpu_0/comp_3_spatbn_1_middle_s
INFO:model_loader:copying comp_3_spatbn_1_middle_b to gpu_0/comp_3_spatbn_1_middle_b
INFO:model_loader:copying comp_3_conv_1_w to gpu_0/comp_3_conv_1_w
INFO:model_loader:copying comp_3_spatbn_1_s to gpu_0/comp_3_spatbn_1_s
INFO:model_loader:copying comp_3_spatbn_1_b to gpu_0/comp_3_spatbn_1_b
INFO:model_loader:copying comp_3_conv_2_middle_w to gpu_0/comp_3_conv_2_middle_w
INFO:model_loader:copying comp_3_spatbn_2_middle_s to gpu_0/comp_3_spatbn_2_middle_s
INFO:model_loader:copying comp_3_spatbn_2_middle_b to gpu_0/comp_3_spatbn_2_middle_b
INFO:model_loader:copying comp_3_conv_2_w to gpu_0/comp_3_conv_2_w
INFO:model_loader:copying comp_3_spatbn_2_s to gpu_0/comp_3_spatbn_2_s
INFO:model_loader:copying comp_3_spatbn_2_b to gpu_0/comp_3_spatbn_2_b
INFO:model_loader:copying shortcut_projection_3_w to gpu_0/shortcut_projection_3_w
INFO:model_loader:copying shortcut_projection_3_spatbn_s to gpu_0/shortcut_projection_3_spatbn_s
INFO:model_loader:copying shortcut_projection_3_spatbn_b to gpu_0/shortcut_projection_3_spatbn_b
INFO:model_loader:copying comp_4_conv_1_middle_w to gpu_0/comp_4_conv_1_middle_w
INFO:model_loader:copying comp_4_spatbn_1_middle_s to gpu_0/comp_4_spatbn_1_middle_s
INFO:model_loader:copying comp_4_spatbn_1_middle_b to gpu_0/comp_4_spatbn_1_middle_b
INFO:model_loader:copying comp_4_conv_1_w to gpu_0/comp_4_conv_1_w
INFO:model_loader:copying comp_4_spatbn_1_s to gpu_0/comp_4_spatbn_1_s
INFO:model_loader:copying comp_4_spatbn_1_b to gpu_0/comp_4_spatbn_1_b
INFO:model_loader:copying comp_4_conv_2_middle_w to gpu_0/comp_4_conv_2_middle_w
INFO:model_loader:copying comp_4_spatbn_2_middle_s to gpu_0/comp_4_spatbn_2_middle_s
INFO:model_loader:copying comp_4_spatbn_2_middle_b to gpu_0/comp_4_spatbn_2_middle_b
INFO:model_loader:copying comp_4_conv_2_w to gpu_0/comp_4_conv_2_w
INFO:model_loader:copying comp_4_spatbn_2_s to gpu_0/comp_4_spatbn_2_s
INFO:model_loader:copying comp_4_spatbn_2_b to gpu_0/comp_4_spatbn_2_b
INFO:model_loader:copying comp_5_conv_1_middle_w to gpu_0/comp_5_conv_1_middle_w
INFO:model_loader:copying comp_5_spatbn_1_middle_s to gpu_0/comp_5_spatbn_1_middle_s
INFO:model_loader:copying comp_5_spatbn_1_middle_b to gpu_0/comp_5_spatbn_1_middle_b
INFO:model_loader:copying comp_5_conv_1_w to gpu_0/comp_5_conv_1_w
INFO:model_loader:copying comp_5_spatbn_1_s to gpu_0/comp_5_spatbn_1_s
INFO:model_loader:copying comp_5_spatbn_1_b to gpu_0/comp_5_spatbn_1_b
INFO:model_loader:copying comp_5_conv_2_middle_w to gpu_0/comp_5_conv_2_middle_w
INFO:model_loader:copying comp_5_spatbn_2_middle_s to gpu_0/comp_5_spatbn_2_middle_s
INFO:model_loader:copying comp_5_spatbn_2_middle_b to gpu_0/comp_5_spatbn_2_middle_b
INFO:model_loader:copying comp_5_conv_2_w to gpu_0/comp_5_conv_2_w
INFO:model_loader:copying comp_5_spatbn_2_s to gpu_0/comp_5_spatbn_2_s
INFO:model_loader:copying comp_5_spatbn_2_b to gpu_0/comp_5_spatbn_2_b
INFO:model_loader:copying comp_6_conv_1_middle_w to gpu_0/comp_6_conv_1_middle_w
INFO:model_loader:copying comp_6_spatbn_1_middle_s to gpu_0/comp_6_spatbn_1_middle_s
INFO:model_loader:copying comp_6_spatbn_1_middle_b to gpu_0/comp_6_spatbn_1_middle_b
INFO:model_loader:copying comp_6_conv_1_w to gpu_0/comp_6_conv_1_w
INFO:model_loader:copying comp_6_spatbn_1_s to gpu_0/comp_6_spatbn_1_s
INFO:model_loader:copying comp_6_spatbn_1_b to gpu_0/comp_6_spatbn_1_b
INFO:model_loader:copying comp_6_conv_2_middle_w to gpu_0/comp_6_conv_2_middle_w
INFO:model_loader:copying comp_6_spatbn_2_middle_s to gpu_0/comp_6_spatbn_2_middle_s
INFO:model_loader:copying comp_6_spatbn_2_middle_b to gpu_0/comp_6_spatbn_2_middle_b
INFO:model_loader:copying comp_6_conv_2_w to gpu_0/comp_6_conv_2_w
INFO:model_loader:copying comp_6_spatbn_2_s to gpu_0/comp_6_spatbn_2_s
INFO:model_loader:copying comp_6_spatbn_2_b to gpu_0/comp_6_spatbn_2_b
INFO:model_loader:copying comp_7_conv_1_middle_w to gpu_0/comp_7_conv_1_middle_w
INFO:model_loader:copying comp_7_spatbn_1_middle_s to gpu_0/comp_7_spatbn_1_middle_s
INFO:model_loader:copying comp_7_spatbn_1_middle_b to gpu_0/comp_7_spatbn_1_middle_b
INFO:model_loader:copying comp_7_conv_1_w to gpu_0/comp_7_conv_1_w
INFO:model_loader:copying comp_7_spatbn_1_s to gpu_0/comp_7_spatbn_1_s
INFO:model_loader:copying comp_7_spatbn_1_b to gpu_0/comp_7_spatbn_1_b
INFO:model_loader:copying comp_7_conv_2_middle_w to gpu_0/comp_7_conv_2_middle_w
INFO:model_loader:copying comp_7_spatbn_2_middle_s to gpu_0/comp_7_spatbn_2_middle_s
INFO:model_loader:copying comp_7_spatbn_2_middle_b to gpu_0/comp_7_spatbn_2_middle_b
INFO:model_loader:copying comp_7_conv_2_w to gpu_0/comp_7_conv_2_w
INFO:model_loader:copying comp_7_spatbn_2_s to gpu_0/comp_7_spatbn_2_s
INFO:model_loader:copying comp_7_spatbn_2_b to gpu_0/comp_7_spatbn_2_b
INFO:model_loader:copying shortcut_projection_7_w to gpu_0/shortcut_projection_7_w
INFO:model_loader:copying shortcut_projection_7_spatbn_s to gpu_0/shortcut_projection_7_spatbn_s
INFO:model_loader:copying shortcut_projection_7_spatbn_b to gpu_0/shortcut_projection_7_spatbn_b
INFO:model_loader:copying comp_8_conv_1_middle_w to gpu_0/comp_8_conv_1_middle_w
INFO:model_loader:copying comp_8_spatbn_1_middle_s to gpu_0/comp_8_spatbn_1_middle_s
INFO:model_loader:copying comp_8_spatbn_1_middle_b to gpu_0/comp_8_spatbn_1_middle_b
INFO:model_loader:copying comp_8_conv_1_w to gpu_0/comp_8_conv_1_w
INFO:model_loader:copying comp_8_spatbn_1_s to gpu_0/comp_8_spatbn_1_s
INFO:model_loader:copying comp_8_spatbn_1_b to gpu_0/comp_8_spatbn_1_b
INFO:model_loader:copying comp_8_conv_2_middle_w to gpu_0/comp_8_conv_2_middle_w
INFO:model_loader:copying comp_8_spatbn_2_middle_s to gpu_0/comp_8_spatbn_2_middle_s
INFO:model_loader:copying comp_8_spatbn_2_middle_b to gpu_0/comp_8_spatbn_2_middle_b
INFO:model_loader:copying comp_8_conv_2_w to gpu_0/comp_8_conv_2_w
INFO:model_loader:copying comp_8_spatbn_2_s to gpu_0/comp_8_spatbn_2_s
INFO:model_loader:copying comp_8_spatbn_2_b to gpu_0/comp_8_spatbn_2_b
INFO:model_loader:copying comp_9_conv_1_middle_w to gpu_0/comp_9_conv_1_middle_w
INFO:model_loader:copying comp_9_spatbn_1_middle_s to gpu_0/comp_9_spatbn_1_middle_s
INFO:model_loader:copying comp_9_spatbn_1_middle_b to gpu_0/comp_9_spatbn_1_middle_b
INFO:model_loader:copying comp_9_conv_1_w to gpu_0/comp_9_conv_1_w
INFO:model_loader:copying comp_9_spatbn_1_s to gpu_0/comp_9_spatbn_1_s
INFO:model_loader:copying comp_9_spatbn_1_b to gpu_0/comp_9_spatbn_1_b
INFO:model_loader:copying comp_9_conv_2_middle_w to gpu_0/comp_9_conv_2_middle_w
INFO:model_loader:copying comp_9_spatbn_2_middle_s to gpu_0/comp_9_spatbn_2_middle_s
INFO:model_loader:copying comp_9_spatbn_2_middle_b to gpu_0/comp_9_spatbn_2_middle_b
INFO:model_loader:copying comp_9_conv_2_w to gpu_0/comp_9_conv_2_w
INFO:model_loader:copying comp_9_spatbn_2_s to gpu_0/comp_9_spatbn_2_s
INFO:model_loader:copying comp_9_spatbn_2_b to gpu_0/comp_9_spatbn_2_b
INFO:model_loader:copying comp_10_conv_1_middle_w to gpu_0/comp_10_conv_1_middle_w
INFO:model_loader:copying comp_10_spatbn_1_middle_s to gpu_0/comp_10_spatbn_1_middle_s
INFO:model_loader:copying comp_10_spatbn_1_middle_b to gpu_0/comp_10_spatbn_1_middle_b
INFO:model_loader:copying comp_10_conv_1_w to gpu_0/comp_10_conv_1_w
INFO:model_loader:copying comp_10_spatbn_1_s to gpu_0/comp_10_spatbn_1_s
INFO:model_loader:copying comp_10_spatbn_1_b to gpu_0/comp_10_spatbn_1_b
INFO:model_loader:copying comp_10_conv_2_middle_w to gpu_0/comp_10_conv_2_middle_w
INFO:model_loader:copying comp_10_spatbn_2_middle_s to gpu_0/comp_10_spatbn_2_middle_s
INFO:model_loader:copying comp_10_spatbn_2_middle_b to gpu_0/comp_10_spatbn_2_middle_b
INFO:model_loader:copying comp_10_conv_2_w to gpu_0/comp_10_conv_2_w
INFO:model_loader:copying comp_10_spatbn_2_s to gpu_0/comp_10_spatbn_2_s
INFO:model_loader:copying comp_10_spatbn_2_b to gpu_0/comp_10_spatbn_2_b
INFO:model_loader:copying comp_11_conv_1_middle_w to gpu_0/comp_11_conv_1_middle_w
INFO:model_loader:copying comp_11_spatbn_1_middle_s to gpu_0/comp_11_spatbn_1_middle_s
INFO:model_loader:copying comp_11_spatbn_1_middle_b to gpu_0/comp_11_spatbn_1_middle_b
INFO:model_loader:copying comp_11_conv_1_w to gpu_0/comp_11_conv_1_w
INFO:model_loader:copying comp_11_spatbn_1_s to gpu_0/comp_11_spatbn_1_s
INFO:model_loader:copying comp_11_spatbn_1_b to gpu_0/comp_11_spatbn_1_b
INFO:model_loader:copying comp_11_conv_2_middle_w to gpu_0/comp_11_conv_2_middle_w
INFO:model_loader:copying comp_11_spatbn_2_middle_s to gpu_0/comp_11_spatbn_2_middle_s
INFO:model_loader:copying comp_11_spatbn_2_middle_b to gpu_0/comp_11_spatbn_2_middle_b
INFO:model_loader:copying comp_11_conv_2_w to gpu_0/comp_11_conv_2_w
INFO:model_loader:copying comp_11_spatbn_2_s to gpu_0/comp_11_spatbn_2_s
INFO:model_loader:copying comp_11_spatbn_2_b to gpu_0/comp_11_spatbn_2_b
INFO:model_loader:copying comp_12_conv_1_middle_w to gpu_0/comp_12_conv_1_middle_w
INFO:model_loader:copying comp_12_spatbn_1_middle_s to gpu_0/comp_12_spatbn_1_middle_s
INFO:model_loader:copying comp_12_spatbn_1_middle_b to gpu_0/comp_12_spatbn_1_middle_b
INFO:model_loader:copying comp_12_conv_1_w to gpu_0/comp_12_conv_1_w
INFO:model_loader:copying comp_12_spatbn_1_s to gpu_0/comp_12_spatbn_1_s
INFO:model_loader:copying comp_12_spatbn_1_b to gpu_0/comp_12_spatbn_1_b
INFO:model_loader:copying comp_12_conv_2_middle_w to gpu_0/comp_12_conv_2_middle_w
INFO:model_loader:copying comp_12_spatbn_2_middle_s to gpu_0/comp_12_spatbn_2_middle_s
INFO:model_loader:copying comp_12_spatbn_2_middle_b to gpu_0/comp_12_spatbn_2_middle_b
INFO:model_loader:copying comp_12_conv_2_w to gpu_0/comp_12_conv_2_w
INFO:model_loader:copying comp_12_spatbn_2_s to gpu_0/comp_12_spatbn_2_s
INFO:model_loader:copying comp_12_spatbn_2_b to gpu_0/comp_12_spatbn_2_b
INFO:model_loader:copying comp_13_conv_1_middle_w to gpu_0/comp_13_conv_1_middle_w
INFO:model_loader:copying comp_13_spatbn_1_middle_s to gpu_0/comp_13_spatbn_1_middle_s
INFO:model_loader:copying comp_13_spatbn_1_middle_b to gpu_0/comp_13_spatbn_1_middle_b
INFO:model_loader:copying comp_13_conv_1_w to gpu_0/comp_13_conv_1_w
INFO:model_loader:copying comp_13_spatbn_1_s to gpu_0/comp_13_spatbn_1_s
INFO:model_loader:copying comp_13_spatbn_1_b to gpu_0/comp_13_spatbn_1_b
INFO:model_loader:copying comp_13_conv_2_middle_w to gpu_0/comp_13_conv_2_middle_w
INFO:model_loader:copying comp_13_spatbn_2_middle_s to gpu_0/comp_13_spatbn_2_middle_s
INFO:model_loader:copying comp_13_spatbn_2_middle_b to gpu_0/comp_13_spatbn_2_middle_b
INFO:model_loader:copying comp_13_conv_2_w to gpu_0/comp_13_conv_2_w
INFO:model_loader:copying comp_13_spatbn_2_s to gpu_0/comp_13_spatbn_2_s
INFO:model_loader:copying comp_13_spatbn_2_b to gpu_0/comp_13_spatbn_2_b
INFO:model_loader:copying shortcut_projection_13_w to gpu_0/shortcut_projection_13_w
INFO:model_loader:copying shortcut_projection_13_spatbn_s to gpu_0/shortcut_projection_13_spatbn_s
INFO:model_loader:copying shortcut_projection_13_spatbn_b to gpu_0/shortcut_projection_13_spatbn_b
INFO:model_loader:copying comp_14_conv_1_middle_w to gpu_0/comp_14_conv_1_middle_w
INFO:model_loader:copying comp_14_spatbn_1_middle_s to gpu_0/comp_14_spatbn_1_middle_s
INFO:model_loader:copying comp_14_spatbn_1_middle_b to gpu_0/comp_14_spatbn_1_middle_b
INFO:model_loader:copying comp_14_conv_1_w to gpu_0/comp_14_conv_1_w
INFO:model_loader:copying comp_14_spatbn_1_s to gpu_0/comp_14_spatbn_1_s
INFO:model_loader:copying comp_14_spatbn_1_b to gpu_0/comp_14_spatbn_1_b
INFO:model_loader:copying comp_14_conv_2_middle_w to gpu_0/comp_14_conv_2_middle_w
INFO:model_loader:copying comp_14_spatbn_2_middle_s to gpu_0/comp_14_spatbn_2_middle_s
INFO:model_loader:copying comp_14_spatbn_2_middle_b to gpu_0/comp_14_spatbn_2_middle_b
INFO:model_loader:copying comp_14_conv_2_w to gpu_0/comp_14_conv_2_w
INFO:model_loader:copying comp_14_spatbn_2_s to gpu_0/comp_14_spatbn_2_s
INFO:model_loader:copying comp_14_spatbn_2_b to gpu_0/comp_14_spatbn_2_b
INFO:model_loader:copying comp_15_conv_1_middle_w to gpu_0/comp_15_conv_1_middle_w
INFO:model_loader:copying comp_15_spatbn_1_middle_s to gpu_0/comp_15_spatbn_1_middle_s
INFO:model_loader:copying comp_15_spatbn_1_middle_b to gpu_0/comp_15_spatbn_1_middle_b
INFO:model_loader:copying comp_15_conv_1_w to gpu_0/comp_15_conv_1_w
INFO:model_loader:copying comp_15_spatbn_1_s to gpu_0/comp_15_spatbn_1_s
INFO:model_loader:copying comp_15_spatbn_1_b to gpu_0/comp_15_spatbn_1_b
INFO:model_loader:copying comp_15_conv_2_middle_w to gpu_0/comp_15_conv_2_middle_w
INFO:model_loader:copying comp_15_spatbn_2_middle_s to gpu_0/comp_15_spatbn_2_middle_s
INFO:model_loader:copying comp_15_spatbn_2_middle_b to gpu_0/comp_15_spatbn_2_middle_b
INFO:model_loader:copying comp_15_conv_2_w to gpu_0/comp_15_conv_2_w
INFO:model_loader:copying comp_15_spatbn_2_s to gpu_0/comp_15_spatbn_2_s
INFO:model_loader:copying comp_15_spatbn_2_b to gpu_0/comp_15_spatbn_2_b
INFO:model_loader:last_out_L101_w not found
INFO:model_loader:last_out_L101_b not found
INFO:model_loader:copying conv1_middle_spatbn_relu_rm to gpu_0/conv1_middle_spatbn_relu_rm
INFO:model_loader:copying conv1_middle_spatbn_relu_riv to gpu_0/conv1_middle_spatbn_relu_riv
INFO:model_loader:copying conv1_spatbn_relu_rm to gpu_0/conv1_spatbn_relu_rm
INFO:model_loader:copying conv1_spatbn_relu_riv to gpu_0/conv1_spatbn_relu_riv
INFO:model_loader:copying comp_0_spatbn_1_middle_rm to gpu_0/comp_0_spatbn_1_middle_rm
INFO:model_loader:copying comp_0_spatbn_1_middle_riv to gpu_0/comp_0_spatbn_1_middle_riv
INFO:model_loader:copying comp_0_spatbn_1_rm to gpu_0/comp_0_spatbn_1_rm
INFO:model_loader:copying comp_0_spatbn_1_riv to gpu_0/comp_0_spatbn_1_riv
INFO:model_loader:copying comp_0_spatbn_2_middle_rm to gpu_0/comp_0_spatbn_2_middle_rm
INFO:model_loader:copying comp_0_spatbn_2_middle_riv to gpu_0/comp_0_spatbn_2_middle_riv
INFO:model_loader:copying comp_0_spatbn_2_rm to gpu_0/comp_0_spatbn_2_rm
INFO:model_loader:copying comp_0_spatbn_2_riv to gpu_0/comp_0_spatbn_2_riv
INFO:model_loader:copying comp_1_spatbn_1_middle_rm to gpu_0/comp_1_spatbn_1_middle_rm
INFO:model_loader:copying comp_1_spatbn_1_middle_riv to gpu_0/comp_1_spatbn_1_middle_riv
INFO:model_loader:copying comp_1_spatbn_1_rm to gpu_0/comp_1_spatbn_1_rm
INFO:model_loader:copying comp_1_spatbn_1_riv to gpu_0/comp_1_spatbn_1_riv
INFO:model_loader:copying comp_1_spatbn_2_middle_rm to gpu_0/comp_1_spatbn_2_middle_rm
INFO:model_loader:copying comp_1_spatbn_2_middle_riv to gpu_0/comp_1_spatbn_2_middle_riv
INFO:model_loader:copying comp_1_spatbn_2_rm to gpu_0/comp_1_spatbn_2_rm
INFO:model_loader:copying comp_1_spatbn_2_riv to gpu_0/comp_1_spatbn_2_riv
INFO:model_loader:copying comp_2_spatbn_1_middle_rm to gpu_0/comp_2_spatbn_1_middle_rm
INFO:model_loader:copying comp_2_spatbn_1_middle_riv to gpu_0/comp_2_spatbn_1_middle_riv
INFO:model_loader:copying comp_2_spatbn_1_rm to gpu_0/comp_2_spatbn_1_rm
INFO:model_loader:copying comp_2_spatbn_1_riv to gpu_0/comp_2_spatbn_1_riv
INFO:model_loader:copying comp_2_spatbn_2_middle_rm to gpu_0/comp_2_spatbn_2_middle_rm
INFO:model_loader:copying comp_2_spatbn_2_middle_riv to gpu_0/comp_2_spatbn_2_middle_riv
INFO:model_loader:copying comp_2_spatbn_2_rm to gpu_0/comp_2_spatbn_2_rm
INFO:model_loader:copying comp_2_spatbn_2_riv to gpu_0/comp_2_spatbn_2_riv
INFO:model_loader:copying comp_3_spatbn_1_middle_rm to gpu_0/comp_3_spatbn_1_middle_rm
INFO:model_loader:copying comp_3_spatbn_1_middle_riv to gpu_0/comp_3_spatbn_1_middle_riv
INFO:model_loader:copying comp_3_spatbn_1_rm to gpu_0/comp_3_spatbn_1_rm
INFO:model_loader:copying comp_3_spatbn_1_riv to gpu_0/comp_3_spatbn_1_riv
INFO:model_loader:copying comp_3_spatbn_2_middle_rm to gpu_0/comp_3_spatbn_2_middle_rm
INFO:model_loader:copying comp_3_spatbn_2_middle_riv to gpu_0/comp_3_spatbn_2_middle_riv
INFO:model_loader:copying comp_3_spatbn_2_rm to gpu_0/comp_3_spatbn_2_rm
INFO:model_loader:copying comp_3_spatbn_2_riv to gpu_0/comp_3_spatbn_2_riv
INFO:model_loader:copying shortcut_projection_3_spatbn_rm to gpu_0/shortcut_projection_3_spatbn_rm
INFO:model_loader:copying shortcut_projection_3_spatbn_riv to gpu_0/shortcut_projection_3_spatbn_riv
INFO:model_loader:copying comp_4_spatbn_1_middle_rm to gpu_0/comp_4_spatbn_1_middle_rm
INFO:model_loader:copying comp_4_spatbn_1_middle_riv to gpu_0/comp_4_spatbn_1_middle_riv
INFO:model_loader:copying comp_4_spatbn_1_rm to gpu_0/comp_4_spatbn_1_rm
INFO:model_loader:copying comp_4_spatbn_1_riv to gpu_0/comp_4_spatbn_1_riv
INFO:model_loader:copying comp_4_spatbn_2_middle_rm to gpu_0/comp_4_spatbn_2_middle_rm
INFO:model_loader:copying comp_4_spatbn_2_middle_riv to gpu_0/comp_4_spatbn_2_middle_riv
INFO:model_loader:copying comp_4_spatbn_2_rm to gpu_0/comp_4_spatbn_2_rm
INFO:model_loader:copying comp_4_spatbn_2_riv to gpu_0/comp_4_spatbn_2_riv
INFO:model_loader:copying comp_5_spatbn_1_middle_rm to gpu_0/comp_5_spatbn_1_middle_rm
INFO:model_loader:copying comp_5_spatbn_1_middle_riv to gpu_0/comp_5_spatbn_1_middle_riv
INFO:model_loader:copying comp_5_spatbn_1_rm to gpu_0/comp_5_spatbn_1_rm
INFO:model_loader:copying comp_5_spatbn_1_riv to gpu_0/comp_5_spatbn_1_riv
INFO:model_loader:copying comp_5_spatbn_2_middle_rm to gpu_0/comp_5_spatbn_2_middle_rm
INFO:model_loader:copying comp_5_spatbn_2_middle_riv to gpu_0/comp_5_spatbn_2_middle_riv
INFO:model_loader:copying comp_5_spatbn_2_rm to gpu_0/comp_5_spatbn_2_rm
INFO:model_loader:copying comp_5_spatbn_2_riv to gpu_0/comp_5_spatbn_2_riv
INFO:model_loader:copying comp_6_spatbn_1_middle_rm to gpu_0/comp_6_spatbn_1_middle_rm
INFO:model_loader:copying comp_6_spatbn_1_middle_riv to gpu_0/comp_6_spatbn_1_middle_riv
INFO:model_loader:copying comp_6_spatbn_1_rm to gpu_0/comp_6_spatbn_1_rm
INFO:model_loader:copying comp_6_spatbn_1_riv to gpu_0/comp_6_spatbn_1_riv
INFO:model_loader:copying comp_6_spatbn_2_middle_rm to gpu_0/comp_6_spatbn_2_middle_rm
INFO:model_loader:copying comp_6_spatbn_2_middle_riv to gpu_0/comp_6_spatbn_2_middle_riv
INFO:model_loader:copying comp_6_spatbn_2_rm to gpu_0/comp_6_spatbn_2_rm
INFO:model_loader:copying comp_6_spatbn_2_riv to gpu_0/comp_6_spatbn_2_riv
INFO:model_loader:copying comp_7_spatbn_1_middle_rm to gpu_0/comp_7_spatbn_1_middle_rm
INFO:model_loader:copying comp_7_spatbn_1_middle_riv to gpu_0/comp_7_spatbn_1_middle_riv
INFO:model_loader:copying comp_7_spatbn_1_rm to gpu_0/comp_7_spatbn_1_rm
INFO:model_loader:copying comp_7_spatbn_1_riv to gpu_0/comp_7_spatbn_1_riv
INFO:model_loader:copying comp_7_spatbn_2_middle_rm to gpu_0/comp_7_spatbn_2_middle_rm
INFO:model_loader:copying comp_7_spatbn_2_middle_riv to gpu_0/comp_7_spatbn_2_middle_riv
INFO:model_loader:copying comp_7_spatbn_2_rm to gpu_0/comp_7_spatbn_2_rm
INFO:model_loader:copying comp_7_spatbn_2_riv to gpu_0/comp_7_spatbn_2_riv
INFO:model_loader:copying shortcut_projection_7_spatbn_rm to gpu_0/shortcut_projection_7_spatbn_rm
INFO:model_loader:copying shortcut_projection_7_spatbn_riv to gpu_0/shortcut_projection_7_spatbn_riv
INFO:model_loader:copying comp_8_spatbn_1_middle_rm to gpu_0/comp_8_spatbn_1_middle_rm
INFO:model_loader:copying comp_8_spatbn_1_middle_riv to gpu_0/comp_8_spatbn_1_middle_riv
INFO:model_loader:copying comp_8_spatbn_1_rm to gpu_0/comp_8_spatbn_1_rm
INFO:model_loader:copying comp_8_spatbn_1_riv to gpu_0/comp_8_spatbn_1_riv
INFO:model_loader:copying comp_8_spatbn_2_middle_rm to gpu_0/comp_8_spatbn_2_middle_rm
INFO:model_loader:copying comp_8_spatbn_2_middle_riv to gpu_0/comp_8_spatbn_2_middle_riv
INFO:model_loader:copying comp_8_spatbn_2_rm to gpu_0/comp_8_spatbn_2_rm
INFO:model_loader:copying comp_8_spatbn_2_riv to gpu_0/comp_8_spatbn_2_riv
INFO:model_loader:copying comp_9_spatbn_1_middle_rm to gpu_0/comp_9_spatbn_1_middle_rm
INFO:model_loader:copying comp_9_spatbn_1_middle_riv to gpu_0/comp_9_spatbn_1_middle_riv
INFO:model_loader:copying comp_9_spatbn_1_rm to gpu_0/comp_9_spatbn_1_rm
INFO:model_loader:copying comp_9_spatbn_1_riv to gpu_0/comp_9_spatbn_1_riv
INFO:model_loader:copying comp_9_spatbn_2_middle_rm to gpu_0/comp_9_spatbn_2_middle_rm
INFO:model_loader:copying comp_9_spatbn_2_middle_riv to gpu_0/comp_9_spatbn_2_middle_riv
INFO:model_loader:copying comp_9_spatbn_2_rm to gpu_0/comp_9_spatbn_2_rm
INFO:model_loader:copying comp_9_spatbn_2_riv to gpu_0/comp_9_spatbn_2_riv
INFO:model_loader:copying comp_10_spatbn_1_middle_rm to gpu_0/comp_10_spatbn_1_middle_rm
INFO:model_loader:copying comp_10_spatbn_1_middle_riv to gpu_0/comp_10_spatbn_1_middle_riv
INFO:model_loader:copying comp_10_spatbn_1_rm to gpu_0/comp_10_spatbn_1_rm
INFO:model_loader:copying comp_10_spatbn_1_riv to gpu_0/comp_10_spatbn_1_riv
INFO:model_loader:copying comp_10_spatbn_2_middle_rm to gpu_0/comp_10_spatbn_2_middle_rm
INFO:model_loader:copying comp_10_spatbn_2_middle_riv to gpu_0/comp_10_spatbn_2_middle_riv
INFO:model_loader:copying comp_10_spatbn_2_rm to gpu_0/comp_10_spatbn_2_rm
INFO:model_loader:copying comp_10_spatbn_2_riv to gpu_0/comp_10_spatbn_2_riv
INFO:model_loader:copying comp_11_spatbn_1_middle_rm to gpu_0/comp_11_spatbn_1_middle_rm
INFO:model_loader:copying comp_11_spatbn_1_middle_riv to gpu_0/comp_11_spatbn_1_middle_riv
INFO:model_loader:copying comp_11_spatbn_1_rm to gpu_0/comp_11_spatbn_1_rm
INFO:model_loader:copying comp_11_spatbn_1_riv to gpu_0/comp_11_spatbn_1_riv
INFO:model_loader:copying comp_11_spatbn_2_middle_rm to gpu_0/comp_11_spatbn_2_middle_rm
INFO:model_loader:copying comp_11_spatbn_2_middle_riv to gpu_0/comp_11_spatbn_2_middle_riv
INFO:model_loader:copying comp_11_spatbn_2_rm to gpu_0/comp_11_spatbn_2_rm
INFO:model_loader:copying comp_11_spatbn_2_riv to gpu_0/comp_11_spatbn_2_riv
INFO:model_loader:copying comp_12_spatbn_1_middle_rm to gpu_0/comp_12_spatbn_1_middle_rm
INFO:model_loader:copying comp_12_spatbn_1_middle_riv to gpu_0/comp_12_spatbn_1_middle_riv
INFO:model_loader:copying comp_12_spatbn_1_rm to gpu_0/comp_12_spatbn_1_rm
INFO:model_loader:copying comp_12_spatbn_1_riv to gpu_0/comp_12_spatbn_1_riv
INFO:model_loader:copying comp_12_spatbn_2_middle_rm to gpu_0/comp_12_spatbn_2_middle_rm
INFO:model_loader:copying comp_12_spatbn_2_middle_riv to gpu_0/comp_12_spatbn_2_middle_riv
INFO:model_loader:copying comp_12_spatbn_2_rm to gpu_0/comp_12_spatbn_2_rm
INFO:model_loader:copying comp_12_spatbn_2_riv to gpu_0/comp_12_spatbn_2_riv
INFO:model_loader:copying comp_13_spatbn_1_middle_rm to gpu_0/comp_13_spatbn_1_middle_rm
INFO:model_loader:copying comp_13_spatbn_1_middle_riv to gpu_0/comp_13_spatbn_1_middle_riv
INFO:model_loader:copying comp_13_spatbn_1_rm to gpu_0/comp_13_spatbn_1_rm
INFO:model_loader:copying comp_13_spatbn_1_riv to gpu_0/comp_13_spatbn_1_riv
INFO:model_loader:copying comp_13_spatbn_2_middle_rm to gpu_0/comp_13_spatbn_2_middle_rm
INFO:model_loader:copying comp_13_spatbn_2_middle_riv to gpu_0/comp_13_spatbn_2_middle_riv
INFO:model_loader:copying comp_13_spatbn_2_rm to gpu_0/comp_13_spatbn_2_rm
INFO:model_loader:copying comp_13_spatbn_2_riv to gpu_0/comp_13_spatbn_2_riv
INFO:model_loader:copying shortcut_projection_13_spatbn_rm to gpu_0/shortcut_projection_13_spatbn_rm
INFO:model_loader:copying shortcut_projection_13_spatbn_riv to gpu_0/shortcut_projection_13_spatbn_riv
INFO:model_loader:copying comp_14_spatbn_1_middle_rm to gpu_0/comp_14_spatbn_1_middle_rm
INFO:model_loader:copying comp_14_spatbn_1_middle_riv to gpu_0/comp_14_spatbn_1_middle_riv
INFO:model_loader:copying comp_14_spatbn_1_rm to gpu_0/comp_14_spatbn_1_rm
INFO:model_loader:copying comp_14_spatbn_1_riv to gpu_0/comp_14_spatbn_1_riv
INFO:model_loader:copying comp_14_spatbn_2_middle_rm to gpu_0/comp_14_spatbn_2_middle_rm
INFO:model_loader:copying comp_14_spatbn_2_middle_riv to gpu_0/comp_14_spatbn_2_middle_riv
INFO:model_loader:copying comp_14_spatbn_2_rm to gpu_0/comp_14_spatbn_2_rm
INFO:model_loader:copying comp_14_spatbn_2_riv to gpu_0/comp_14_spatbn_2_riv
INFO:model_loader:copying comp_15_spatbn_1_middle_rm to gpu_0/comp_15_spatbn_1_middle_rm
INFO:model_loader:copying comp_15_spatbn_1_middle_riv to gpu_0/comp_15_spatbn_1_middle_riv
INFO:model_loader:copying comp_15_spatbn_1_rm to gpu_0/comp_15_spatbn_1_rm
INFO:model_loader:copying comp_15_spatbn_1_riv to gpu_0/comp_15_spatbn_1_riv
INFO:model_loader:copying comp_15_spatbn_2_middle_rm to gpu_0/comp_15_spatbn_2_middle_rm
INFO:model_loader:copying comp_15_spatbn_2_middle_riv to gpu_0/comp_15_spatbn_2_middle_riv
INFO:model_loader:copying comp_15_spatbn_2_rm to gpu_0/comp_15_spatbn_2_rm
INFO:model_loader:copying comp_15_spatbn_2_riv to gpu_0/comp_15_spatbn_2_riv
INFO:data_parallel_model:Creating checkpoint synchronization net
INFO:data_parallel_model:Run checkpoint net
INFO:train_net:Starting epoch 0/8
*** Aborted at 1528652698 (unix time) try "date -d @1528652698" if you are using GNU date ***
PC: @ 0x7fb7def1add6 __memcpy_ssse3_back
*** SIGSEGV (@0x7fb7f1fcd4d0) received by PID 260 (TID 0x7db6517f2700) from PID 18446744073474462928; stack trace: ***
@ 0x7fb7df8aa6d0 (unknown)
@ 0x7fb7def1add6 __memcpy_ssse3_back
@ 0x7fb773a3638e (unknown)
@ 0x7fb773a363fd (unknown)
@ 0x7fb7d97b725a caffe2::db::LMDBCursor::key()
@ 0x7fb7c1a76898 caffe2::VideoInputOp<>::Prefetch()
@ 0x7fb7c1a666e6 caffe2::PrefetchOperator<>::PrefetchWorker()
@ 0x7fb773a2c070 (unknown)
@ 0x7fb7df8a2e25 start_thread
@ 0x7fb7deec3bad __clone
Segmentation fault (core dumped)

Method VideoInput is not a registered operator

I used the pre-built binaries to install Caffe2 (conda install -c caffe2 caffe2-cuda8.0-cudnn7). During feature extraction, I get the following error. Why is VideoInput not registered?

Traceback (most recent call last):
File "tools/extract_features.py", line 292, in
main()
File "tools/extract_features.py", line 287, in main
ExtractFeatures(args)
File "tools/extract_features.py", line 118, in ExtractFeatures
devices=gpus,
File "/home/dujiajun/anaconda2/envs/caffe2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 32, in Parallelize_GPU
Parallelize(*args, **kwargs)
File "/home/dujiajun/anaconda2/envs/caffe2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 208, in Parallelize
input_builder_fun(model_helper_obj)
File "tools/extract_features.py", line 94, in input_fn
use_local_file=args.use_local_file,
File "/home/dujiajun/R2Plus1D/lib/utils/model_helper.py", line 120, in AddVideoInput
data, label, video_id = model.net.VideoInput(
File "/home/dujiajun/anaconda2/envs/caffe2/lib/python2.7/site-packages/caffe2/python/core.py", line 2067, in getattr
",".join(workspace.C.nearby_opnames(op_type)) + ']'
AttributeError: Method VideoInput is not a registered operator. Did you mean: []
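
A quick way to confirm this (my own sketch, not part of the repo) is to ask the installed Caffe2 which operators it was compiled with; if VideoInput is missing from the list, the binary was built without the video ops and a source build with video/ffmpeg support would be needed:

from caffe2.python import workspace
# List every operator compiled into this Caffe2 binary and check for VideoInput.
print('VideoInput' in workspace.RegisteredOperators())  # False => video ops not compiled in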

Error executing sh scripts/extract_feature_hmdb51.sh, Cannot open file: r2plus1d_8.mdl

INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 576
INFO:video_model:Number of middle filters: 921
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:video_model:Number of middle filters: 1152
INFO:data_parallel_model:Parameter update function not defined --> only forward
INFO:model_helper:Loading path: r2plus1d_8.mdl
Traceback (most recent call last):
File "tools/extract_features.py", line 328, in
main()
File "tools/extract_features.py", line 323, in main
ExtractFeatures(args)
File "tools/extract_features.py", line 142, in ExtractFeatures
model_helper.LoadModel(args.load_model_path, args.db_type)
File "/home/baihao/R2Plus1D-master/tools/utils/model_helper.py", line 90, in LoadModel
meta_net_def = pred_exp.load_from_db(path, dbtype)
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/predictor/predictor_exporter.py", line 211, in load_from_db
assert workspace.RunOperatorOnce(create_db), (
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/workspace.py", line 165, in RunOperatorOnce
return C.run_operator_once(StringifyProto(operator))
RuntimeError: [enforce fail at db.cc:140] file_. Cannot open file: r2plus1d_8.mdl Error from operator:
output: "!!PREDICTOR_DBREADER" name: "" type: "CreateDB" arg { name: "db_type" s: "minidb" } arg { name: "db" s: "r2plus1d_8.mdl" }

extract_features.py produces features that are all the same for all videos

I am using my own dataset and built the database as described. However, when I tried to extract features using extract_features.py, I got confusing results.
I printed the features and found that the softmax layer gives the same value for every class,
like
(Pdb) activations
array([[0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111],
[0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111],
[0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111],
[0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111,
0.01111111, 0.01111111, 0.01111111, 0.01111111, 0.01111111]],
dtype=float32)
Since the number of labels I use is 90, the output looks like an untouched initialization (every entry is exactly 1/90). I am quite confused. Could you give me some advice?
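
For what it's worth, a uniform softmax of exactly 1/90 is what all-zero logits produce, so one plausible explanation is that the final classification layer was never loaded or trained (the loader prints "last_out_... not found" when the pretrained head does not match num_labels). A small sanity check, as a sketch:

import numpy as np
logits = np.zeros(90)                           # an unloaded/untrained last_out layer
softmax = np.exp(logits) / np.exp(logits).sum()
print(softmax[0], 1.0 / 90)                     # both print 0.0111111...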

SVM with R(2+1)D Features

Hi @dutran,

I'm trying to reproduce the results reported in "A Closer Look at Spatiotemporal Convolutions for Action Recognition" and a question has come up: why don't you use an SVM with features extracted from the R(2+1)D net?

In the previous papers, you use (Features from net + SVM):
C3D - VGG Architecture - Learning Spatiotemporal Features with 3D Convolutional Networks
Res3D - ResNet Architecture - ConvNet Architecture Search for Spatiotemporal Feature Learning

I'm trying to apply these architectures to detect pornography in videos and would like to know which works better: features extracted from the CNN fed to an SVM, or the CNN with its softmax classifier?

Machine guidelines

Is there any specific hardware configuration you would suggest for training the R(2+1)D model?

The test accuracy on the same test dataset differs when using the same model

Hi,
@dutran , thank you for your great work.
I finetuned the r2plus1d model on my own dataset using train_net.py and got a best test accuracy of 0.72 with the corresponding model r2plus1d_3.mdl. However, when I use the same test dataset and r2plus1d_3.mdl to run test_net.py, the test accuracy is low, about 0.2. I also tried extracting features with extract_features.py and then computing the test accuracy with dense_prediction_aggregation.py; that accuracy is low too, at most 0.12.
This makes me confused. Why is the test accuracy so different?
I know the value of decode_type may influence the test accuracy, but I wonder if there are any other reasons that could affect it. Could you give me some advice? Thank you.

Questions about adding new layers

Dear author,
Thanks for your great work. I ran into errors when I tried to add new layers to R2Plus1D; could you give me some advice? Since the temporal dimension of the final conv output is 1/8 of the original, I want to add a ConvTranspose layer to restore it. My code and error message are below. I passed all of the required arguments but still hit the same error, so I have no idea what I am missing. Any help would be highly appreciated.

My code:
upconv = brew.conv_transpose(
model,
blob_in = 'final_avg',
blob_out = 'upconv',
dim_in = 512,
dim_out = 2048,
kernels = [4,1,1],
weight_init = None,
bias_init = None,
use_cudnn = True,
order = 'NCHW',
cudnn_exhaustive_search = False,
ws_nbytes_limit = None,
strides = [2,1,1],
pads = [1,0,0]
)

Error message:
Traceback (most recent call last):
File "/home/xiongcx/R2Plus1D-master/tools/train_net.py", line 528, in
main()
File "/home/xiongcx/R2Plus1D-master/tools/train_net.py", line 523, in main
Train(args)
File "/home/xiongcx/R2Plus1D-master/tools/train_net.py", line 303, in Train
net_type=('prof_dag' if args.profiling == 1 else 'dag'),
File "/home/xiongcx/files/duhaijun/pytorch/build/caffe2/python/data_parallel_model.py", line 32, in Parallelize_GPU
Parallelize(*args, **kwargs)
File "/home/xiongcx/files/duhaijun/pytorch/build/caffe2/python/data_parallel_model.py", line 209, in Parallelize
losses = forward_pass_builder_fun(model_helper_obj, loss_scale)
File "/home/xiongcx/R2Plus1D-master/tools/train_net.py", line 240, in create_model_ops
pred_layer_name=args.pred_layer_name,
File "../lib/models/model_builder.py", line 134, in build_model
is_test=is_test,
File "../lib/models/r3d_model.py", line 101, in create_model
is_decomposed=(model_name == 'r2plus1d'),
File "../lib/models/r3d_model.py", line 228, in create_r3d
CDC1 = brew.conv_transpose(
File "/home/xiongcx/files/duhaijun/pytorch/build/caffe2/python/brew.py", line 107, in scope_wrapper
return func(*args, **new_kwargs)
TypeError: conv_transpose() takes at least 6 arguments (11 given)
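
For reference, a minimal sketch of a call that at least satisfies the helper's positional signature, assuming it is conv_transpose(model, blob_in, blob_out, dim_in, dim_out, kernel, ...) as the error message suggests; whether the underlying ConvTranspose op supports 3D kernels in this Caffe2 build is a separate question:

from caffe2.python import brew
# `model` is the existing ModelHelper being built; kernel/strides/pads values
# are only illustrative and assume 3D support in ConvTranspose.
upconv = brew.conv_transpose(
    model,
    'final_avg',        # blob_in
    'upconv',           # blob_out
    512,                # dim_in
    2048,               # dim_out
    kernel=[4, 1, 1],
    strides=[2, 1, 1],
    pads=[1, 0, 0],
)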

Process aborting with "Insufficient data to determine video format" error when fine-tuning

Hi, I am trying to fine-tune the pre-trained (Kinetics) R2Plus1D model on my dataset. I created the train and test LMDBs of my dataset like this:

#Creating Training LMDB
python /home/rahul/R2Plus1D/data/create_video_db.py --list_file=/home/rahul/Dataset/train_test_lists/train_list.csv --output_file=/home/rahul/Dataset/train_test_lists/LMDB_Training --use_list=1

#Creating testing LMDB
python /home/rahul/R2Plus1D/data/create_video_db.py --list_file=/home/rahul/Dataset/train_test_lists/test_list.csv --output_file=/home/rahul/Dataset/train_test_lists/LMDB_Testing --use_list=1

The format of the csv files is:
org_video (the path of the video), label (the integer label of the video)
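
For illustration only (hypothetical file names and labels; check create_video_db.py in your checkout for whether a header row is expected), a list file in that format would look like:

org_video,label
/home/rahul/Dataset/videos/clip_0001.mp4,0
/home/rahul/Dataset/videos/clip_0002.mp4,3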

And then I run this to train the model:

python /home/rahul/R2Plus1D/tools/train_net.py \
--train_data=/home/rahul/Dataset/train_test_lists/LMDB_Training \
--test_data=/home/rahul/Dataset/train_test_lists/LMDB_Testing \
--model_name=r2plus1d --model_depth=18 \
--clip_length_rgb=16 --batch_size=4 \
--pretrained_model=/home/rahul/R2Plus1D/pre-trained-models/r2.5d_d18_l16.pkl \
--db_type='pickle' --is_checkpoint=0 \
--gpus=0,1 --base_learning_rate=0.0002 \
--epoch_size=40000 --num_epochs=8 --step_epoch=2 \
--weight_decay=0.005 --num_labels=14

But the process is getting aborted with these errors:

E0612 23:00:22.106964  9409 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:22.107067  9411 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:22.106992  9412 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:22.106918  9407 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:22.107008  9413 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:22.107035  9414 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:22.106915  9408 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:23.469903  9409 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:23.469926  9411 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:23.469921  9413 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:23.469907  9410 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:23.469923  9412 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:23.469954  9414 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:23.469923  9407 video_decoder.cc:75] Insufficient data to determine video format
E0612 23:00:23.469995  9408 video_decoder.cc:75] Insufficient data to determine video format

..........
..........
.........
.........
.........

 Encountered CUDA error: device-side assert triggered Error from operator:
input: "gpu_0/comp_4_spatbn_1" input: "gpu_0/comp_4_conv_2_middle_w" input: "gpu_0/__m1_shared" output: "gpu_0/comp_4_conv_2_middle_w_grad" output: "gpu_0/__m2_shared" name: "" type: "ConvGradient" arg { name: "no_bias" i: 1 } arg { name: "kernels" ints: 1 ints: 3 ints: 3 } arg { name: "ws_nbytes_limit" i: 67108864 } arg { name: "exhaustive_search" i: 1 } arg { name: "strides" ints: 1 ints: 1 ints: 1 } arg { name: "pads" ints: 0 ints: 1 ints: 1 ints: 0 ints: 1 ints: 1 } arg { name: "order" s: "NCHW" } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "CUDNN" is_gradient_op: true
E0612 23:00:23.986739  9419 net_dag.cc:195] Secondary exception from operator chain starting at '' (type 'SoftmaxWithLoss'): caffe2::EnforceNotMet: [enforce fail at context_gpu.h:156] . Encountered CUDA error: device-side assert triggered Error from operator:
input: "gpu_1/last_out_L14" input: "gpu_1/label" output: "gpu_1/softmax" output: "gpu_1/loss" name: "" type: "SoftmaxWithLoss" device_option { device_type: 1 cuda_gpu_id: 1 }
F0612 23:00:23.990535  9416 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggered
*** Check failure stack trace: ***
F0612 23:00:23.990537  9418 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggeredF0612 23:00:23.990561  9420 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggeredF0612 23:00:23.990689  9422 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggeredF0612 23:00:23.990710  9421 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggeredF0612 23:00:23.990717  9417 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggeredF0612 23:00:23.991060  9419 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggered
*** Check failure stack trace: ***
F0612 23:00:23.990537  9418 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggeredF0612 23:00:23.990561  9420 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggeredF0612 23:00:23.990689  9422 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggeredF0612 23:00:23.990710  9421 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggeredF0612 23:00:23.990717  9417 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggeredF0612 23:00:23.991060  9419 context_gpu.h:107] Check failed: error == cudaSuccess device-side assert triggered
MyScripts/trainWithPretrainedModel.sh: line 10:  9358 Aborted 
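
One thing worth ruling out (my own sketch, not a diagnosis): the SoftmaxWithLoss device-side assert is commonly triggered by labels outside [0, num_labels-1], for example 1-based labels in the list file while --num_labels=14 expects 0..13:

import csv
# The same train list used to build the LMDB above.
with open('/home/rahul/Dataset/train_test_lists/train_list.csv') as f:
    labels = [int(row['label']) for row in csv.DictReader(f)]
print(min(labels), max(labels))   # should stay within 0 .. 13 for --num_labels=14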

Here is what stdout dump that I passed to a file:

Ignoring @/caffe2/caffe2/contrib/nccl:nccl_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops_gpu as it is not a valid file

Can you please help me out of this?

Thanks,
Rahul Bhojwani

Finetune using OpticalFlow Models

Hi @dutran,

First, thank you for making public this wonderful work.

I've already been able to use this code to finetune the R(2+1)D model on my dataset and run dense prediction successfully.

Now I would like to use optical flow to compute the accuracy of the R(2+1)D two-stream model, and I have some questions:
1 - Can you provide the R(2+1)D-Flow model trained on Kinetics? I only found the one trained on Sports1M+Kinetics.
2 - Can you provide an example or tutorial with more details on how to finetune the optical flow model? I notice that the optical flow models need a lot of parameters for training: frame_gap_of, sampling_rate_of, clip_length_of, flow_data_type, do_flow_aggregation, flow_alg_type.

Best regards,
Murilo Varges

How to finetune the optical model on my own dataset?

Hi @dutran,

Thanks a lot for your answers to my last several questions. Now I am trying to finetune the pre-trained optical flow model on my own dataset, and I have another two questions:

1. How do I generate optical flow from raw RGB videos? (A generic sketch follows below.)
2. How do I finetune the optical flow model on the optical flow generated from my dataset?

Could you please give me some advice or release the code for it?
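
For the first question, a generic OpenCV sketch (not the project's own pipeline) that computes dense optical flow between two consecutive frames; Farneback is used here only because it ships with core OpenCV:

import cv2
prev = cv2.cvtColor(cv2.imread('frame_0001.jpg'), cv2.COLOR_BGR2GRAY)  # hypothetical frame files
curr = cv2.cvtColor(cv2.imread('frame_0002.jpg'), cv2.COLOR_BGR2GRAY)
# Returns an H x W x 2 array of (dx, dy) displacements per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)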

Reg : Testing Model

I am trying to just use the pretrained models for testing. How do I generate clips for the test data: is that done with ffmpeg, or are the clips generated inside test_net.py?

training on 2 GPUs

I am trying to train your model on 2 GPUs as shown in your script:
scripts/train_r2plus1d_kinetics.sh
Is it right that the epoch size is 1M? Will I be able to reproduce your result in Table 2 of the paper using that script? Would I just need to change the model type?

Do the examples in the tutorials run only with a GPU?

I installed OpenCV and Caffe2 following this guide on Ubuntu 16. Then I pasted the example code from here to do feature extraction.
code:
python tools/extract_features.py \
--test_data=/home/maliqi/Desktop/c3d/R2Plus1D/my_lmdb_data \
--model_name=r2plus1d --model_depth=18 --clip_length_rgb=8 \
--gpus=0,1 \
--batch_size=2 \
--load_model_path=/home/maliqi/Desktop/c3d/R2Plus1D/r3d_d18_l8.pkl \
--output_path=my_features.pkl \
--features=softmax,final_avg,video_id \
--sanity_check=0 --get_video_id=1 --use_local_file=1 --num_labels=1

But I got this error:
RuntimeError: [enforce fail at operator.cc:209] op. Cannot create operator of type 'MSRAFill' on the device 'CUDA'. Verify that implementation for the corresponding device exist. It might also happen if the binary is not linked with the operator implementation code. If Python frontend is used it might happen if dyndep.InitOpsLibrary call is missing. Operator def: output: "gpu_0/conv1_middle_w" name: "" type: "MSRAFill" arg { name: "shape" ints: 45 ints: 3 ints: 1 ints: 7 ints: 7 } device_option { device_type: 1 cuda_gpu_id: 0 }

I only have a CPU, so I want to ask: can this only run with a GPU?
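
A quick check (my own sketch) of whether the installed Caffe2 was built with GPU support at all; with --gpus=0,1 the script places all operators on CUDA devices, which cannot work on a CPU-only machine:

from caffe2.python import workspace
# has_gpu_support is False for CPU-only builds; NumCudaDevices() is 0 when no GPU is visible.
print(workspace.has_gpu_support, workspace.NumCudaDevices())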

Training Kinetics from scratch

I want to train on Kinetics from scratch, but the videos I downloaded have irregular names rather than YouTube-ID names. Is this normal? All the videos ended up in the /tmp/kinetics directory instead of the "kinetics-400_train" directory I pointed to. Has anyone else met this?

It's hard to finetune the R2D-18

I downloaded some models from Tutorial 5 and finetuned them on UCF-101. Most of them work very well, but I found the pre-trained R2D-18 model hard to finetune and to improve accuracy with. Is it possible that the link is wrong?
