i3d_finetune's Introduction

Introduction

We release the complete code (both training and testing phases) for finetuning the I3D model on UCF101.
I3D paper: Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. Please also refer to kinetics-i3d for models and details about I3D.

Prerequisites

Software

  • Ubuntu 16.04.3 LTS
  • Python 2.7
  • CUDA8
  • CuDNN v6
  • Tensorflow 1.4.1
  • Sonnet

Hardware

GTX 1080 Ti

How to run

1. Clone this repo

git clone https://github.com/USTC-Video-Understanding/I3D_Finetune

2. Download kinetics pretrained I3D models

In order to finetune the I3D network on UCF101, you have to download the Kinetics-pretrained I3D models provided by DeepMind in the kinetics-i3d repo. Specifically, clone kinetics-i3d and copy its data/checkpoints folder into the data subdir of our I3D_Finetune repo:

git clone https://github.com/deepmind/kinetics-i3d
cp -r kinetics-i3d/data/checkpoints I3D_Finetune/data

3. Create list files

We use list files in the data/ucf101/ subdir so the code can find the RGB images and flow data saved on disk. You have to adapt these list files so that they contain the right paths to your data. Specifically, for RGB data, update data/ucf101/rgb.txt. Each line in this file should be in the format:

dir_name_of_imgs_of_a_video /path/to/img_dir num_imgs label

For example, suppose your RGB data of UCF101 is saved in '/data/user/ucf101/rgb' and this folder contains 13320 subdirs, each holding the images of one video. If the subdir v_BalanceBeam_g14_c02 contains 96 images and the ground-truth label of this video is 4, then the line for this subdir is:

v_BalanceBeam_g14_c02 /data/user/ucf101/rgb/v_BalanceBeam_g14_c02 96 4
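If you need to generate this file automatically, a minimal sketch is shown below. It assumes your frames live under <rgb_root>/<video_dir>/*.jpg, that directory names follow the v_ClassName_gXX_cYY pattern, and that you already have a class-name-to-index mapping (e.g. built from the official UCF101 classInd.txt); the helper is illustrative and not part of this repo.

# Sketch only: build an rgb.txt-style list file from extracted frames.
# Assumes <rgb_root>/<video_dir>/*.jpg and a class_index dict such as
# {'BalanceBeam': 4, ...}; adapt names and paths to your own setup.
import os
import glob

def write_rgb_list(rgb_root, class_index, out_path):
    with open(out_path, 'w') as f:
        for video_dir in sorted(os.listdir(rgb_root)):
            video_path = os.path.join(rgb_root, video_dir)
            if not os.path.isdir(video_path):
                continue
            num_imgs = len(glob.glob(os.path.join(video_path, '*.jpg')))
            # e.g. v_BalanceBeam_g14_c02 -> class name 'BalanceBeam'
            class_name = video_dir.split('_')[1]
            label = class_index[class_name]
            f.write('{} {} {} {}\n'.format(video_dir, video_path, num_imgs, label))

# Example call: write_rgb_list('/data/user/ucf101/rgb', class_index, 'data/ucf101/rgb.txt')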

Similarly, update data/ucf101/flow.txt for the flow data. Note: we use one file to cover both the x and y parts of the flow data, so each line uses {:s} as a placeholder for x or y in the data path. For example, if your flow data are placed like this:

|---tvl1_flow
|   |---x
|   |---y

then you can write each line in flow.txt like this:

v_Archery_g01_c06 /data4/zhouhao/dataset/ucf101/tvl1_flow/{:s}/v_Archery_g01_c06 107 2

i.e., use {:s} in place of x or y in the path. If you are confused, please refer to our code for the data-loading details.
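To make the placeholder explicit, here is a tiny illustration in plain Python, reusing the example path above; the actual substitution happens inside the data-loading code:

flow_template = '/data4/zhouhao/dataset/ucf101/tvl1_flow/{:s}/v_Archery_g01_c06'
x_dir = flow_template.format('x')  # .../tvl1_flow/x/v_Archery_g01_c06
y_dir = flow_template.format('y')  # .../tvl1_flow/y/v_Archery_g01_c06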

4. Train on UCF101 on RGB data and flow data

# Finetune on split1 of RGB data of UCF101
CUDA_VISIBLE_DEVICES=0 python finetune.py ucf101 rgb 1
# Finetune on split2 of flow data of UCF101
CUDA_VISIBLE_DEVICES=0 python finetune.py ucf101 flow 2 

We share our trained models on UCF101 (RGB & flow) on Google Drive and Baidu Disk (password: ddar). You can download these models and put them in the model folder of this repo. This way you can skip the training commands above and directly run the test in the next step.

5. Test on UCF101 on RGB data and flow data

After you have trained the model, you can run the test procedure. First, update _DATA_ROOT and _CHECKPOINT_PATHS in test.py, setting them to the location of your dataset and to the trained model generated in the previous step, respectively.
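For illustration only, the two settings might end up looking roughly like the sketch below; the exact keys and structure are defined in test.py, and all paths here are placeholders you must replace with your own:

# Hypothetical values -- check test.py for the exact dictionary structure it expects.
_DATA_ROOT = {
    'ucf101': {
        'rgb': '/data/user/ucf101/rgb',
        'flow': '/data/user/ucf101/tvl1_flow/{:s}',
    }
}
_CHECKPOINT_PATHS = {
    'rgb': './model/ucf101_rgb_split1/model.ckpt',
    'flow': './model/ucf101_flow_split1/model.ckpt',
}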
Then you can run testing using the commands below:

# run testing on the split1 of RGB data of UCF101 
CUDA_VISIBLE_DEVICES=0 python test.py ucf101 rgb 1
# run testing on the split1 of flow data of UCF101
CUDA_VISIBLE_DEVICES=0 python test.py ucf101 flow 1
# run testing both on RGB and flow data of split1 of UCF101
CUDA_VISIBLE_DEVICES=0 python test.py ucf101 mixed 1
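Conceptually, the mixed mode fuses the per-video class scores of the two streams. The exact weighting used by test.py may differ, but a minimal sketch of such a late fusion looks like this:

import numpy as np

def late_fusion(rgb_scores, flow_scores, rgb_weight=0.5):
    # Combine per-class scores from the RGB and flow streams, then pick the top class.
    fused = rgb_weight * rgb_scores + (1.0 - rgb_weight) * flow_scores
    return np.argmax(fused, axis=-1)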

Results

Our training results on UCF-101 Split-1 are:

Training Split    RGB      Flow     Fusion
Split1            94.7%    96.3%    97.6%

Thanks to the tf.data API, we can achieve a training speed of 1 s/batch (64 frames)!

Contact

This work is mainly done by Hao Zhou (Rhythmblue) and Hezhen Hu (AlexHu123). If you have any questions, please create an issue in this repo. We are very happy to hear from you!

i3d_finetune's People

Contributors

alexhu123, vra

i3d_finetune's Issues

where to get the tvl1_flow from?

Similarly, update data/ucf101/flow.txt for the flow data. Note: we use one file to cover both the x and y parts of the flow data, so each line uses {:s} as a placeholder for x or y in the data path. For example, if your flow data are placed like this:

|---tvl1_flow
|   |---x
|   |---y

then you can write each line in flow.txt like this:

v_Archery_g01_c06 /data4/zhouhao/dataset/ucf101/tvl1_flow/{:s}/v_Archery_g01_c06 107 2

i.e., use {:s} in place of x or y in the path. If you are confused, please refer to our code for the data-loading details.

Can anyone tell me where to get the tvl1_flow data from? And how can I make the flow.txt file from it?

Many thanks to the authors for the code. May I ask why, when I run test.py, it always shows that 0 items of data were read, which then causes a division-by-zero error when computing accuracy? I updated rgb.txt and flow.txt myself; the other txt files are the defaults.

----Here we start!----
Output wirtes to output/test-ucf101-rgb-1
Traceback (most recent call last):
File "test.py", line 268, in
main(**vars(p.parse_args()))
File "test.py", line 244, in main
accuracy = true_count / video_size
ZeroDivisionError: division by zero
Exception in thread Thread-1:
Traceback (most recent call last):
File "/data/amax/envs/LEE-tf/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/data/amax/envs/LEE-tf/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/data/amax/LEE/I3D_Finetune-master/lib/feed_queue.py", line 63, in fetch_queue
if self.bridge_queue.empty():
File "", line 2, in empty
File "/data/amax/envs/LEE-tf/lib/python2.7/multiprocessing/managers.py", line 759, in _callmethod
kind, result = conn.recv()
EOFError

Multi GPU training support

Hi, has anyone worked on multi-GPU training?
As the dataset becomes large, multi-GPU training becomes crucial.
@vra @AlexHu123 @Rhythmblue Any plan for adding multi-GPU support, or any pointers, would be really helpful. Thanks.
Really appreciate your work. Thanks for sharing it with the community.

An error when running finetune.py

When I run finetune.py, an error occurred. Below is the info:

Traceback (most recent call last):
File "finetune.py", line 305, in
main(**vars(p.parse_args()))
File "finetune.py", line 237, in main
feed_dict={dropout_holder: _DROPOUT, is_train_holder: True})
File "/home/liuxuepdf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 889, in run
run_metadata_ptr)
File "/home/liuxuepdf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "/home/liuxuepdf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
options, run_metadata)
File "/home/liuxuepdf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: TypeError: 'NoneType' object is not iterable
[[Node: PyFunc = PyFunc[Tin=[DT_STRING, DT_STRING, DT_STRING], Tout=[DT_FLOAT, DT_INT64], token="pyfunc_0"](arg0, PyFunc/input_1, PyFunc/input_2)]]
[[Node: IteratorGetNext = IteratorGetNextoutput_shapes=[, ], output_types=[DT_FLOAT, DT_INT64], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

Could you please tell me how to solve this error? Thx

May I ask: running entirely with the default configuration, and also on a 1080 GPU, is it normal to get these warnings about low GPU memory?

(base) junyi@junyi-all-series:/sda1/github/I3D_Finetune$ CUDA_VISIBLE_DEVICES=0 python finetune.py ucf101 rgb 1
2018-07-18 10:12:48.698756: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:05:00.0
totalMemory: 10.91GiB freeMemory: 10.35GiB
2018-07-18 10:12:48.698786: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1)
INFO:tensorflow:Restoring parameters from ./data/checkpoints/rgb_imagenet/model.ckpt
----Here we start!----
Output wirtes to output/finetune-ucf101-rgb-1
2018-07-18 10:13:08.888794: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.61GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-07-18 10:13:08.901823: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.82GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-07-18 10:13:08.974460: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.89GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-07-18 10:13:09.062963: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.54GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-07-18 10:13:09.117757: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.23GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
Epoch1, train accuracy: 0.042
Epoch2, train accuracy: 0.500

Reproduce on flow

Hi,
Thanks a lot for sharing the code!
Could you please share your hyperparameter settings for flow? Is the data augmentation only random crop and flip? I cannot reproduce the results on flow.
Thanks a lot!

License

i3d.py is licensed under Apache License 2.0 - is the rest of the code in this repository being released under the same license as well? Many thanks!

How to solve the problem that the program is killed when I run the "CUDA_VISIBLE_DEVICES=0 python finetune.py ucf101 rgb 1" command in the terminal?

The error information is reported as follows:

root@roronoa:~/AI/action_recognition/I3D_Finetune# CUDA_VISIBLE_DEVICES=0 python finetune.py ucf101 rgb 1
2019-03-28 14:58:42.898076: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
INFO:tensorflow:Restoring parameters from ./data/checkpoints/rgb_imagenet/model.ckpt
----Here we start!----
Output wirtes to output/finetune-ucf101-rgb-1
已杀死 (Killed)

Please allow me to ask three questions:
First, I have a GPU, a "GeForce GTX 1080" with 8GB of memory. Is that enough to process these frames?
Second, when I run the "CUDA_VISIBLE_DEVICES=0 python finetune.py ucf101 rgb 1" command in the terminal, as you can see in the error report, the GPU is not used. Why?
Finally, why is the program killed in the end? Must finetune.py use the GPU?

Rescaling

I realised that the test accuracy is not high because your RGB training input is in the [-1, 1] range but the test input is in the [0, 1] range.

How were the pretrained models trained? With the [-1, 1] range, or the [0, 1] range? I think the best is to make them consistent.
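For reference, the Kinetics-pretrained I3D RGB stream expects inputs rescaled to the [-1, 1] range, so train and test preprocessing should match. A minimal NumPy sketch of that rescaling (not the repo's own loader):

import numpy as np

def rescale_frames(frames_uint8):
    # Map raw pixel values in [0, 255] to [-1, 1].
    return frames_uint8.astype(np.float32) / 127.5 - 1.0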

Where to find zh_lib?

I can't find some of the functions used in test.py. Please provide the code for the following modules:
from zh_lib.dataset import ActionDataset
from zh_lib.load_data import load_info
from zh_lib.feed_queue import FeedQueue
from zh_lib.label_trans import

about test.py

Hi, I have noticed that there is a

rgb_fc_out = tf.layers.dense(rgb_logits_dropout, _CLASS_NUM[dataset], use_bias=True)

in your test.py. I don't understand this. This layer looks like it has no pretrained parameters; can anybody tell me what the function of this layer is?

About accuracy

Hello! I just followed the steps of the project to process the data and tried to train, but unfortunately for the UCF101 dataset (I chose two videos for each class) my training accuracy was incredibly low, only about five percent! Can someone help me resolve my doubts?

Missing Frames Errors from RGB Test File

When running the test code via CUDA_VISIBLE_DEVICES=0 python test.py ucf101 rgb 1, the number of frames in multiple entries in the data/ucf101/rgb.txt file does not match with the flow and rgb frames downloaded from https://github.com/feichtenhofer/twostreamfusion, resulting in the test code halting at various frames. This issue states that these sources were used for testing. For example, FileNotFoundError: [Errno 2] No such file or directory: /jpegs_256/v_Bowling_g05_c07/frame000113.jpg'
In all cases where the error occurs, 1-3 frames are missing. Was there another version of the ucf101 rgb and flow frames used during testing? Please advise.

pre-load problem

When I try to preload /data/checkpoints/rgb_scratch_kin600, I get:
NotFoundError (see above for traceback): Key RGB/inception_i3d/Conv3d_1a_7x7/batch_norm/beta not found in checkpoint
[[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
[[Node: save/RestoreV2_188/_87 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_169_save/RestoreV2_188", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

can we run this using python3?

Hey there, I am facing a problem while running this code using Python 3.
Following is the error I am getting:

CUDA_VISIBLE_DEVICES=0 python3 finetune.py ucf101 rgb 1

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:

WARNING:tensorflow:From finetune.py:67: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use
tf.py_function, which takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means tf.py_functions can use accelerators such as GPUs as well as
being differentiable using a gradient tape.

WARNING:tensorflow:From /home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/data/ops/iterator_ops.py:358: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/kt-gpu2/I3D_Finetune/i3d.py:466: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use rate instead of keep_prob. Rate should be set to rate = 1 - keep_prob.
WARNING:tensorflow:From finetune.py:164: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
WARNING:tensorflow:From /home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
2019-06-21 10:00:14.026572: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-06-21 10:00:14.027276: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x108608f0 executing computations on platform CUDA. Devices:
2019-06-21 10:00:14.027313: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce GTX 1060 6GB, Compute Capability 6.1
2019-06-21 10:00:14.051709: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3392080000 Hz
2019-06-21 10:00:14.052425: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x108c9540 executing computations on platform Host. Devices:
2019-06-21 10:00:14.052466: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
2019-06-21 10:00:14.052661: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7085
pciBusID: 0000:01:00.0
totalMemory: 5.93GiB freeMemory: 5.59GiB
2019-06-21 10:00:14.052688: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-06-21 10:00:14.053806: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-06-21 10:00:14.053831: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-06-21 10:00:14.053843: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-06-21 10:00:14.053963: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5425 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:From /home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from ./data/checkpoints/rgb_imagenet/model.ckpt
----Here we start!----
Output wirtes to output/finetune-ucf101-rgb-1
2019-06-21 10:00:38.094397: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
2019-06-21 10:00:38.313329: W tensorflow/core/framework/op_kernel.cc:1389] Invalid argument: TypeError: 'NoneType' object is not iterable
Traceback (most recent call last):

File "/home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/ops/script_ops.py", line 207, in call
ret = func(*args)

File "finetune.py", line 75, in process_video
clip_seq, label_seq = data.next_batch(1, _CLIP_SIZE)

File "/home/kt-gpu2/I3D_Finetune/lib/action_dataset.py", line 33, in next_batch
self.videos[self.perm[i]].get_frames(frame_num, data_augment=data_augment))

File "/home/kt-gpu2/I3D_Finetune/lib/video_3d.py", line 41, in get_frames
frames.extend(self.load_img((i-1)%self.total_frame_num+1))

TypeError: 'NoneType' object is not iterable

[... the same warning and traceback repeat many more times ...]

Traceback (most recent call last):
File "/home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: TypeError: 'NoneType' object is not iterable
Traceback (most recent call last):

File "/home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/ops/script_ops.py", line 207, in call
ret = func(*args)

File "finetune.py", line 75, in process_video
clip_seq, label_seq = data.next_batch(1, _CLIP_SIZE)

File "/home/kt-gpu2/I3D_Finetune/lib/action_dataset.py", line 33, in next_batch
self.videos[self.perm[i]].get_frames(frame_num, data_augment=data_augment))

File "/home/kt-gpu2/I3D_Finetune/lib/video_3d.py", line 41, in get_frames
frames.extend(self.load_img((i-1)%self.total_frame_num+1))

TypeError: 'NoneType' object is not iterable

 [[{{node PyFunc}}]]
 [[{{node IteratorGetNext}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "finetune.py", line 293, in
main(**vars(p.parse_args()))
File "finetune.py", line 232, in main
feed_dict={dropout_holder: _DROPOUT, is_train_holder: True})
File "/home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: TypeError: 'NoneType' object is not iterable
Traceback (most recent call last):

File "/home/kt-gpu2/.local/lib/python3.5/site-packages/tensorflow/python/ops/script_ops.py", line 207, in call
ret = func(*args)

File "finetune.py", line 75, in process_video
clip_seq, label_seq = data.next_batch(1, _CLIP_SIZE)

File "/home/kt-gpu2/I3D_Finetune/lib/action_dataset.py", line 33, in next_batch
self.videos[self.perm[i]].get_frames(frame_num, data_augment=data_augment))

File "/home/kt-gpu2/I3D_Finetune/lib/video_3d.py", line 41, in get_frames
frames.extend(self.load_img((i-1)%self.total_frame_num+1))

TypeError: 'NoneType' object is not iterable

 [[{{node PyFunc}}]]
 [[node IteratorGetNext (defined at finetune.py:140) ]]

code_zip.zip

Please explain the variables:
_CLIP_SIZE
_EACH_VIDEO_TEST_SIZE
_FRAME_SIZE

accuracy

Hello, when I use your pre-trained model (rgb_0.946) to test UCF-101 split 1, its accuracy is around 0.91!

Download data in correct format?

Is there somewhere I can download the training data already in the correct format (with the flow files, RGB files, and the corresponding txt files)? I'd like to go through the code line by line, but I need some data to run and understand it.

Thanks!

Reproduce results

Hi,
I'm trying to reproduce the results reported in the paper (and this repository) with the I3D model trained with only RGB on UCF-101 (split 1).
When I run test.py with the checkpoint provided in this repository (finetuned on UCF-101 RGB), I get a score of 89.4. The authors of this repository report 94.7.
Do you have any idea why there is a discrepancy between the scores?

train UCF101 from scratch accuracy

Hi, Thanks a lot for your excellent work.
I have been trying to train UCF101 from scratch, without Kinetics or ImageNet pretraining, using your code. But something's wrong with the test accuracy.
After about 33 epochs of training, the loss is around 0.1, the train accuracy is around 93.6%, and the test accuracy is around 58%. But when I run test.py, the accuracy is only about 8%, which is very weird.
I think the test data should be the same in your finetune.py and test.py, yet the accuracy is quite different. Or is there something I have been missing all along?
My only change to your code is commenting out the line 'saver.restore(sess, _CHECKPOINT_PATHS[train_data.mode+'_imagenet'])' (line 211 in finetune.py) so that training starts from scratch.
Please share your thoughts with me, or if anyone has faced the same problem, please let me know.

std::bad_alloc – massive 60GB memory leak?

Hi, I'm trying to finetune with two classes.
Even running the while loop for a single iteration causes the program to take more than 60GB of memory, which causes:

terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
/var/spool/PBS/mom_priv/jobs/1224593.pbs.SC: line 7: 13498 Aborted (core dumped) python finetune.py ucf101 rgb 1

I'm running Python 2 with Sonnet v1.23; the frame size of my JPEG sequences is 224 and they consist of 200 frames. Any ideas on how to fix this?

python Demo_Transfer_rgb.py

Demo_Transfer_rgb.py [-h] dataset_name data_tag
python Demo_Transfer_rgb.py needs two arguments: dataset_name and data_tag.

How to create flow.txt and rgb.txt files?

I've extracted the RGB and flow images to a local path, but how do I create the corresponding txt files? Can anyone share the script? Thank you very much!

Accuracy

My results on UCF101: RGB -> 90.1, optical flow -> 83.3.
When I train the I3D network on the optical-flow part using the ImageNet-pretrained model, after 40 epochs it achieves an accuracy of 83.3%. Should the hyperparameters be changed when I use different inputs (motion & appearance)? If not, what is the probable problem that occurred during my runs?

It's surprising that, using the same GPU (a 1080 Ti), the frame number has to be decreased to 32; otherwise it shows warnings that more than 2.61GB of memory is required to achieve normal performance.

Trained Models

Hello Rhythmblue

I am trying to use your pretrained models on UCF101.
On this site I found two links to the models (one on Google Drive, the other on Baidu).
The Google Drive version is in your trash; can you restore it one more time, please?

Regards, Reinier

accuracy

2018-09-13 04:56:39.376762: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.61GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.

The volatile GPU-Util is very low. Where in the code can I make adjustments to reduce memory usage?

67.56 accuracy test ucf101 mixed 1

Hello, sorry to disturb you~ here are some questions that came up when running the project:

  1. When I run test.py on ucf101 mixed 1, I get only 0.6756 accuracy, while the RGB model gets 0.89 and the flow model gets 0.95.

  2. Also I found that at test.py line 196, "input_label=np.array([label]).reshape(-1)", the input_label sometimes does not match the video being tested; e.g. for v_ApplyEyeMakeup_g07_c07 the prediction and the video name are 'ApplyEyeMakeup', but input_label changed to ApplyLipstick's index.
    So I changed the code to
    video_label = rgb_data.Videos[i].label
    video_label = flow_data.Videos[i].label
    input_label=np.array([video_label]).reshape(-1)

but the accuracy is not improved.

When I train my own dataset, I get this problem. Why? num_class=2

InvalidArgumentError (see above for traceback): buffer_size must be greater than zero.
[[Node: ShuffleDataset = ShuffleDataset[_class=["loc:@iterator"], output_shapes=[[]], output_types=[DT_FLOAT], reshuffle_each_iteration=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](TensorSliceDataset, buffer_size, buffer_size, buffer_size)]]

Why aren't the loss and the accuracy stable during training? Is it related to the unordered data in my rgb.txt?

The training results are as follows:
step: 1540, loss: 1.3989, accuracy: 0.588 (13.88 sec/batch)
step: 1560, loss: 1.2763, accuracy: 0.625 (12.85 sec/batch)
step: 1580, loss: 2.9938, accuracy: 0.613 (12.66 sec/batch)
step: 1600, loss: 1.3215, accuracy: 0.700 (1.01 sec/batch)
step: 1620, loss: 0.4965, accuracy: 0.713 (17.85 sec/batch)
step: 1640, loss: 1.4940, accuracy: 0.688 (4.32 sec/batch)
step: 1660, loss: 0.4554, accuracy: 0.613 (5.13 sec/batch)
step: 1680, loss: 1.0088, accuracy: 0.662 (6.21 sec/batch)
step: 1700, loss: 0.1014, accuracy: 0.700 (5.01 sec/batch)
step: 1720, loss: 0.2080, accuracy: 0.713 (6.62 sec/batch)
step: 1740, loss: 1.7426, accuracy: 0.725 (3.81 sec/batch)
step: 1760, loss: 1.6926, accuracy: 0.738 (12.97 sec/batch)
step: 1780, loss: 0.3962, accuracy: 0.688 (18.84 sec/batch)
step: 1800, loss: 1.0156, accuracy: 0.738 (3.54 sec/batch)
step: 1820, loss: 0.4570, accuracy: 0.713 (11.37 sec/batch)
step: 1840, loss: 0.1834, accuracy: 0.750 (1.00 sec/batch)
step: 1860, loss: 1.3550, accuracy: 0.600 (5.51 sec/batch)
step: 1880, loss: 0.1034, accuracy: 0.812 (4.54 sec/batch)
step: 1900, loss: 3.2641, accuracy: 0.750 (2.22 sec/batch)
step: 1920, loss: 2.3116, accuracy: 0.637 (13.77 sec/batch)
step: 1940, loss: 0.4947, accuracy: 0.775 (17.61 sec/batch)
step: 1960, loss: 1.2279, accuracy: 0.613 (1.03 sec/batch)
step: 1980, loss: 1.3556, accuracy: 0.650 (1.05 sec/batch)

The data in rgb.txt is unordered:

v_HighJump_g25_c04 /media/zoro/ZORO/UCF-101-FRAME/HighJump/v_HighJump_g25_c04 96 39
v_HighJump_g25_c05 /media/zoro/ZORO/UCF-101-FRAME/HighJump/v_HighJump_g25_c05 96 39
v_RopeClimbing_g19_c05 /media/zoro/ZORO/UCF-101-FRAME/RopeClimbing/v_RopeClimbing_g19_c05 164 74
v_YoYo_g12_c04 /media/zoro/ZORO/UCF-101-FRAME/YoYo/v_YoYo_g12_c04 171 100
v_RopeClimbing_g13_c02 /media/zoro/ZORO/UCF-101-FRAME/RopeClimbing/v_RopeClimbing_g13_c02 77 74
v_RopeClimbing_g13_c03 /media/zoro/ZORO/UCF-101-FRAME/RopeClimbing/v_RopeClimbing_g13_c03 77 74
v_RopeClimbing_g13_c01 /media/zoro/ZORO/UCF-101-FRAME/RopeClimbing/v_RopeClimbing_g13_c01 77 74
v_RopeClimbing_g13_c06 /media/zoro/ZORO/UCF-101-FRAME/RopeClimbing/v_RopeClimbing_g13_c06 98 74
v_RopeClimbing_g13_c07 /media/zoro/ZORO/UCF-101-FRAME/RopeClimbing/v_RopeClimbing_g13_c07 84 74
v_RopeClimbing_g13_c04 /media/zoro/ZORO/UCF-101-FRAME/RopeClimbing/v_RopeClimbing_g13_c04 89 74
v_RopeClimbing_g13_c05 /media/zoro/ZORO/UCF-101-FRAME/RopeClimbing/v_RopeClimbing_g13_c05 77 74
v_IceDancing_g04_c02 /media/zoro/ZORO/UCF-101-FRAME/IceDancing/v_IceDancing_g04_c02 249 43
v_IceDancing_g04_c03 /media/zoro/ZORO/UCF-101-FRAME/IceDancing/v_IceDancing_g04_c03 255 43
v_IceDancing_g04_c01 /media/zoro/ZORO/UCF-101-FRAME/IceDancing/v_IceDancing_g04_c01 255 43
v_IceDancing_g04_c06 /media/zoro/ZORO/UCF-101-FRAME/IceDancing/v_IceDancing_g04_c06 261 43
v_HighJump_g25_c02 /media/zoro/ZORO/UCF-101-FRAME/HighJump/v_HighJump_g25_c02 120 39
v_IceDancing_g04_c04 /media/zoro/ZORO/UCF-101-FRAME/IceDancing/v_IceDancing_g04_c04 250 43
v_IceDancing_g04_c05 /media/zoro/ZORO/UCF-101-FRAME/IceDancing/v_IceDancing_g04_c05 249 43
v_Fencing_g25_c01 /media/zoro/ZORO/UCF-101-FRAME/Fencing/v_Fencing_g25_c01 123 27
v_HighJump_g25_c03 /media/zoro/ZORO/UCF-101-FRAME/HighJump/v_HighJump_g25_c03 85 39
v_PlayingTabla_g03_c05 /media/zoro/ZORO/UCF-101-FRAME/PlayingTabla/v_PlayingTabla_g03_c05 254 65
v_CricketShot_g16_c02 /media/zoro/ZORO/UCF-101-FRAME/CricketShot/v_CricketShot_g16_c02 100 23
v_CricketShot_g16_c03 /media/zoro/ZORO/UCF-101-FRAME/CricketShot/v_CricketShot_g16_c03 98 23
v_ThrowDiscus_g19_c01 /media/zoro/ZORO/UCF-101-FRAME/ThrowDiscus/v_ThrowDiscus_g19_c01 68 92
v_CricketShot_g16_c01 /media/zoro/ZORO/UCF-101-FRAME/CricketShot/v_CricketShot_g16_c01 104 23
v_CricketShot_g16_c06 /media/zoro/ZORO/UCF-101-FRAME/CricketShot/v_CricketShot_g16_c06 57 23
v_CricketShot_g16_c07 /media/zoro/ZORO/UCF-101-FRAME/CricketShot/v_CricketShot_g16_c07 50 23
v_CricketShot_g16_c04 /media/zoro/ZORO/UCF-101-FRAME/CricketShot/v_CricketShot_g16_c04 73 23
v_CricketShot_g16_c05 /media/zoro/ZORO/UCF-101-FRAME/CricketShot/v_CricketShot_g16_c05 47 23
v_SoccerPenalty_g04_c02 /media/zoro/ZORO/UCF-101-FRAME/SoccerPenalty/v_SoccerPenalty_g04_c02 72 84
v_CleanAndJerk_g08_c03 /media/zoro/ZORO/UCF-101-FRAME/CleanAndJerk/v_CleanAndJerk_g08_c03 110 20
v_CleanAndJerk_g08_c02 /media/zoro/ZORO/UCF-101-FRAME/CleanAndJerk/v_CleanAndJerk_g08_c02 107 20
v_CleanAndJerk_g08_c01 /media/zoro/ZORO/UCF-101-FRAME/CleanAndJerk/v_CleanAndJerk_g08_c01 236 20
v_CleanAndJerk_g08_c04 /media/zoro/ZORO/UCF-101-FRAME/CleanAndJerk/v_CleanAndJerk_g08_c04 278 20
v_Typing_g19_c01 /media/zoro/ZORO/UCF-101-FRAME/Typing/v_Typing_g19_c01 213 94
v_LongJump_g07_c02 /media/zoro/ZORO/UCF-101-FRAME/LongJump/v_LongJump_g07_c02 171 50
v_LongJump_g07_c03 /media/zoro/ZORO/UCF-101-FRAME/LongJump/v_LongJump_g07_c03 102 50
v_LongJump_g07_c01 /media/zoro/ZORO/UCF-101-FRAME/LongJump/v_LongJump_g07_c01 123 50
v_LongJump_g07_c04 /media/zoro/ZORO/UCF-101-FRAME/LongJump/v_LongJump_g07_c04 82 50
v_LongJump_g07_c05 /media/zoro/ZORO/UCF-101-FRAME/LongJump/v_LongJump_g07_c05 118 50
v_BodyWeightSquats_g18_c03 /media/zoro/ZORO/UCF-101-FRAME/BodyWeightSquats/v_BodyWeightSquats_g18_c03 211 14
v_Surfing_g24_c01 /media/zoro/ZORO/UCF-101-FRAME/Surfing/v_Surfing_g24_c01 249 87
v_HorseRace_g20_c02 /media/zoro/ZORO/UCF-101-FRAME/HorseRace/v_HorseRace_g20_c02 327 40

Question about the number of test samples?

Thank you for your interesting code.

I tried to reproduce the experimental results and found that in the testing phase there are only 3755 samples when I run 'python test.py ucf101 rgb 1'. Actually, there should be 3783 samples.

Could you explain why they are different? Or maybe I make some mistakes.

Thank you again.

Training on Own Dataset

Hello,
Thank you for the work. Is there an easy way to train the models on my own dataset with some new classes? I have tried it with 9 classes (created _Classnum), but I get two errors:

python3 finetune.py ucf9 rgb 1
/usr/local/lib/python3.5/dist-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
2018-06-12 16:28:40.489353: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-06-12 16:28:40.576145: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-06-12 16:28:40.576407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
name: GeForce GTX 960M major: 5 minor: 0 memoryClockRate(GHz): 1.176
pciBusID: 0000:01:00.0
totalMemory: 3.95GiB freeMemory: 3.42GiB
2018-06-12 16:28:40.576421: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-06-12 16:28:41.162533: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-12 16:28:41.162570: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-06-12 16:28:41.162575: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-06-12 16:28:41.162758: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3151 MB memory) -> physical GPU (device: 0, name: GeForce GTX 960M, pci bus id: 0000:01:00.0, compute capability: 5.0)
INFO:tensorflow:Restoring parameters from ./data/checkpoints/rgb_imagenet/model.ckpt
----Here we start!----
Output wirtes to output/finetune-ucf9-rgb-1
2018-06-12 16:29:14.904305: W tensorflow/core/framework/op_kernel.cc:1261] Invalid argument: TypeError: 'NoneType' object is not iterable
Traceback (most recent call last):

File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/script_ops.py", line 147, in call
ret = func(*args)

File "finetune.py", line 73, in process_video
clip_seq, label_seq = data.next_batch(1, _CLIP_SIZE)

File "/home/chris/Desktop/Action/I3D_Finetune/lib/action_dataset.py", line 33, in next_batch
self.videos[self.perm[i]].get_frames(frame_num, data_augment=data_augment))

File "/home/chris/Desktop/Action/I3D_Finetune/lib/video_3d.py", line 41, in get_frames
frames.extend(self.load_img((i-1)%self.total_frame_num+1))

TypeError: 'NoneType' object is not iterable

2018-06-12 16:29:14.904819: W tensorflow/core/framework/op_kernel.cc:1261] Invalid argument: TypeError: 'NoneType' object is not iterable
Traceback (most recent call last):

File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/script_ops.py", line 147, in call
ret = func(*args)

File "finetune.py", line 73, in process_video
clip_seq, label_seq = data.next_batch(1, _CLIP_SIZE)

File "/home/chris/Desktop/Action/I3D_Finetune/lib/action_dataset.py", line 33, in next_batch
self.videos[self.perm[i]].get_frames(frame_num, data_augment=data_augment))

File "/home/chris/Desktop/Action/I3D_Finetune/lib/video_3d.py", line 41, in get_frames
frames.extend(self.load_img((i-1)%self.total_frame_num+1))

TypeError: 'NoneType' object is not iterable


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "finetune.py", line 291, in
main(**vars(p.parse_args()))
File "finetune.py", line 230, in main
feed_dict={dropout_holder: _DROPOUT, is_train_holder: True})
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 905, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1140, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1321, in _do_run
run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: TypeError: 'NoneType' object is not iterable
Traceback (most recent call last):

File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/script_ops.py", line 147, in call
ret = func(*args)

File "finetune.py", line 73, in process_video
clip_seq, label_seq = data.next_batch(1, _CLIP_SIZE)

File "/home/chris/Desktop/Action/I3D_Finetune/lib/action_dataset.py", line 33, in next_batch
self.videos[self.perm[i]].get_frames(frame_num, data_augment=data_augment))

File "/home/chris/Desktop/Action/I3D_Finetune/lib/video_3d.py", line 41, in get_frames
frames.extend(self.load_img((i-1)%self.total_frame_num+1))

TypeError: 'NoneType' object is not iterable

 [[Node: PyFunc = PyFunc[Tin=[DT_STRING, DT_STRING, DT_STRING], Tout=[DT_FLOAT, DT_INT64], token="pyfunc_0"](arg0, PyFunc/input_1, PyFunc/input_2)]]
 [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[<unknown>, <unknown>], output_types=[DT_FLOAT, DT_INT64], _device="/job:localhost/replica:0/task:0/device:CPU:0"](Iterator)]]
 [[Node: IteratorGetNext/_467 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_141_IteratorGetNext", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]

The first error appears a dozen times, the second only twice.
I can't find what the problem is here. Maybe you can help me.
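
In my experience this particular traceback ('NoneType' object is not iterable, raised from get_frames/load_img) usually means an image read returned None, i.e. a frame listed in the txt file does not exist on disk or the frame count in the list is wrong. A small standalone checker to run first (the list-file path and frame extensions below are assumptions; adjust them to your setup):

import os

LIST_FILE = 'data/ucf9/rgb.txt'  # hypothetical path to your list file

with open(LIST_FILE) as f:
    for line in f:
        name, img_dir, num_imgs, label = line.split()
        if not os.path.isdir(img_dir):
            print('missing directory: %s' % img_dir)
            continue
        found = len([x for x in os.listdir(img_dir)
                     if x.lower().endswith(('.jpg', '.jpeg', '.png'))])
        if found < int(num_imgs):
            print('%s: list file claims %s frames, found %d' % (name, num_imgs, found))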

Dependencies installation failed on macOS Catalina ( 10.15.7 )

Hey there all,

I am using Anaconda 4.10.3 and trying to run I3D_Finetune on macOS, but I get a conflict error when executing the command:
conda create -n env_sonnet "tensorflow<2" "dm-sonnet<2" "tensorflow-probability==0.7.0"
The error says:
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: /
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

Package tensorflow conflicts for:
dm-sonnet[version='<2'] -> tensorflow-probability[version='<0.7'] -> tensorflow[version='>=1.11.0']
dm-sonnet[version='<2'] -> tensorflow[version='>=1.5.0|>=1.8.0']
tensorflow[version='<2']
tensorflow-probability==0.7.0 -> tensorflow[version='>=1.14.0']

Package tensorflow-probability conflicts for:
tensorflow-probability==0.7.0
dm-sonnet[version='<2'] -> tensorflow-probability[version='<0.7']

Package numpy-base conflicts for:
tensorflow[version='<2'] -> numpy[version='>=1.13.3'] -> numpy-base (followed by a very long list of candidate versions and build strings)
tensorflow-probability==0.7.0 -> numpy[version='>=1.13.3'] -> numpy-base (followed by a very long list of candidate versions and build strings)

The following specifications were found to be incompatible with your system:

  • feature:/osx-64::__osx==10.15.7=0
  • feature:|@/osx-64::__osx==10.15.7=0

Your installed version is: 10.15.7

Note that strict channel priority may have removed packages required for satisfiability.

I need to run I3D on my own custom dataset, but I am stuck here.
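
Reading the conflict output above, dm-sonnet[version='<2'] requires tensorflow-probability[version='<0.7'], which can never be satisfied together with the explicit tensorflow-probability==0.7.0 request. One thing worth trying (no guarantee it resolves every macOS conflict) is to relax that pin and let conda pick a compatible version, e.g.:

conda create -n env_sonnet "tensorflow<2" "dm-sonnet<2" "tensorflow-probability<0.7"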

multi GPU

If I want to use multiple GPUs for training, what should I do?

Data Preprocessing

Hi, thanks for your great work!
I noticed that you use the equation 2*(x/255) - 1 to rescale the image data to the range -1 to 1. The official I3D repo seems to contain no image-preprocessing code; there is an issue discussing this question, but it seems no one knows how to transform the demo images to match the data in the ".npy" file. So why use this equation to transform the images? Does using it reproduce the data in the ".npy" file?
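
For what it's worth, the rescaling itself is just a linear map from [0, 255] to [-1, 1]. A minimal sketch of what that looks like on a decoded frame (the OpenCV loading and color conversion are assumptions, not necessarily the repo's exact pipeline):

import numpy as np
import cv2  # assumption: frames are read with OpenCV

img = cv2.imread('/path/to/frame.jpg')        # uint8 values in [0, 255]
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)    # OpenCV loads BGR; convert to RGB
img = img.astype(np.float32)
img = 2.0 * (img / 255.0) - 1.0               # the rescaling equation discussed above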

UCF-101 test loop getting stuck -- Script is running but video is stuck

On running CUDA_VISIBLE_DEVICES=0 python finetune.py ucf101 flow 2, processing gets stuck at a video. I tried test lists 1, 2, and 3, but it gets stuck at different videos each time. It works fine with optical flow only, but when running RGB only or mixed, the loop just gets stuck after some time. I suspect there is something wrong with the RGB code.
FYI: I tried two different sets of RGB frames and the RGB list is valid.
