
philferriere / tfoptflow


Optical Flow Prediction with TensorFlow. Implements "PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume," by Deqing Sun et al. (CVPR 2018)

License: MIT License

Python 0.27% Jupyter Notebook 99.73%
optical-flow computer-vision cvpr2018 pwc-net tensorflow deep-learning motion-estimation mpi-sintel flying-chairs kitti-dataset

tfoptflow's Introduction

Optical Flow Prediction with TensorFlow

This repo provides a TensorFlow-based implementation of the wonderful paper "PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume," by Deqing Sun et al. (CVPR 2018).

There are already a few attempts at implementing PWC-Net using TensorFlow out there. However, they either use outdated architectures of the paper's CNN networks, only provide TF inference (no TF training), only work on Linux platforms, or do not support multi-GPU training.

This implementation provides both TF-based training and inference. It is portable: because it doesn't use any dynamically loaded CUDA-based TensorFlow user ops, it works on Linux and Windows. It also supports multi-GPU training (the notebooks and results shown here were collected on a GTX 1080 Ti paired with a Titan X). The code also allows for mixed-precision training.

Finally, as shown in the "Links to pre-trained models" section, we achieve better results than the ones reported in the official paper on the challenging MPI-Sintel 'final' dataset.


Background

The purpose of optical flow estimation is to generate a dense 2D real-valued (u,v vector) map of the motion occurring from one video frame to the next. This information can be very useful when trying to solve computer vision problems such as object tracking, action recognition, video object segmentation, etc.

Figure [2017a] (a) below shows training pairs (black and white frames 0 and 1) from the Middlebury Optical Flow dataset as well as their color-coded optical flow ground truth. Figure (b) indicates the color coding used for easy visualization of the (u,v) flow fields. Usually, vector orientation is represented by color hue while vector length is encoded by color saturation:

The most common measures used to evaluate the quality of optical flow estimation are angular error (AE) and endpoint error (EPE). The angular error between two optical flow vectors (u0, v0) and (u1, v1) is defined as the arccos of their normalized dot product. The endpoint error measures the distance between the endpoints of two optical flow vectors (u0, v0) and (u1, v1) and is defined as sqrt((u0 - u1)² + (v0 - v1)²).
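As a quick reference, here's a minimal NumPy sketch of both measures (the function names are ours, chosen for illustration; the AE variant below uses the common (u, v, 1) extension of the flow vectors so that zero-length flows are well defined):

import numpy as np

def endpoint_error(flow_pred, flow_gt):
    """Per-pixel EPE between two (H, W, 2) flow fields."""
    return np.sqrt(np.sum((flow_pred - flow_gt) ** 2, axis=-1))

def angular_error(flow_pred, flow_gt):
    """Per-pixel AE, computed on the (u, v, 1)-extended, normalized flow vectors."""
    ones = np.ones(flow_pred.shape[:-1] + (1,))
    p = np.concatenate([flow_pred, ones], axis=-1)
    g = np.concatenate([flow_gt, ones], axis=-1)
    cos = np.sum(p * g, axis=-1) / (np.linalg.norm(p, axis=-1) * np.linalg.norm(g, axis=-1))
    return np.arccos(np.clip(cos, -1.0, 1.0))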

Environment Setup

The code in this repo was developed and tested using Anaconda3 v.5.2.0. To reproduce our conda environment, please refer to the following files:

On Ubuntu: dlubu36.yml (see the conda command below)

On Windows:
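For instance, recreating the Ubuntu environment boils down to:

conda env create -f ./dlubu36.yml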

Links to pre-trained models

Pre-trained models can be found here. They come in two flavors: "small" (sm, with 4,705,064 learned parameters) models don't use dense or residual connections; "large" (lg, with 14,079,050 learned parameters) models do. They are all built with a 6-level pyramid, upsample level 2 by 4 in each dimension to generate the final prediction, and construct an 81-channel cost volume at each level from a search range (maximum displacement) of 4, i.e., (2×4+1)² = 81 candidate displacements.

Please note that we trained these models using slightly different dataset and learning rate schedules. The official multistep schedule discussed in [2018a] is as follows: S_long (1.2M training iterations, batch size 8) + S_fine (500k finetuning iterations, batch size 4). Ours is S_long only: 1.2M iterations, batch size 8, on a mix of FlyingChairs and FlyingThings3DHalfRes. FlyingThings3DHalfRes is our own version of FlyingThings3D in which every input image pair and ground-truth flow has been downsampled by two in each dimension. We also use a different set of augmentation techniques.

Model performance

Model name                                  Notebooks   FlyingChairs (384x512) AEPE   Sintel clean (436x1024) AEPE   Sintel final (436x1024) AEPE
pwcnet-lg-6-2-multisteps-chairsthingsmix    train       1.44 (notebook)               2.60 (notebook)                3.70 (notebook)
pwcnet-sm-6-2-multisteps-chairsthingsmix    train       1.71 (notebook)               2.96 (notebook)                3.83 (notebook)

As a reference, here are the official, reported results:

Model inference times

We also measured the following MPI-Sintel (436 x 1024) inference times on a few GPUs:

Model name                              Titan X   GTX 1080   GTX 1080 Ti
pwcnet-lg-6-2-cyclic-chairsthingsmix    90ms      81ms       68ms
pwcnet-sm-6-2-cyclic-chairsthingsmix    68.5ms    64.4ms     53.8ms
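If you'd like to reproduce these measurements on your own hardware, a simple approach is to time repeated calls to predict_from_img_pairs() (a rough sketch; `nn` and `img_pairs` are set up as in the inference sample shown later in this README, and the warm-up runs absorb one-time graph initialization costs):

import time

for _ in range(10):  # warm-up: the first runs include graph initialization overhead
    nn.predict_from_img_pairs(img_pairs, batch_size=1, verbose=False)

n_runs = 100
start = time.time()
for _ in range(n_runs):
    nn.predict_from_img_pairs(img_pairs, batch_size=1, verbose=False)
elapsed_ms = (time.time() - start) / (n_runs * len(img_pairs)) * 1000.
print(f'{elapsed_ms:.1f} ms per image pair')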

A few clarifications about the numbers above...

First, please note that this implementation is, by design, portable, i.e., it doesn't use any user-defined CUDA kernels whereas the official NVidia implementation does. Ours will work on any OS and any hardware configuration (even one without a GPU) that can run TensorFlow.

Second, the timing numbers we report are the inference times of the models trained on FlyingChairs and FlyingThings3DHalfRes. These are models that you can train longer if you want to, or finetune using an additional dataset, should you want to do so. In other words, these graphs haven't been frozen yet.

In a typical production environment, you would freeze the model after final training/finetuning and optimize the graph to whatever platform(s) you need to distribute them on using TensorFlow XLA or TensorRT. In that important context, the inference numbers we report on unoptimized graphs are rather meaningless.

PWC-Net

Basic Idea

Per [2018a], PWC-Net improves on FlowNet2 [2016a] by adding domain knowledge into the design of the network. The basic idea behind optical flow estimation is that a pixel will retain most of its brightness over time despite a positional change from one frame to the next ("brightness constancy"). We can grab a small patch around a pixel in video frame 1 and find another small patch in video frame 2 that will maximize some function (e.g., normalized cross-correlation) of the two patches. Sliding the frame 1 patch over the entire frame 2, looking for a peak, generates what's called a cost volume (the C in PWC). This technique is fairly robust (invariant to color change) but is expensive to compute. In some cases, you may need a fairly large patch to reduce the number of false matches, raising the complexity even more.
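To make the cost-volume idea concrete, here's a minimal NumPy sketch that scores all candidate displacements for a single pixel using normalized cross-correlation (purely illustrative: PWC-Net correlates learned feature maps, not raw pixels, and does so densely for all pixels at once):

import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally-sized patches."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9)

def cost_volume_at(frame1, frame2, i, j, half=3, max_disp=4):
    """Matching costs for pixel (i, j) of frame1 over a (2*max_disp+1)^2 search window."""
    p1 = frame1[i - half:i + half + 1, j - half:j + half + 1]
    costs = np.zeros((2 * max_disp + 1, 2 * max_disp + 1))
    for di in range(-max_disp, max_disp + 1):      # candidate vertical displacements
        for dj in range(-max_disp, max_disp + 1):  # candidate horizontal displacements
            p2 = frame2[i + di - half:i + di + half + 1, j + dj - half:j + dj + half + 1]
            costs[di + max_disp, dj + max_disp] = ncc(p1, p2)
    return costs  # 81 costs when max_disp=4 -- one per candidate displacement

With max_disp=4 that's 81 candidate displacements per pixel, which is where the 81-channel cost volume mentioned earlier comes from; the location of the peak is the motion estimate for that pixel.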

To alleviate the cost of generating the cost volume, the first optimization is to use pyramidal processing (the P in PWC). Using a lower-resolution image lets you slide a smaller patch from frame 1 over a smaller version of frame 2, yielding a smaller motion vector, then use that information as a hint to perform a more targeted search at the next level of resolution in the pyramid. That multiscale motion estimation can be performed in the image domain or in the feature domain (i.e., using the downscaled feature maps generated by a convnet). In practice, PWC-Net warps (the W in PWC) the features of frame 2 using an upsampled version of the motion flow estimated at a lower resolution, because this leads to searching for a smaller motion increment at the next higher resolution level of the pyramid (hence, allowing for a smaller search range). Here's a screenshot of a talk given by Deqing Sun that illustrates this process using a 2-level pyramid:
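For the warping step, here's a deliberately simplified nearest-neighbor backward warp in NumPy (the repo's actual implementation, dense_image_warp in core_warp.py, uses bilinear interpolation and operates on batched feature maps):

import numpy as np

def backward_warp(img2, flow):
    """Warp img2 toward img1: output pixel (i, j) samples img2 at (i + v, j + u)."""
    h, w = flow.shape[:2]
    gi, gj = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    src_i = np.clip(np.rint(gi + flow[..., 1]), 0, h - 1).astype(int)
    src_j = np.clip(np.rint(gj + flow[..., 0]), 0, w - 1).astype(int)
    return img2[src_i, src_j]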

Note that none of the three optimizations used here (P/W/C) are unique to PWC-Net. These are techniques that were also used in SpyNet [2016b] and FlowNet2 [2016a]. However, here, they are used on the CNN features, rather than on an image pyramid:

The authors also acknowledge that careful data augmentation (e.g., adding horizontal flipping) was necessary to reach the best performance. To improve robustness, they also recommend training on multiple datasets (Sintel+KITTI+HD1K, for example) with careful rebalancing of class imbalance.

Since this algorithm only works on two consecutive frames at a time, it has the same limitations as all methods that only use image pairs (instead of n frames with n>2). Namely, if an object moves out of frame, the predicted flow will likely have a large EPE. As the authors remark, techniques that use a larger number of frames can compensate for this limitation by propagating motion information over time. The model also sometimes fails for small, fast-moving objects.

Network

Here's a picture of the network architecture described in [2018a]:

Jupyter Notebooks

The recommended way to test this implementation is to use the following Jupyter notebooks:

Training

Multisteps learning rate schedule

Unlike the original paper, we do not train on FlyingChairs and FlyingThings3D sequentially (i.e., pre-train on FlyingChairs, then finetune on FlyingThings3D). This is because the average flow magnitude on the MPI-Sintel dataset is only 13.5, while the average flow magnitudes on FlyingChairs and FlyingThings3D are 11.1 and 38, respectively. In our experiments, finetuning on FlyingThings3D would only yield worse results on MPI-Sintel.

We got more stable results by using a half-resolution version of the FlyingThings3D dataset with an average flow magnitude of 19, much closer to FlyingChairs and MPI-Sintel in that respect. We then trained on a mix of the FlyingChairs and FlyingThings3DHalfRes datasets. This mix, of course, could be extended with additional datasets.
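In TF 1.x terms, a multistep schedule of this kind is just a piecewise-constant learning rate. The boundaries below follow the S_long schedule described in [2016a] (initial rate 0.0001, halved at 400k, 600k, 800k, and 1M iterations); that exact breakdown is an assumption on our part, so check the training notebooks for the values actually used:

import tensorflow as tf

# Assumed S_long boundaries/rates; see the training notebooks for the exact schedule
boundaries = [400000, 600000, 800000, 1000000]
values = [1e-4, 5e-5, 2.5e-5, 1.25e-5, 6.25e-6]
global_step = tf.train.get_or_create_global_step()
lr = tf.train.piecewise_constant(global_step, boundaries, values)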

Here are the training curves for the S_long training notebooks listed above:

Note that, if you click on the IMAGE tab in Tensorboard while running the training notebooks above, you will be able to visualize the progress of the training on a few validation samples (including the predicted flows at each pyramid level), as demonstrated here:

Cyclic learning rate schedule

If you don't want to use the long training schedule, but still would like to play with this code, try our very short cyclic learning rate schedule (100k iters, batch size 8). The results are nowhere near as good, but they allow for quick experimentation:

Model name                              Notebooks   FlyingChairs (384x512) AEPE   Sintel clean (436x1024) AEPE   Sintel final (436x1024) AEPE
pwcnet-lg-6-2-cyclic-chairsthingsmix    train       2.67 (notebook)               3.99 (notebook)                5.08 (notebook)
pwcnet-sm-6-2-cyclic-chairsthingsmix    train       2.79 (notebook)               4.34 (notebook)                5.3 (notebook)
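For reference, a triangular cyclic schedule can be written in a few lines of TF 1.x (the base/max rates and step size here are illustrative, not necessarily the ones used in the cyclic notebooks):

import tensorflow as tf

def triangular_cyclic_lr(global_step, base_lr=1e-5, max_lr=1e-4, step_size=10000):
    """Linearly ramp the rate between base_lr and max_lr over 2*step_size iterations."""
    step = tf.cast(global_step, tf.float32)
    cycle = tf.floor(1. + step / (2. * step_size))
    x = tf.abs(step / step_size - 2. * cycle + 1.)
    return base_lr + (max_lr - base_lr) * tf.maximum(0., 1. - x)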

Below are the training curves for the Cyclic_short training notebooks:

Mixed-precision training

You can speed up training even further by using mixed-precision training. But, again, don't expect the same level of accuracy:

Model name                                   Notebooks   FlyingChairs (384x512) AEPE   Sintel clean (436x1024) AEPE   Sintel final (436x1024) AEPE
pwcnet-sm-6-2-cyclic-chairsthingsmix-fp16    train       2.47 (notebook)               3.77 (notebook)                4.90 (notebook)
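In this codebase, mixed-precision mode is toggled through a model option; the same use_mixed_precision flag shows up in the model configuration dump reproduced in one of the issues below:

nn_opts['use_mixed_precision'] = True  # fp16 training/inference; False in the default test options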

Evaluation

As shown in the evaluation notebooks, and as expected, it becomes harder for the PWC-Net models to deliver accurate flow predictions if the average flow magnitude from one frame to the next is high:

It is especially hard for this model -- and any other 2-frame based motion estimator! -- to generate accurate predictions when picture elements simply disappear out of frame or suddenly fly in:

Still, when the average motion is moderate, both the small and large models generate remarkable results:

Inference

There are two ways you can call the code provided here to generate flow predictions for your own dataset:

  • Pass a list of image pairs to a ModelPWCNet object using its predict_from_img_pairs() method
  • Pass an OpticalFlowDataset object to a ModelPWCNet object and call its predict() method

Running inference on image pairs

If you want to use a pre-trained PWC-Net model on your own set of images, you can pass a list of image pairs to a ModelPWCNet object using its predict_from_img_pairs() method, as demonstrated here:

from __future__ import absolute_import, division, print_function
from copy import deepcopy
from skimage.io import imread
from model_pwcnet import ModelPWCNet, _DEFAULT_PWCNET_TEST_OPTIONS
from visualize import display_img_pairs_w_flows

# Build a list of image pairs to process
img_pairs = []
for pair in range(1, 4):
    image_path1 = f'./samples/mpisintel_test_clean_ambush_1_frame_00{pair:02d}.png'
    image_path2 = f'./samples/mpisintel_test_clean_ambush_1_frame_00{pair+1:02d}.png'
    image1, image2 = imread(image_path1), imread(image_path2)
    img_pairs.append((image1, image2))

# TODO: Set device to use for inference
# Here, we're using a GPU (use '/device:CPU:0' to run inference on the CPU)
gpu_devices = ['/device:GPU:0']  
controller = '/device:GPU:0'

# TODO: Set the path to the trained model (make sure you've downloaded it first from http://bit.ly/tfoptflow)
ckpt_path = './models/pwcnet-lg-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-595000'

# Configure the model for inference, starting with the default options
nn_opts = deepcopy(_DEFAULT_PWCNET_TEST_OPTIONS)
nn_opts['verbose'] = True
nn_opts['ckpt_path'] = ckpt_path
nn_opts['batch_size'] = 1
nn_opts['gpu_devices'] = gpu_devices
nn_opts['controller'] = controller

# We're running the PWC-Net-large model in quarter-resolution mode
# That is, with a 6 level pyramid, and upsampling of level 2 by 4 in each dimension as the final flow prediction
nn_opts['use_dense_cx'] = True
nn_opts['use_res_cx'] = True
nn_opts['pyr_lvls'] = 6
nn_opts['flow_pred_lvl'] = 2

# The size of the images in this dataset is not a multiple of 64, while the model generates flows padded to
# multiples of 64. Hence, we need to crop the predicted flows to their original size
nn_opts['adapt_info'] = (1, 436, 1024, 2)

# Instantiate the model in inference mode and display the model configuration
nn = ModelPWCNet(mode='test', options=nn_opts)
nn.print_config()

# Generate the predictions and display them
pred_labels = nn.predict_from_img_pairs(img_pairs, batch_size=1, verbose=False)
display_img_pairs_w_flows(img_pairs, pred_labels)

The code above can be found in the pwcnet_predict_from_img_pairs.ipynb notebook and the pwcnet_predict_from_img_pairs.py script.

Running inference on the test split of a dataset

If you want to train a PWC-Net model from scratch, or finetune a pre-trained PWC-Net model using your own dataset, you will need to implement a dataset handler that derives from the OpticalFlowDataset base class in dataset_base.py.

We provide several dataset handlers for well-known datasets, such as MPI-Sintel (dataset_mpisintel.py), FlyingChairs (dataset_flyingchairs.py), FlyingThings3D (dataset_flyingthings3d.py), and KITTI (dataset_kitti.py). Any one of them is a good starting point to figure out how to implement your own.

Please note that this is not complicated work; the derived class does little beyond telling the base class which lists of files are to be used for training, validation, and testing, leaving the heavy lifting to the base class.

Once you have a data handler, you can pass it to a ModelPWCNet object and call its predict() method to generate flow predictions for its test split, as shown in the pwcnet_predict.ipynb notebook and the pwcnet_predict.py script.
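In outline, that looks something like the following (a hedged sketch: the dataset handler class name, constructor arguments, and options are placeholders here; see pwcnet_predict.py for the exact setup):

from copy import deepcopy
from dataset_mpisintel import MPISintelDataset  # assumed class name -- check dataset_mpisintel.py
from model_pwcnet import ModelPWCNet, _DEFAULT_PWCNET_TEST_OPTIONS

# Instantiate a dataset handler for its test split (args/options are placeholders)
ds = MPISintelDataset(mode='test', ds_root='./datasets/MPI-Sintel', options={'verbose': True})

# Configure the model as in the previous sample, then hand it the dataset
nn_opts = deepcopy(_DEFAULT_PWCNET_TEST_OPTIONS)
nn_opts['ckpt_path'] = './models/pwcnet-lg-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-595000'
nn = ModelPWCNet(mode='test', options=nn_opts, dataset=ds)

# Generate flow predictions for the dataset's test split
nn.predict()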

Datasets

Datasets most commonly used for optical flow estimation include:

  • FlyingChairs [web]
  • FlyingThings3D [web]
  • MPI-Sintel [web]
  • KITTI 2012/2015 [web]

Additional optical flow datasets (not used here):

  • Middlebury Optical Flow [web]
  • Heidelberg HD1K Flow [web]

Per [2018a], KITTI and Sintel are currently the most challenging and widely-used benchmarks for optical flow. The KITTI benchmark is targeted at autonomous driving applications and its semi-dense ground truth is collected using LIDAR. The 2012 set only consists of static scenes. The 2015 set is extended to dynamic scenes via human annotations and is more challenging for existing methods because of its large motions, severe illumination changes, and occlusions.

The Sintel benchmark is created using the open source graphics movie "Sintel" with two passes, clean and final. The final pass contains strong atmospheric effects, motion blur, and camera noise, which cause severe problems for existing methods.

References

2018

  • [2018a] Sun et al. 2018. PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. [web]

2017

  • [2017a] Baghaie et al. 2017. Dense Descriptors for Optical Flow Estimation: A Comparative Study. [web]

2016

  • [2016a] Ilg et al. 2016. FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks. [web]
  • [2016b] Ranjan et al. 2016. Optical Flow Estimation using a Spatial Pyramid Network. [web]

2015

  • [2015a] Dosovitskiy et al. 2015. FlowNet: Learning Optical Flow with Convolutional Networks. [web]

Acknowledgments

Other TensorFlow implementations we are indebted to:

@InProceedings{Sun2018PWC-Net,
  author    = {Deqing Sun and Xiaodong Yang and Ming-Yu Liu and Jan Kautz},
  title     = {{PWC-Net}: {CNNs} for Optical Flow Using Pyramid, Warping, and Cost Volume},
  booktitle = CVPR,
  year      = {2018},
}
@InProceedings{DFIB15,
  author       = "A. Dosovitskiy and P. Fischer and E. Ilg and P. H{\"a}usser and C. Hazirbas and V. Golkov and P. v.d. Smagt and D. Cremers and T. Brox",
  title        = "FlowNet: Learning Optical Flow with Convolutional Networks",
  booktitle    = "IEEE International Conference on Computer Vision (ICCV)",
  month        = "Dec",
  year         = "2015",
  url          = "http://lmb.informatik.uni-freiburg.de//Publications/2015/DFIB15"
}

Contact Info

If you have any questions about this work, please feel free to contact us here:

https://www.linkedin.com/in/philferriere

tfoptflow's People

Contributors

philferriere


tfoptflow's Issues

The performance on KITTI

Hi, thanks for your great job! Did you evaluate on the KITTI dataset? Are the results as good as the original paper's? Thanks for your attention. I hope you can reply soon.

Next frame prediction

Thanks for sharing your work. I would like to predict future frames based on optical flow.

It's about all-sky images (hemispherical fisheye view).

The idea is to predict solar irradiance for photovoltaic and CSP plants, for electrical grid stability (balance).

image

Can you recommend a pretrained model for predicting frames ~15 minutes into the future?

Got some cash for Vertex AI

Kind Regards
Paul

Gradient updating in train_with_val mode.

Thanks for the excellent work!
I think in the train function, the loss and gradients should be updated by running self.y_hat_train_tnsr. However, in 'train_with_val' mode, it works like self.y_hat_val_tnsr = [self.loss_op, self.metric_op]. It seems self.optim_op is not run. I was wondering where the gradient update is conducted?

Inference time on Titan X

Hello, with a Titan X I have an inference time of 1s instead of 0.1s. I used pwcnet_predict_from_img_pairs.py on a single image pair. Does anyone have an idea? Thank you for your attention.

Level 6 features

I tried to extract features and got very interesting results.
Screenshot from 2019-06-24 15-03-25
As you can see, the level 6 features are constant. I used different pictures and checked all 196 feature maps and always got the same result. I also checked different weights for different implementations of PWC-Net. Can you please explain this result?

Loss didn't decrease in pwcnet_train_sm-6-2-multisteps-chairsthingsmix

Some output info:
2019-10-31 21:27:13 Iter 49000 [Train]: loss=184.37, epe=15.17, lr=0.000100, samples/sec=24.4, sec/step=0.655, eta=8 days, 17:27:22
2019-10-31 21:27:21 Iter 49000 [Val]: loss=141.03, epe=11.58
Saving model...
INFO:tensorflow:./pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-49000 is not in all_model_checkpoint_paths. Manually adding it.
... model saved in ./pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-49000
2019-10-31 21:39:36 Iter 50000 [Train]: loss=184.73, epe=15.20, lr=0.000100, samples/sec=27.7, sec/step=0.578, eta=7 days, 16:41:55
2019-10-31 21:39:44 Iter 50000 [Val]: loss=140.98, epe=11.57
Saving model...
INFO:tensorflow:./pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-50000 is not in all_model_checkpoint_paths. Manually adding it.
... model saved in ./pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-50000
2019-10-31 21:50:47 Iter 51000 [Train]: loss=184.02, epe=15.14, lr=0.000100, samples/sec=28.3, sec/step=0.566, eta=7 days, 12:37:08
2019-10-31 21:50:56 Iter 51000 [Val]: loss=140.30, epe=11.52
Saving model...
INFO:tensorflow:./pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-51000 is not in all_model_checkpoint_paths. Manually adding it.
... model saved in ./pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-51000
2019-10-31 22:03:22 Iter 52000 [Train]: loss=184.69, epe=15.20, lr=0.000100, samples/sec=26.0, sec/step=0.616, eta=8 days, 4:35:21
2019-10-31 22:03:33 Iter 52000 [Val]: loss=141.20, epe=11.60
Saving model...
INFO:tensorflow:./pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-52000 is not in all_model_checkpoint_paths. Manually adding it.
... model saved in ./pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-52000
2019-10-31 22:16:15 Iter 53000 [Train]: loss=184.40, epe=15.17, lr=0.000100, samples/sec=25.4, sec/step=0.629, eta=8 days, 8:18:36
2019-10-31 22:16:25 Iter 53000 [Val]: loss=140.56, epe=11.53
Saving model...
INFO:tensorflow:./pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-53000 is not in all_model_checkpoint_paths. Manually adding it.
... model saved in ./pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-53000

Run inference on Live feed / Webcam video

Hi,
I successfully ran the inference on a pair of frames (video), and I was wondering if it is possible, or if anyone has run the inference on a real-time live feed? If yes, how were the results, and how many fps did you get?

Thanks in advance !

Possibly wrong use of flow (Important!)

The use of the dense_image_warp function is wrong. In line 194 of core_warp.py, you use query_points_on_grid = batched_grid - flow, but what this line does is a forward warp. In the model, warping image 2 to image 1 implies a backward warp, so this should be query_points_on_grid = batched_grid + flow. Let me try to explain my point. Say the flow at point (i,j) is (fi,fj). This implies im2[i+fi, j+fj] = im1[i,j], i.e., pixel (i,j) in im1 moved to (i+fi, j+fj) in im2. Now, when you warp im2 by the flow, you aim to get the moved pixels back to their corresponding place in im1, so pixel (i',j') in im1 must come from location (i'+fi, j'+fj) in im2, which is the opposite of what this function is doing. Consider a black and white image (256x256) with a white background and a single black dot at the centre. Suppose the flow for that dot is (5,5). This implies im1[128,128] = 0 and im2[133,133] = 0. Now say you are only given the flow and im2 and you need to retrieve im1. Clearly, output[128,128] = im2[128+5, 128+5]. This is confusing and I hope this example makes it clear.

This is a core issue that should have impacted the results; why this method trains so well despite it is beyond me.

Scaling the ground truth flow?

Hello Mr Ferriere,

thank you a lot for sharing your TensorFlow implementation of PWC-Net. Currently I am using it as a starting point for my thesis. However, I'm wondering about the scaling factors you used for the ground truth/predicted flow, and I think there might be a mistake in your implementation.

In the paper it reads:

"We scale the ground truth flow by 20 and downsample it to obtain the supervision signals at different levels. Note that we do not further scale the supervision signal at each level, the same as [15]. As a result, we need to scale the upsampled flow at each pyramid level for the warping layer. For example, at the second level, we scale the upsampled flow from the third level by a factor of 5 (= 20/4) before warping features of the second image."

For me this means the following two things:
First, if you divide the ground truth flow by 20, then the predicted flow (at each level) will be around 20 times too small. Therefore, to get the real flow values, you have to multiply the predicted flow by 20. Particularly, if you do some kind of warping operation, the predicted flow has to be rescaled in advance.
Secondly, in order to get the supervision signal for each level, you have to downsample the ground truth flow to the same height and width as the predicted flow. If you don't further scale the ground truth flow after downsampling (which is what the paper proposes), its magnitude will be too large and so will be the predicted flow at that level. That's why, before warping the feature maps, you have to divide the predicted flow by a factor of 2^lvl.
In your implementation you (correctly) account for that with the following lines:

scaler = 20. / 2**lvl  # scaler values are 0.625, 1.25, 2.5, 5.0
warp = self.warp(c2[lvl], up_flow * scaler, lvl)

But what about the supervision signal?
If I'm correct you would have to divide the ground truth flow by a factor of 20. Otherwise the magnitude of the predicted (learned) flow will be around 20 times too large after multiplying it with the "scaler". In this case the warping won't do what it should. Now I'm wondering where you downscale the ground truth flow by 20?
Additionally, in your pwcnet_loss function you downsample and downscale the supervision signal.

scaled_flow_gt = tf.image.resize_bilinear(y, (lvl_height, lvl_width))
scaled_flow_gt /= tf.cast(gt_height / lvl_height, dtype=tf.float32)

So, in the second line you divide the magnitude of the ground truth flow by 2^lvl. As far as I can see, this is not correct if you also rescale the predicted flow by multiplying it with the "scaler" before the warping operation. To be more precise, because of your loss function, the network learns to predict a flow which at each level is 2^lvl smaller than the original flow. It therefore already has the correct magnitude for the height/width of that level. When multiplying it with the scaler, you divide it again by 2^lvl. So the magnitude of the flow is too small and the warping will be wrong again.

I hope that my explanation is somewhat understandable. Thanks a lot for taking some time to think about it, and maybe share your thoughts on my points.

Best, Joshua

Docker

Any plans on releasing a Docker image?

Running code on our own data

Hi,

Great work! We would love to use it on our own video data. How would it be possible to do that? What are the best ways of going about it?

Thank you very much.

tfoptflow vs. the NVlabs implementation

Hello, thank you for your work. I've decided to compare the two implementations, yours and the official one, and I got quite different results on the KITTI dataset. Could you comment on it? Thank you.
Screenshot from 2019-04-23 19-37-37
Screenshot from 2019-04-23 20-34-05

All pretrained models not available

Hi,

Some of the models seem to be missing from the link shared, for instance pwcnet-lg-6-2-cyclic-chairsthingsmix. Can you please update?

Thanks for great work btw

bias not found in checkpoint

model: models/pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-592000
gpu_devices = []
controller = '/device:CPU:0'
windows 8.1
python 3.6
tensorflow 1.13
running: pwcnet_predict_from_img_pairs.py

full error output:
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key pwcnet/ctxt/dc_conv31/bias not found in checkpoint
[[node save/RestoreV2 (defined at C:\PROJECTS\SASMOB - hídas projekt\optical_flow\tfoptflow-master\tfoptflow\model_base.py:119) ]]

Caused by op 'save/RestoreV2', defined at:
File "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\Microsoft\Python\Core\ptvsd_launcher.py", line 89, in
vspd.debug(filename, port_num, debug_id, debug_options, run_as)
File "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\Microsoft\Python\Core\ptvsd\debugger.py", line 2631, in debug
exec_file(file, globals_obj)
File "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\Microsoft\Python\Core\ptvsd\util.py", line 119, in exec_file
exec_code(code, file, global_variables)
File "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\Microsoft\Python\Core\ptvsd\util.py", line 95, in exec_code
exec(code_obj, global_variables)
File "C:\PROJECTS\SASMOB - hรญdas projekt\optical_flow\tfoptflow-master\tfoptflow\pwcnet_predict_from_img_pairs.py", line 58, in
nn = ModelPWCNet(mode='test', options=nn_opts)
File "C:\PROJECTS\SASMOB - hรญdas projekt\optical_flow\tfoptflow-master\tfoptflow\model_pwcnet.py", line 231, in init
super().init(name, mode, session, options)
File "C:\PROJECTS\SASMOB - hรญdas projekt\optical_flow\tfoptflow-master\tfoptflow\model_base.py", line 66, in init
self.build_graph()
File "C:\PROJECTS\SASMOB - hรญdas projekt\optical_flow\tfoptflow-master\tfoptflow\model_base.py", line 266, in build_graph
self.init_saver()
File "C:\PROJECTS\SASMOB - hรญdas projekt\optical_flow\tfoptflow-master\tfoptflow\model_base.py", line 119, in init_saver
self.saver = tf.train.Saver()
File "C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 832, in init
self.build()
File "C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 844, in build
self._build(self._filename, build_save=True, build_restore=True)
File "C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 881, in _build
build_save=build_save, build_restore=build_restore)
File "C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 513, in _build_internal
restore_sequentially, reshape)
File "C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 332, in _AddRestoreOps
restore_sequentially)
File "C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 580, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_io_ops.py", line 1655, in restore_v2
name=name)
File "C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*kwargs)
File "C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
op_def=op_def)
File "C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in init
self._traceback = tf_stack.extract_stack()

NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key pwcnet/ctxt/dc_conv31/bias not found in checkpoint
[[node save/RestoreV2 (defined at C:\PROJECTS\SASMOB - hídas projekt\optical_flow\tfoptflow-master\tfoptflow\model_base.py:119) ]]

output until error:
C:\Users\BAndras\Anaconda3\lib\site-packages\h5py\__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Building model...
WARNING:tensorflow:From C:\PROJECTS\SASMOB - hídas projekt\optical_flow\tfoptflow-master\tfoptflow\model_pwcnet.py:1094: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d instead.
WARNING:tensorflow:From C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From C:\PROJECTS\SASMOB - hídas projekt\optical_flow\tfoptflow-master\tfoptflow\model_pwcnet.py:1221: conv2d_transpose (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d_transpose instead.
... model built.
Loading model checkpoint c:/PROJECTS/SASMOB - hídas projekt/optical_flow/tfoptflow-master/tfoptflow/models/pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-592000 for eval or testing...

WARNING:tensorflow:From C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from c:/PROJECTS/SASMOB - hídas projekt/optical_flow/tfoptflow-master/tfoptflow/models/pwcnet-sm-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-592000
2019-03-28 12:13:28.455200: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key pwcnet/ctxt/dc_conv31/bias not found in checkpoint

Backward Pass of Warping layer

Hi, thanks for implementing and sharing this. I am wondering how the backward pass of the warping layer works. As far as I know, the Floor() operation is non-differentiable. Do we need to use gradient_override_map to substitute the gradient of the Identity op for Floor?

A question about FlyingThings3D

Hi, Phil !
This project is a great work. I've learned a lot from it. A few days ago, I tested it on FlyingChairs, and it works well. Yesterday I downloaded FlyingThings and modified my train.py according to pwcnet_train_sm-6-2-multisteps-chairsthingsmix.ipynb. When I ran it, a problem arose.

server1@server1-All-Series:/disk_2t/pwc_f$ python train_mix.py
Traceback (most recent call last):
  File "train_mix.py", line 36, in <module>
    ds2 = FlyingThings3DHalfResDataset(mode='train_with_val', ds_root=_FLYINGTHINGS3DHALFRES_ROOT, options=ds_opts)
  File "/disk_2t/pwc_f/dataset_flyingthings3d.py", line 155, in __init__
    super().__init__(mode, ds_root, options)
  File "/disk_2t/pwc_f/dataset_flyingthings3d.py", line 41, in __init__
    super().__init__(mode, ds_root, options)
  File "/disk_2t/pwc_f/dataset_base.py", line 128, in __init__
    self.prepare()
  File "/disk_2t/pwc_f/dataset_base.py", line 203, in prepare
    self._build_ID_sets()
  File "/disk_2t/pwc_f/dataset_flyingthings3d.py", line 188, in _build_ID_sets
    if self.generate_files is True:
AttributeError: 'FlyingThings3DHalfResDataset' object has no attribute 'generate_files'

I think it is caused by a wrong dataset path. My path looks like this:

server1@server1-All-Series:/disk_2t/dataset$ ls
FlyingChairs_release FlyingThings3D_HalfRes

server1@server1-All-Series:/disk_2t/dataset/FlyingThings3D_HalfRes$ ls
all_unused_files.txt frames_cleanpass optical_flow

and in train.py, I define the path like this.

_DATASET_ROOT = '/disk_2t/dataset/'
_FLYINGCHAIRS_ROOT = _DATASET_ROOT + 'FlyingChairs_release'
_FLYINGTHINGS3DHALFRES_ROOT = _DATASET_ROOT + 'FlyingThings3D_HalfRes'

Could you tell me what I did wrong?
Thank you!

Question about the shape of Sintel dataset

In your notebooks, you said that

The size of the images in this dataset are not multiples of 64, while the model generates flows padded to multiples of 64. Hence, we need to crop the predicted flows to their original size.

May I ask why you take this strategy instead of resizing the images to 448x1024? Have you tried resizing?

Can't setup on Windows

When I try to use the conda command to set up the environment, something goes wrong, as shown in the following image:

image

Then I downgraded the libtiff version; nothing changed.

The constant flow output

Hello
Thanks for your informative implementation.
However, after a few epochs, the final flow looks very much constant. Did you have the same issue?

Optical flow color code

Dear authors,
thank you for your great work, it is very useful for me!
I found an issue with flow visualization. The color code is correct in most cases, but when there is only vertical motion moving downwards, the color displayed is green and not yellow as depicted in the colorwheel. I have further tested this situation with the original Middlebury code and I can confirm that the matlab script (http://vision.middlebury.edu/flow/submit/) produces yellow output for objects moving downwards.

Could you help me find the reason for that?

In addition, when testing on Sintel I can see that the ground truth visualization slightly differs from yours (both with normalized=True and False; False gives worse results).

Regards,
Stefano

Attached is an example on Sintel:

Screenshot 2019-06-13 at 19 15 45

your script with flo_v =100
Screenshot 2019-06-13 at 19 16 17

matlab script with flo_v =100

Screenshot 2019-06-13 at 19 16 32

question about def deconv in model_pwcnet.py

In general, we use an odd kernel size in conv or deconv layers. In def deconv (line 1182) of model_pwcnet.py, I think the kernel_size should be 3, not 4.
return tf.layers.conv2d_transpose(x, 2, 4, 2, 'same', name=op_name)

ValueError: shape of x_tnsr:0

ValueError: Cannot feed value of shape (1, 2, 448, 1024, 3) for Tensor 'x_tnsr:0', which has shape '(0, 2, ?, ?, 3)'
model: models/pwcnet-lg-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-595000
gpu_devices = []
controller = '/device:CPU:0'
windows 8.1
python 3.6
tensorflow 1.13
running: pwcnet_predict_from_img_pairs.py

place of error:
pwcnet_predict_from_img_pairs.py, line 62:
pred_labels = nn.predict_from_img_pairs(img_pairs, batch_size=1, verbose=False)

model_pwcnet.py, line 1000:
y_hat = self.sess.run(self.y_hat_test_tnsr, feed_dict=feed_dict)

output until error:
C:\Users\BAndras\Anaconda3\lib\site-packages\h5py\__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Building model...
WARNING:tensorflow:From C:\PROJECTS\SASMOB - hídas projekt\optical_flow\tfoptflow-master\tfoptflow\model_pwcnet.py:1094: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d instead.
WARNING:tensorflow:From C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From C:\PROJECTS\SASMOB - hídas projekt\optical_flow\tfoptflow-master\tfoptflow\model_pwcnet.py:1221: conv2d_transpose (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d_transpose instead.
... model built.
Loading model checkpoint c:/PROJECTS/SASMOB - hídas projekt/optical_flow/tfoptflow-master/tfoptflow/models/pwcnet-lg-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-595000 for eval or testing...

WARNING:tensorflow:From C:\Users\BAndras\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from c:/PROJECTS/SASMOB - hídas projekt/optical_flow/tfoptflow-master/tfoptflow/models/pwcnet-lg-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-595000
... model loaded

Model Configuration:
verbose                True
ckpt_path              c:/PROJECTS/SASMOB - hídas projekt/optical_flow/tfoptflow-master/tfoptflow/models/pwcnet-lg-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-595000
x_dtype                <dtype: 'float32'>
x_shape                [2, None, None, 3]
y_dtype                <dtype: 'float32'>
y_shape                [None, None, 2]
gpu_devices            []
controller             /device:CPU:0
batch_size             1
use_tf_data            True
use_mixed_precision    False
pyr_lvls               6
flow_pred_lvl          2
search_range           4
use_dense_cx           True
use_res_cx             True
adapt_info             (1, 436, 1024, 2)
mode                   test
trainable params       14079050

OutOfRangeError when running demo of "inference on image pairs"

Hi, when I try to run the code in pwcnet_predict_from_img_pairs.ipynb without any changes, using the original data samples on Ubuntu 18.04, I get an error when executing nn = ModelPWCNet(mode='test', options=nn_opts). Could someone help me? Thank you!

OutOfRangeError

This is the error information:

Building model...

WARNING:tensorflow:From /is/sg2/jjiang/tfoptflow/tfoptflow/model_pwcnet.py:1094: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d instead.
WARNING:tensorflow:From /is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /is/sg2/jjiang/tfoptflow/tfoptflow/model_pwcnet.py:1221: conv2d_transpose (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d_transpose instead.
... model built.
Loading model checkpoint ./models/pwcnet-lg-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-595000 for eval or testing...

WARNING:tensorflow:From /is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.

Instructions for updating:

Use standard file APIs to check for files with this prefix.

INFO:tensorflow:Restoring parameters from ./models/pwcnet-lg-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-595000


OutOfRangeError Traceback (most recent call last)
~/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1333 try:
-> 1334 return fn(*args)
1335 except errors.OpError as e:

~/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
1318 return self._call_tf_sessionrun(
-> 1319 options, feed_dict, fetch_list, target_list, run_metadata)
1320

~/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
1406 self._session, options, feed_dict, fetch_list, target_list,
-> 1407 run_metadata)
1408

OutOfRangeError: Read less bytes than requested
[[{{node save/RestoreV2}}]]

During handling of the above exception, another exception occurred:

OutOfRangeError Traceback (most recent call last)
in
1 # Instantiate the model in inference mode and display the model configuration
2 # nn = ModelPWCNet(mode='test', options=nn_opts)
----> 3 nn = ModelPWCNet(mode='test', options=nn_opts)

~/tfoptflow/tfoptflow/model_pwcnet.py in __init__(self, name, mode, session, options, dataset)
229 Main results".
230 """
--> 231 super().__init__(name, mode, session, options)
232 self.ds = dataset
233 # self.adapt_infos = []

~/tfoptflow/tfoptflow/model_base.py in __init__(self, name, mode, session, options)
64
65 # Build the TF graph
---> 66 self.build_graph()
67
68 ###

~/tfoptflow/tfoptflow/model_base.py in build_graph(self)
265 # Init saver (override if you wish) and load checkpoint if it exists
266 self.init_saver()
--> 267 self.load_ckpt()
268
269 ###

~/tfoptflow/tfoptflow/model_base.py in load_ckpt(self)
185 if self.opts['verbose']:
186 print(f"Loading model checkpoint {self.last_ckpt} for eval or testing...\n")
--> 187 self.saver.restore(self.sess, self.last_ckpt)
188 if self.opts['verbose']:
189 print("... model loaded")

~/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py in restore(self, sess, save_path)
1274 else:
1275 sess.run(self.saver_def.restore_op_name,
-> 1276 {self.saver_def.filename_tensor_name: save_path})
1277 except errors.NotFoundError as err:
1278 # There are three common conditions that might cause this error:

~/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
927 try:
928 result = self._run(None, fetches, feed_dict, options_ptr,
--> 929 run_metadata_ptr)
930 if run_metadata:
931 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

~/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1150 if final_fetches or final_targets or (handle and feed_dict_tensor):
1151 results = self._do_run(handle, final_targets, final_fetches,
-> 1152 feed_dict_tensor, options, run_metadata)
1153 else:
1154 results = []

~/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1326 if handle is None:
1327 return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1328 run_metadata)
1329 else:
1330 return self._do_call(_prun_fn, handle, feeds, fetches)

~/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1346 pass
1347 message = error_interpolation.interpolate(message, self._graph)
-> 1348 raise type(e)(node_def, op, message)
1349
1350 def _extend_graph(self):

OutOfRangeError: Read less bytes than requested
[[node save/RestoreV2 (defined at /is/sg2/jjiang/tfoptflow/tfoptflow/model_base.py:119) ]]

Caused by op 'save/RestoreV2', defined at:
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/ipykernel/kernelapp.py", line 505, in start
self.io_loop.start()
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tornado/platform/asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/asyncio/base_events.py", line 539, in run_forever
self._run_once()
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/asyncio/base_events.py", line 1775, in _run_once
handle._run()
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 690, in
lambda f: self._run_callback(functools.partial(callback, future))
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tornado/ioloop.py", line 743, in _run_callback
ret = callback()
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 781, in inner
self.run()
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 742, in run
yielded = self.gen.send(value)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 357, in process_one
yield gen.maybe_future(dispatch(*args))
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 267, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 534, in execute_request
user_expressions, allow_stdin,
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/ipykernel/ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/ipykernel/zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2848, in run_cell
raw_cell, store_history, silent, shell_futures)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2874, in _run_cell
return runner(coro)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/IPython/core/async_helpers.py", line 67, in _pseudo_sync_runner
coro.send(None)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3049, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3214, in run_ast_nodes
if (yield from self.run_code(code, result)):
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3296, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 3, in
nn = ModelPWCNet(mode='test', options=nn_opts)
File "/is/sg2/jjiang/tfoptflow/tfoptflow/model_pwcnet.py", line 231, in init
super().init(name, mode, session, options)
File "/is/sg2/jjiang/tfoptflow/tfoptflow/model_base.py", line 66, in init
self.build_graph()
File "/is/sg2/jjiang/tfoptflow/tfoptflow/model_base.py", line 266, in build_graph
self.init_saver()
File "/is/sg2/jjiang/tfoptflow/tfoptflow/model_base.py", line 119, in init_saver
self.saver = tf.train.Saver()
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 832, in init
self.build()
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 844, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 881, in _build
build_save=build_save, build_restore=build_restore)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 513, in _build_internal
restore_sequentially, reshape)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 332, in _AddRestoreOps
restore_sequentially)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 580, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1572, in restore_v2
name=name)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/is/sg2/jjiang/Software/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1801, in init
self._traceback = tf_stack.extract_stack()

OutOfRangeError (see above for traceback): Read less bytes than requested
[[node save/RestoreV2 (defined at /is/sg2/jjiang/tfoptflow/tfoptflow/model_base.py:119) ]]

About the inference time

An impressive work. I have some questions about inference time. It seems to be a lot slower than the official Caffe version. Do you consider the time of data loading? Or is TensorFlow just that much slower than Caffe?

Unsupervised training

Hello Phil,

thank you for this very nice implementation of PWC-Net.

I have a question about unsupervised training using image pairs only. If I wanted to implement this, is there anything else besides the loss function I would need to modify in the source code to make it work, or do you think a proper loss formulation should be sufficient?

Thank you.

Kind regards,

Martin

scale the ground truth flow by 20

Thank you for the awesome work.

I have a question:
The authors scaled the ground truth flow by 20 in their paper.
However, I didn't find the scale operation in this project. Am I missing anything?

Thank you.

some problems on data augmentation

Hi:
Thanks for your great job. While reading the code, I found that the data augmentations used in the code are as follows:
1.Horizontally flip 50% of images
2.Vertically flip 50% of images
3.Translate 50% of images by a value between -5 and +5 percent of original size on x- and y-axis independently
4.Scale 50% of images by a factor between 95 and 105 percent of original size
But in the original FlowNet, more kinds of augmentation were used, and they were a little different:
1.Translate all of images by a value between -20 and +20 percent of original size on x- and y-axis independently
2.Scale all of images by a factor between 90 and 200 percent of original size
3.No horizontal flip and vertically flip is used
4.Add the Gaussian noise that has a sigma uniformly sampled from [0, 0.04]
5.Add the contrast sampled within [โˆ’0.8, 0.4]
6.Multiplicative color changes to the RGB channels per image from [0.5, 2]
7.gamma values from [0.7, 1.5] and additive brightness changes using Gaussian with a sigma of 0.2.
Will the network trained using the above methods be different?
Looking forward to your reply. Thanks in advance!

Conda env conflict on Linux

I tried to install the conda environment, but it resulted in conflicts.

operating system:
Ubuntu 18.04.3 LTS

conda version:
conda 4.7.12

conda env create -f ./dlubu36.yml
Collecting package metadata (repodata.json): done
Solving environment: -
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Package numpy conflicts for:
numpy==1.15.1=py36h1d66e8a_0
scipy==1.1.0=py36hfa4b5c9_1 -> numpy[version='>=1.15.1,<2.0a0']
h5py==2.8.0=py36h989c5e5_3 -> numpy[version='>=1.11.3,<2.0a0']
pywavelets==1.0.0=py36h7eb728f_0 -> numpy[version='>=1.9.3,<2.0a0']
patsy==0.5.0=py36_0 -> numpy[version='>=1.4.0']
(similar numpy constraints from matplotlib, imageio, seaborn, pandas, statsmodels, scikit-image, scikit-learn, mkl_fft, and mkl_random elided)
Package mkl-service conflicts for:
mkl-service==1.1.2=py36h651fb7a_4
(indirect "numpy -> mkl-service[version='>=2,<3.0a0']" constraints from the same packages elided)
Package sip conflicts for:
sip==4.19.12=py36he6710b0_0
pyqt==5.9.2=py36h22d08a2_1 -> sip[version='>=4.19.4,<=4.19.8']
(indirect sip constraints from jupyter, qtconsole, seaborn, matplotlib, and scikit-image elided)
Package numpy-base conflicts for:
numpy==1.15.1=py36h1d66e8a_0 -> numpy-base==1.15.1=py36h81de0dd_0
(very long lists of candidate numpy-base versions/builds for the remaining packages elided)
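
One workaround that often helps with over-pinned environment files (my assumption, not a confirmed fix for this repo) is to strip the exact build strings from the yml so conda only has to satisfy the version constraints:

# Turn e.g. "h5py=2.8.0=py36h989c5e5_3" into "h5py=2.8.0", then retry.
sed -E 's/(=[^=]+)=[^=]+$/\1/' ./dlubu36.yml > dlubu36_nobuild.yml
conda env create -f dlubu36_nobuild.yml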
