
chainer-fast-neuralstyle's People

Contributors

6o6o, hiyorimi, shinichy, soralab, vermapratyush, yusuketomoto


chainer-fast-neuralstyle's Issues

Attempting to re-create starrynight.model

Hi guys, big thanks to the authors for making this available - @yusuketomoto

I'm having a hard time getting any sort of useful result.

I decided to try to rebuild a model file similar to the one that's bundled with the code.

First of all, when running the trainer, with

python train.py -s sample_images/style.jpg -d models

it outputs

num traning images: 0

but it trains anyway and outputs the model after the default 2 epochs. I looked in the code and saw that it searches the --dataset / -d directory for .jpg or .png files. This is confusing; I thought that directory was used for output. So I placed style.jpg into both /models and /sample_images.
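
For reference, the dataset scan in train.py does roughly the following (paraphrased, not the verbatim code), which is why pointing -d at a directory without images yields a count of 0:

import os

fs = os.listdir(args.dataset)
imagepaths = [os.path.join(args.dataset, fn) for fn in fs
              if fn.lower().endswith(('.jpg', '.png'))]
print('num traning images:', len(imagepaths))  # 0 when the directory holds no images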

So, trying to get it to run, I made a script I can tweak and run from the command line. I copied all the default param values from train.py as parameters for tweaking. No matter what I change the values to, the result is always complete garbage.

python train.py \
--style_image sample_images/style.jpg \
--dataset models \
--batchsize 1 \
--lambda_tv 10e-4 \
--lambda_feat 1e0 \
--lambda_style 1e1 \
--epoch 2 \
--lr 1e-3   

python generate.py \
sample_images/tubingen.jpg \
-m models/out.model \
-o sample_images/output.jpg

Output: [generated image attached]

Has anyone been able to get this to work? What params should I use to re-create the starrynight model?

I would appreciate any feedback.

Questions when implementing parallel computation on multiple GPUs

Thanks for the great code. I'm working on implementing parallel computation across multiple GPUs in order to accelerate the training process. I have read the document at http://docs.chainer.org/en/stable/tutorial/gpu.html and learned that there are model-parallel and data-parallel approaches, but I don't know which part of the code to modify, since the code is quite different from the tutorial's examples. Could anyone give me some guidance? Thanks ahead of time!
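
For what it's worth, the data-parallel pattern from the linked Chainer tutorial maps onto this kind of training loop roughly as follows (a sketch with illustrative names like compute_loss and iterate_minibatches; not a tested patch for train.py):

model_gpu0 = model                    # master FastStyleNet on GPU 0
model_gpu1 = model.copy()             # replica for GPU 1
model_gpu0.to_gpu(0)
model_gpu1.to_gpu(1)
optimizer.setup(model_gpu0)

for batch in iterate_minibatches():   # hypothetical helper
    x0, x1 = batch[:len(batch) // 2], batch[len(batch) // 2:]  # half the minibatch per GPU
    loss0 = compute_loss(model_gpu0, x0)   # hypothetical: the existing per-batch loss code
    loss1 = compute_loss(model_gpu1, x1)
    model_gpu0.zerograds()
    model_gpu1.zerograds()
    loss0.backward()
    loss1.backward()
    model_gpu0.addgrads(model_gpu1)   # accumulate the replica's gradients on GPU 0
    optimizer.update()
    model_gpu1.copyparams(model_gpu0) # re-sync the replica's weights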

Slow GPU initialization

Hello,
When I run generate.py with the GPU it says it completed in 1.05363 sec, but it actually takes around 4 sec, so there seems to be a GPU initialization overhead of about 3 sec. If I don't kill the Python process and make another GPU call in the same script, it returns in around 1 sec, which is fast, but with repeated sequential calls memory eventually overflows. Do you have any idea how to decrease the GPU initialization time?
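
A common workaround (an assumption, not something the repo provides) is to pay the one-time CUDA context and kernel-compilation cost with a dummy forward pass before timing or serving real requests:

import numpy as np
from chainer import Variable, cuda

dummy = np.zeros((1, 3, 64, 64), dtype=np.float32)
model(Variable(cuda.to_gpu(dummy)))   # warm-up: builds the CUDA context, compiles kernels
# subsequent model(x) calls should return in the reported ~1 sec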

Is the train dataset OK?

I downloaded http://msvocds.blob.core.windows.net/coco2014/train2014.zip, unzipped train2014.zip, and then ran:

suker@suker:~/chainer-fast-neuralstyle$ python train.py -s rio.jpg -d ./train2014 -g 0
/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py:1298: UserWarning: /home/gtx1080/.python-eggs is writable by group/others and vulnerable to attack when used with get_resource_filename. Consider a more secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE environment variable).
warnings.warn(msg, UserWarning)
num traning images: 82783
82783 iterations, 2 epochs
/usr/local/lib/python2.7/dist-packages/chainer-1.14.0-py2.7-linux-x86_64.egg/chainer/cuda.py:87: UserWarning: cuDNN is not enabled.
Please reinstall chainer after you install cudnn
(see https://github.com/pfnet/chainer#installation).
'cuDNN is not enabled.\n'
epoch 0
(epoch 0) batch 0/82783... training loss is...235437792.0
(epoch 0) batch 1/82783... training loss is...329452736.0
(epoch 0) batch 2/82783... training loss is...154218272.0
(epoch 0) batch 3/82783... training loss is...63537960.0
(epoch 0) batch 4/82783... training loss is...58619268.0
(epoch 0) batch 5/82783... training loss is...55183628.0
(epoch 0) batch 6/82783... training loss is...54082024.0

Is this operating OK?
thanks

readme request

What do the parameters mean?

lambda_tv,lambda_feat,lambda_style, lr...etc

How do they influence the look of the generated images?
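
For context, these weights balance the three loss terms from the perceptual-loss paper (a summary, not official repo documentation):

L_total = lambda_feat * L_content + lambda_style * L_style + lambda_tv * L_tv

lambda_feat scales the content (feature-reconstruction) loss, so higher values preserve more of the input image; lambda_style scales the style (Gram-matrix) loss, so higher values push the output toward the style image; lambda_tv scales the total-variation regularizer, so higher values give smoother, less noisy output; lr is the optimizer's learning rate.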

I do not understand the Train part and need more explanation

I got the VGG model and processed it.

But I don't get the Train part. Is this where I point it at the STYLE image and the VGG model, and it outputs the newly trained model?

Train

Need to train one image transformation network model per one style target. According to the paper, the models are trained on the Microsoft COCO dataset.

python train.py -s <style_image_path> -d <training_dataset_path> -g 0

python3.5 train.py does not work

Thanks for the FAST neural style engine.
I used Python 3.5:
$ python3.5 train.py -s sample_images/style_0.jpg -d models/ -g 0
=>
FileNotFoundError: [Errno 2] No such file or directory: 'vgg16.model'

I got VGG_ILSVRC_16_layers.caffemodel
but train.py needs vgg16.model.
How can I solve this problem?
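
The missing file is produced by the repo's create_chainer_model.py (the same script that appears in the MemoryError issue further down), which converts the caffemodel into a Chainer-serialized vgg16.model:

python create_chainer_model.py -g -1

Roughly, it does something like this (paraphrased; the real script also copies the Caffe weights into the repo's VGG chain before saving):

from chainer.links.caffe import CaffeFunction
from chainer import serializers

ref = CaffeFunction('VGG_ILSVRC_16_layers.caffemodel')  # slow: parses the whole caffemodel
# ...copy ref's conv weights into a net.VGG instance named vgg...
serializers.save_npz('vgg16.model', vgg)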

Freezing/non-responsive during training

Hey there.

I don't get the epoch indication anymore; it just freezes. Could this be due to the amount of memory I'm attempting to use? I'm doing --image_size 512 and 3 epochs on a very large image with 4 GB VRAM on an Amazon instance. I have a GTX 1080, but I have problems booting Ubuntu, though I may have to try it again if this is the result of not enough VRAM.

[screenshot attached]

When the freeze happens, CTRL+C does not work to exit the process.

EDIT: Also, I'm using test2015/ and not train2014/, which I guess might cause this, so I'll go ahead and download train2014 in case that's what it is. I just thought newer = better, but now I'm realizing they're actually for different uses.

train.py '--resume', '-r'? Can't find a training example anywhere on Google

The reason I ask is that I'm going to crunch the COCO dataset and it's going to take 44 hours; if there is a resume feature, it would be nice to resume from work that's already been trained. Thx

P.S. Any example of usage would be great.
My current cmd setup:
python train.py -s style.jpg -d train2014/ -g 0
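
Going by the quoted argparse flags, resuming presumably just reloads previously saved weights before training continues. A hypothetical usage and the typical Chainer wiring (assumptions, since the repo's exact semantics aren't documented here):

python train.py -s style.jpg -d train2014/ -g 0 -r models/style.model

# inside train.py, such a flag is usually handled like:
if args.resume:
    serializers.load_npz(args.resume, model)   # continue from the saved model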

Illegal memory access CUDA

If I remove --image_size 512 then I no longer get this error. Running on a Titan X (12 GB of video memory).

python train.py --checkpoint 2000 --style_image styles/illusion.jpg --batchsize 4 --output illusion --gpu 0 --dataset datasets/mscoco/train2014 --image_size 512
num traning images: 82783
20695 iterations, 2 epochs
/home/jamis/.local/lib/python2.7/site-packages/chainer/cuda.py:87: UserWarning: cuDNN is not enabled.
Please reinstall chainer after you install cudnn
(see https://github.com/pfnet/chainer#installation).
  'cuDNN is not enabled.\n'
Traceback (most recent call last):
  File "train.py", line 110, in <module>
    feature_s = vgg(Variable(style_b, volatile=True))
  File "/home/jamis/src/chainer-fast-neuralstyle/net.py", line 97, in __call__
    h = F.max_pooling_2d(y1, 2, stride=2)
  File "/home/jamis/.local/lib/python2.7/site-packages/chainer/functions/pooling/max_pooling_2d.py", line 173, in max_pooling_2d
    return MaxPooling2D(ksize, stride, pad, cover_all, use_cudnn)(x)
  File "/home/jamis/.local/lib/python2.7/site-packages/chainer/function.py", line 130, in __call__
    outputs = self.forward(in_data)
  File "/home/jamis/.local/lib/python2.7/site-packages/chainer/function.py", line 234, in forward
    return self.forward_gpu(inputs)
  File "/home/jamis/.local/lib/python2.7/site-packages/chainer/functions/pooling/max_pooling_2d.py", line 77, in forward_gpu
    y, self.indexes)
  File "cupy/core/elementwise.pxi", line 545, in cupy.core.core.ElementwiseKernel.__call__ (cupy/core/core.cpp:35252)
  File "cupy/util.pyx", line 36, in cupy.util.memoize.decorator.ret (cupy/util.cpp:1264)
  File "cupy/core/elementwise.pxi", line 405, in cupy.core.core._get_elementwise_kernel (cupy/core/core.cpp:33728)
  File "cupy/core/elementwise.pxi", line 12, in cupy.core.core._get_simple_elementwise_kernel (cupy/core/core.cpp:27106)
  File "cupy/core/elementwise.pxi", line 32, in cupy.core.core._get_simple_elementwise_kernel (cupy/core/core.cpp:26928)
  File "cupy/core/carray.pxi", line 87, in cupy.core.core.compile_with_cache (cupy/core/core.cpp:26615)
  File "/home/jamis/.local/lib/python2.7/site-packages/cupy/cuda/compiler.py", line 138, in compile_with_cache
    mod.load(cubin)
  File "cupy/cuda/function.pyx", line 156, in cupy.cuda.function.Module.load (cupy/cuda/function.cpp:3892)
  File "cupy/cuda/function.pyx", line 157, in cupy.cuda.function.Module.load (cupy/cuda/function.cpp:3840)
  File "cupy/cuda/driver.pyx", line 77, in cupy.cuda.driver.moduleLoadData (cupy/cuda/driver.cpp:1466)
  File "cupy/cuda/driver.pyx", line 59, in cupy.cuda.driver.check_status (cupy/cuda/driver.cpp:1202)
cupy.cuda.driver.CUDADriverError: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered

It says that I don't have cuDNN installed but I do:

ldconfig -p | grep cudnn
        libcudnn.so.5 (libc6,x86-64) => /usr/local/cuda/lib64/libcudnn.so.5
        libcudnn.so.4 (libc6,x86-64) => /usr/local/cuda/lib64/libcudnn.so.4

other VGG models, custom configurations

Is it possible to choose other VGG models? For example, the 19-layer model mentioned in the original "neural artistic style" paper?

Is it possible to configure additional options while training, like which conv layers to use for the content and style images, and so on?

training models?

[screenshot attached]

Hi, I've tried to train a new model, but it writes out multiple model files and also files with a .state extension?
"eagle sketch" was the name of the style image, and I trained it with a folder of various images.

Any ideas? Thanks heaps.

When I process an image with any of these various .model files it looks like this:
[output image attached]

[Updates][Feature requests] 6o6o and ttoinou: cropping, resampling, interruptions, folder, padding, video

6o6o forked the project with new features for proper image cropping during training and better resampling: https://github.com/6o6o/chainer-fast-neuralstyle.
I forked that fork and added saving the model on interruption in train.py, plus generating images for a whole folder instead of just one image in generate.py: https://github.com/ttoinou/chainer-fast-neuralstyle.

I think I'll add a padding (expand-generate-crop) feature, and another feature that divides the image into multiple blocks for low-memory GPUs (and then merges the results to reconstruct the big original image); a sketch of the padding idea follows.
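
A minimal sketch of that expand-generate-crop idea (illustrative names, not the planned implementation): pad the input with reflected borders, stylize, then crop the borders off so edge artifacts fall outside the visible image.

import numpy as np
from PIL import Image

def stylize_with_padding(stylize, image, pad=32):
    arr = np.asarray(image)
    padded = np.pad(arr, ((pad, pad), (pad, pad), (0, 0)), mode='reflect')
    out = np.asarray(stylize(Image.fromarray(padded)))  # stylize: a wrapper around generate.py's model
    return Image.fromarray(out[pad:-pad, pad:-pad])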

If that goes well, I'll think about adapting this to https://github.com/manuelruder/artistic-videos for video, with subtle changes according to the video flow.

Do you have any feature requests?

NoIssue - Getting started

Hi,

What/where do I get the dataset used for training?
EDIT: Figured it out, downloading the COCO dataset :-)

Image result size

Hi, I'm fairly new to these neural style algorithms, but is it possible to render the result at a higher resolution than the input image?

How to adjust different layers in training

Hi, after looking through the code, I can't figure out how to set different layers. For example, if I want the following layer setup, how do I change the code?
('-content_layers', 'relu2_2, relu3_2', 'layers for content')
('-style_layers', 'relu1_1,relu2_1,relu3_1,relu4_1', 'layers for style')
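
For orientation (a sketch of the idea, not a drop-in patch): the VGG wrapper in net.py returns the post-ReLU activations [y1, y2, y3, y4], corresponding to relu1_2, relu2_2, relu3_3 and relu4_3, and train.py indexes into that list, so changing layers means changing which activations the losses consume. For example, to move the content loss from relu3_3 to relu2_2, something like:

feature = vgg(Variable(x, volatile=True))
feature_hat = vgg(y)
# index 1 = relu2_2 instead of index 2 = relu3_3 (variable names illustrative)
L_feat = lambda_f * F.mean_squared_error(feature_hat[1], feature[1])

Layers like relu2_1 or relu4_1 are not returned at all, so supporting them would mean extending VGG.__call__ in net.py to also return those intermediate activations.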

New error suddenly...

When I try to run the implementation, I get an error. Here's the traceback:

Traceback (most recent call last):
File "generate.py", line 19, in <module>
serializers.load_npz(args.model, model)
File "/usr/local/lib/python2.7/dist-packages/chainer/serializers/npz.py", line 119, in load_npz
with numpy.load(filename) as f:
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 416, in load
"Failed to interpret file %s as a pickle" % repr(file))
IOError: Failed to interpret file 'models/cubist.model' as a pickle

I've changed nothing, not sure why it would think the models are no good.

CUDA memory issues

At default parameters, CUDA runs out of memory for me (see the stack trace below).

If I change the batch size to 1 it works, but I am afraid such a small batch size may not produce good results.

At the moment I don't have cuDNN installed. Does this use cuDNN?

Genes-MacBook-Pro:chainer-fast-neuralstyle gene$ python train.py -s /Users/gene/Learn/style-transfer/interp/PicassoPeriods/data/cubist.jpg -d /Users/gene/Downloads/train2014 -g 0
num traning images: 82783
20695 iterations, 2 epochs
epoch 0
iter 0
Traceback (most recent call last):
File "train.py", line 106, in
feature_hat = vgg(y)
File "/Users/gene/Learn/chainer-fast-neuralstyle/net.py", line 100, in call
y3 = F.relu(self.conv3_3(F.relu(self.conv3_2(F.relu(self.conv3_1(h))))))
File "/usr/local/lib/python2.7/site-packages/chainer/links/connection/convolution_2d.py", line 77, in call
x, self.W, self.b, self.stride, self.pad, self.use_cudnn)
File "/usr/local/lib/python2.7/site-packages/chainer/functions/connection/convolution_2d.py", line 298, in convolution_2d
return func(x, W, b)
File "/usr/local/lib/python2.7/site-packages/chainer/function.py", line 123, in call
outputs = self.forward(in_data)
File "/usr/local/lib/python2.7/site-packages/chainer/function.py", line 227, in forward
return self.forward_gpu(inputs)
File "/usr/local/lib/python2.7/site-packages/chainer/functions/connection/convolution_2d.py", line 77, in forward_gpu
y = cuda.cupy.empty((n, out_c, out_h, out_w), dtype=x.dtype)
File "/usr/local/lib/python2.7/site-packages/cupy/creation/basic.py", line 20, in empty
return cupy.ndarray(shape, dtype=dtype)
File "cupy/core/core.pyx", line 87, in cupy.core.core.ndarray.init (cupy/core/core.cpp:4930)
File "cupy/cuda/memory.pyx", line 275, in cupy.cuda.memory.alloc (cupy/cuda/memory.cpp:5497)
File "cupy/cuda/memory.pyx", line 414, in cupy.cuda.memory.MemoryPool.malloc (cupy/cuda/memory.cpp:8058)
File "cupy/cuda/memory.pyx", line 430, in cupy.cuda.memory.MemoryPool.malloc (cupy/cuda/memory.cpp:7984)
File "cupy/cuda/memory.pyx", line 337, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:6952)
File "cupy/cuda/memory.pyx", line 357, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:6779)
File "cupy/cuda/memory.pyx", line 255, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:5439)
File "cupy/cuda/memory.pyx", line 256, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:5360)
File "cupy/cuda/memory.pyx", line 31, in cupy.cuda.memory.Memory.init (cupy/cuda/memory.cpp:1534)
File "cupy/cuda/runtime.pyx", line 180, in cupy.cuda.runtime.malloc (cupy/cuda/runtime.cpp:2950)
File "cupy/cuda/runtime.pyx", line 110, in cupy.cuda.runtime.check_status (cupy/cuda/runtime.cpp:1865)
cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory

Error when training

I get an error when trying to train:

python train.py -s ~/chainer-fast-neuralstyle/styles/oilp.jpg -d ~/chainer-fast-neuralstyle/dataset/train2014/ --gpu 0

num traning images: 82783
82783 iterations, 2 epochs
epoch 0
Traceback (most recent call last):
File "train.py", line 122, in
x[j] = load_image(imagepaths[i*batchsize + j], image_size)
File "train.py", line 21, in load_image
return xp.asarray(image, dtype=np.float32).transpose(2, 0, 1)
File "/usr/local/lib/python2.7/dist-packages/cupy/creation/from_data.py", line 47, in asarray
return cupy.array(a, dtype=dtype, copy=False)
File "/usr/local/lib/python2.7/dist-packages/cupy/creation/from_data.py", line 27, in array
return core.array(obj, dtype, copy, ndmin)
File "cupy/core/core.pyx", line 1400, in cupy.core.core.array (cupy/core/core.cpp:49505)
File "cupy/core/core.pyx", line 1414, in cupy.core.core.array (cupy/core/core.cpp:49121)
File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 528, in getattr
raise AttributeError(name)
AttributeError: float

What are the hyperparams of the demo style?

Hi @yusuketomoto,
Your recent update makes great progress.

What are the hyperparams of your demo style? Are they the default values?
I trained style_0.jpg with the default values but couldn't reproduce your demo style.

Any help will be appreciated.

Some problems with your re-implementation

GOOD WORK. But I have some questions to ask you. First, I used the settings you mentioned in the comments to train a new model, but I still found that the results are noisy. Second, I found that you just subtracted 120 per image; why didn't you subtract the mean values that VGG provides?

Lambda settings?

Hey guys, does anyone have examples of what lambda_feat, lambda_style, and lambda_tv do? It would be great to actually see what changing these settings does to the model. Maybe one of them helps with the noise?

Multiple GPU support?

Thanks for the great code; it works quite well and takes less than a second to generate a stylized image.

My question is: does the code support parallel computation on multiple GPUs? It could be even faster with multiple GPUs working in parallel. If so, what's the maximum number of GPUs it can support?

Thanks!

python train.py -s sample_images/style_1.png -d train2014

num traning images: 82783
82783 iterations, 2 epochs
epoch 0
Traceback (most recent call last):
File "train.py", line 124, in
x[j] = load_image(imagepaths[i*batchsize + j], image_size)
File "train.py", line 23, in load_image
return xp.asarray(image, dtype=np.float64).transpose(2, 0, 1)
File "/usr/lib64/python2.7/site-packages/numpy/core/numeric.py", line 482, in asarray
return array(a, dtype, copy=False, order=order)
File "/usr/lib64/python2.7/site-packages/PIL/Image.py", line 514, in getattr
raise AttributeError(name)
AttributeError: float

What affects the speed of training?

Might be a stupid question, but here it goes. I upgraded to a Titan X Pascal and, for instance, the training speed of texture_nets increased a lot. The speed of chainer-fast-neuralstyle went up a little at image_size 256, but when I try 512 it slows down dramatically. I have plenty of memory left at 512, so there seems to be something I don't understand about how the GPU works. Where's the bottleneck?

Generating images takes 15 minutes

On an AWS machine (g2.2xlarge), I am trying to generate an image based on the bundled model.
The command I execute is:

Testing - Takes around 15 minutes
python generate.py sample_images/tubingen.jpg -m models/composition.model -o sample_images/output.jpg

I haven't used MS-COCO anywhere, as I am using the existing model.
From what I read in the README this should take ~1 second; however, it takes 15 minutes.
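
One thing worth checking (an assumption based on the training script's -g flag, not verified against generate.py): if the generate script defaults to the CPU when no GPU id is given, the ~1 second figure would only apply with the GPU enabled, e.g.:

python generate.py sample_images/tubingen.jpg -m models/composition.model -o sample_images/output.jpg -g 0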

Here are the steps that I followed (nothing new, just followed the readme):

  1. Git Clone
  2. Setup model "sh setup_model.sh"
  3. Generate image.

Adjust parameters in generate.py as opposed to train.py

Is there a way to vary the style weight (and other parameters) in the generate.py script? It takes so long to train the models that it's hard to experiment with different parameters. If we could adjust the weights after the model has been trained, we could experiment much faster. Might there be a way to do this?

[No issue]

Can you provide a link to a generated vgg16 model file?
I'm having trouble generating it.

How to do real time processing

Dear yusuketomoto,
How can I process video in real time?

Machine configuration:
OS............: ubuntu14.04
Memory........: 32G
CPU...........: Intel Core i7-6700K 4.00GHz*8
DisplayCard...: NVIDIA GeForce GTX 1080
Harddisk......: 256G ssd


gtx1080@suker:~$ nvidia-smi
Tue Aug 23 21:17:24 2016
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.35 Driver Version: 367.35 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1080 Off | 0000:01:00.0 On | N/A |
| 34% 35C P8 10W / 200W | 1306MiB / 8112MiB | 5% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1169 G /usr/bin/X 1014MiB |
| 0 2025 G compiz 59MiB |
| 0 3571 G unity-control-center 2MiB |
| 0 10412 G ...ves-passed-by-fd --v8-snapshot-passed-by- 227MiB |
+-----------------------------------------------------------------------------+
gtx1080@suker:~$

I am looking forward to your reply.
Thanks.
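
For reference, the basic per-frame loop would look something like this (a sketch assuming an OpenCV capture and a stylize(frame) wrapper around the trained FastStyleNet; neither is part of the repo):

import cv2

cap = cv2.VideoCapture(0)            # webcam; or pass a video file path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    styled = stylize(frame)          # one forward pass through the trained model per frame
    cv2.imshow('styled', styled)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

At roughly one second per frame on a single GPU (per the timing reports in the issues above), true real-time would need a smaller image size, a faster model, or spreading frames across GPUs.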

removing the 'dots'

Has anyone had any luck removing the dots from the final image? I have found that this method can produce some very cool results:
[example image attached]

But the results are really kind of unusable with these dot-like artifacts everywhere. Does anyone know if there is a way to train a style without this "dithering"?

real world application idea

Hi guys, I just started playing with fast neural style recently. It's amazing to get it working on my laptop with a GeForce 940M 2 GB GPU, although it's slow. I am thinking about turning this algorithm into a real world application. Besides apps like Prisma, it seems we could easily create animated movies in the future with such technology. Any other thoughts about building real world products?

BTW, have fun with modeling and generating.

Chainer GPU

I am using Windows 7 with Anaconda; whenever I install CuPy and try to run Chainer with the GPU, it crashes. Any step-by-step tutorial on how to ensure the GPU works? The CPU is waaay... too slow: 1500 photos take a whole day.

About the result

First, awesome work; this seems to be the first re-implementation of the paper.

Currently, the results seem a little worse than vanilla neural style or the results in the "perceptual loss" paper; maybe the low TV weight causes the noisy, grid-like results. Are all the hyperparameters the same as the paper's?

Cuda BLAS error on generate.py with custom model

I trained my own model called models/beach.model but I cannot use it thus far:

$ python generate.py content/condon-01.jpg -m models/beach.model -o output/condon-01-illusion.jpg
/home/jamis/.local/lib/python2.7/site-packages/chainer/cuda.py:87: UserWarning: cuDNN is not enabled.
Please reinstall chainer after you install cudnn
(see https://github.com/pfnet/chainer#installation).
  'cuDNN is not enabled.\n'
Traceback (most recent call last):
  File "generate.py", line 30, in <module>
    y = model(x)
  File "/home/jamis/src/chainer-fast-neuralstyle/net.py", line 55, in __call__
    h = self.b1(F.elu(self.c1(x)), test=test)
  File "/home/jamis/.local/lib/python2.7/site-packages/chainer/links/connection/convolution_2d.py", line 82, in __call__
    x, self.W, self.b, self.stride, self.pad, self.use_cudnn)
  File "/home/jamis/.local/lib/python2.7/site-packages/chainer/functions/connection/convolution_2d.py", line 316, in convolution_2d
    return func(x, W, b)
  File "/home/jamis/.local/lib/python2.7/site-packages/chainer/function.py", line 130, in __call__
    outputs = self.forward(in_data)
  File "/home/jamis/.local/lib/python2.7/site-packages/chainer/function.py", line 234, in forward
    return self.forward_gpu(inputs)
  File "/home/jamis/.local/lib/python2.7/site-packages/chainer/functions/connection/convolution_2d.py", line 137, in forward_gpu
    y_mats[i] = W_mat.dot(col_mats[i])
  File "cupy/core/core.pyx", line 1109, in cupy.core.core.ndarray.dot (cupy/core/core.cpp:22101)
  File "cupy/core/core.pyx", line 1876, in cupy.core.core.dot (cupy/core/core.cpp:55022)
  File "cupy/core/core.pyx", line 1958, in cupy.core.core.tensordot_core (cupy/core/core.cpp:56307)
  File "cupy/cuda/device.pyx", line 20, in cupy.cuda.device.get_cublas_handle (cupy/cuda/device.cpp:1234)
  File "cupy/cuda/device.pyx", line 109, in cupy.cuda.device.Device.cublas_handle.__get__ (cupy/cuda/device.cpp:2443)
  File "cupy/cuda/device.pyx", line 110, in cupy.cuda.device.Device.cublas_handle.__get__ (cupy/cuda/device.cpp:2381)
  File "cupy/cuda/cublas.pyx", line 120, in cupy.cuda.cublas.create (cupy/cuda/cublas.cpp:1351)
  File "cupy/cuda/cublas.pyx", line 110, in cupy.cuda.cublas.check_status (cupy/cuda/cublas.cpp:1236)
cupy.cuda.cublas.CUBLASError: CUBLAS_STATUS_NOT_INITIALIZED

Also, as mentioned in another issue, I am being told that I have not installed cuDNN but I believe I have:

ldconfig -p | grep cudnn
        libcudnn.so.5 (libc6,x86-64) => /usr/local/cuda/lib64/libcudnn.so.5
        libcudnn.so.4 (libc6,x86-64) => /usr/local/cuda/lib64/libcudnn.so.4

I reinstalled chainer multiple times. Do I need to restart or something? That seems a bit over the top: CUDA is already in my PATH, along with all the other required environment variables.

I am running on a Titan X and have 2 of them in my machine:

$ nvidia-smi
Tue Aug  9 11:48:30 2016
+------------------------------------------------------+
| NVIDIA-SMI 361.42     Driver Version: 361.42         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX TIT...  Off  | 0000:6D:00.0     Off |                  N/A |
| 22%   32C    P8    14W / 250W |     24MiB / 12287MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX TIT...  Off  | 0000:6E:00.0     Off |                  N/A |
| 22%   43C    P8    31W / 250W |    911MiB / 12285MiB |     29%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    1      4207    G   /usr/lib/xorg/Xorg                             332MiB |
|    1      5307    G   compiz                                         461MiB |
|    1     10126    G   ...inFlow --disable-features=DocumentWriteEv    87MiB |
+-----------------------------------------------------------------------------+

Training against COCO dataset

Can someone briefly explain: when I add a new style image, why is training against 82,000 COCO images required? And how does that generate a model?
Or could you please point me to an article where I can get an overview of the algorithm.
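
In short: the single style image only enters training through fixed Gram-matrix targets, while the feed-forward network must learn to stylize arbitrary content, which is why it is trained against a large, varied content set like COCO. A pseudocode-level sketch of the idea (helper names like gram_matrix, mse, and total_variation are illustrative, not the repo's code):

grams = [gram_matrix(f) for f in vgg(style_image)]       # computed once
for content in coco_images:                              # ~82k content images
    y = fast_style_net(content)                          # the model being trained
    feats_y, feats_c = vgg(y), vgg(content)
    loss = (lambda_feat * mse(feats_y[2], feats_c[2])    # content loss (relu3_3)
            + lambda_style * sum(mse(gram_matrix(fy), g)
                                 for fy, g in zip(feats_y, grams))
            + lambda_tv * total_variation(y))
    loss.backward()
    optimizer.update()

What gets saved as the .model file is the trained fast_style_net weights; VGG stays fixed throughout.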

Model creation script fails with MemoryError

When I run create_chainer_model.py I get a MemoryError while loading the model with CaffeFunction.

Has anybody experienced something similar?

C:\Dev\Sandbox\OpenCV\image_art>C:\Python27\python.exe create_chainer_model.py -g -1
load VGG16 caffemodel
Traceback (most recent call last):
File "create_chainer_model.py", line 32, in
ref = CaffeFunction('VGG_ILSVRC_16_layers.caffemodel')
File "C:\Python27\lib\site-packages\chainer\links\caffe\caffe_function.py", line 127, in init
net.MergeFromString(model_file.read())
File "C:\Python27\lib\site-packages\google\protobuf\internal\python_message.py", line 1082, in MergeFromString
if self._InternalParse(serialized, 0, length) != length:
File "C:\Python27\lib\site-packages\google\protobuf\internal\python_message.py", line 1118, in InternalParse
pos = field_decoder(buffer, new_pos, end, self, field_dict)
File "C:\Python27\lib\site-packages\google\protobuf\internal\decoder.py", line 612, in DecodeRepeatedField
if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
File "C:\Python27\lib\site-packages\google\protobuf\internal\python_message.py", line 1118, in InternalParse
pos = field_decoder(buffer, new_pos, end, self, field_dict)
File "C:\Python27\lib\site-packages\google\protobuf\internal\decoder.py", line 612, in DecodeRepeatedField
if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
File "C:\Python27\lib\site-packages\google\protobuf\internal\python_message.py", line 1118, in InternalParse
pos = field_decoder(buffer, new_pos, end, self, field_dict)
File "C:\Python27\lib\site-packages\google\protobuf\internal\decoder.py", line 212, in DecodePackedField
value.append(element)
File "C:\Python27\lib\site-packages\google\protobuf\internal\containers.py", line 251, in append
self._values.append(self._type_checker.CheckValue(value))
MemoryError

Style Scaling?

I don't see an option to scale the style, and I don't mind doing the work to implement it if it's possible.

I wonder how difficult it would be; any suggestions would be greatly appreciated. A possible starting point is sketched below.
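
One simple interpretation of style scaling (an assumption about what it would mean here): resize the style image before train.py computes its Gram-matrix targets, so coarser or finer style features dominate. An illustrative, untested snippet:

from PIL import Image

style = Image.open(args.style_image).convert('RGB')
scale = 0.5                                   # hypothetical --style_scale value
w, h = style.size
style = style.resize((int(w * scale), int(h * scale)), Image.ANTIALIAS)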

Launch on GPU without cudnn?

Hi! Has anyone tried to run the algorithm on a GPU without cuDNN? I have a Tesla C2075 (it doesn't support cuDNN due to its CUDA compute capability of 2.0) and Windows Server 2012 (Python 2.7 x64). I was able to install Chainer smoothly (both with and without cuDNN), but without cuDNN I get the error "CUDA_ERROR_INVALID_VALUE: invalid argument" when im2col_gpu is called.

Everything works perfectly with cuDNN support (tested on a GTX 760), but I would really like to use the Tesla somehow because of its much greater performance...

Training assistance

python train.py -s <style_image_path> -d <training_dataset_path> -g 0

I don't understand the model training and was wondering if anyone could offer advice. Essentially, I'm unsure what I need in each of these paths in order to build a model that follows a specific style.

For instance, do I only need the one style image that I want in the folder? And what about the dataset: should it be a group of images in a similar style, or do I point it at something specific that already exists?

Thanks.

conv layer used for computing the content loss

First of all, thanks for your great implementation!

The original paper used the feature maps of layer relu2_2 to compute the feature/content loss, while your implementation uses layer relu3_3. Why was this choice made?
I tried changing the code to use relu2_2 for the feature loss, but the results are really bad.
