nagadomi / waifu2x
Image Super-Resolution for Anime-Style Art
Home Page: http://waifu2x.udp.jp/
License: MIT License
It suddenly stopped working after I upgraded the NVIDIA driver on the host and in Docker.
I have no idea why this happened.
root@8927a337995c:~/waifu2x# th waifu2x.lua
/usr/local/bin/luajit: /root/waifu2x/lib/LeakyReLU.lua:1: attempt to index global 'nn' (a nil value)
stack traceback:
/root/waifu2x/lib/LeakyReLU.lua:1: in main chunk
[C]: in function 'require'
waifu2x.lua:4: in main chunk
[C]: in function 'dofile'
/usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x00404ac0
Is it possible to run this application on a system without an NVIDIA graphics card?
Greetings!
I'm trying to train my own networks, but pairwise_transform.lua seems to crash when my image set contains images of different sizes (some are 256x256 pixels, some 512x512, some 1K, and a few 2K).
I get the following error right after I see "# make validation-set":
/Volumes/TORCH_SDK/usr/bin/luajit: bad argument #2 to '?' (upper bound must be larger than lower bound at /tmp/luarocks_torch-scm-1-2119/torch7/build/TensorMath.c:4546)
stack traceback:
[C]: at 0x01d0f360
[C]: in function 'random'
.../Downloads/waifu_stuff/waifu2x/lib/pairwise_transform.lua:59: in function 'transformer'
...even/Downloads/waifu_stuff/waifu2x/lib/minibatch_adam.lua:31: in function 'minibatch_adam'
train.lua:121: in function 'train'
train.lua:163: in main chunk
[C]: in function 'dofile'
...umes/TORCH_SDK/usr/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x0101baee70
If I make sure that all my input files are exactly the same size (i.e., 1K), then it seems to work (well, I think it is; it's happily running in the background right now and I can hear my GPU working up a sweat).
Is this a bug? Or is it a feature?
Cheers,
-CMPX
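For what it's worth, the traceback points at torch's random being called with an upper bound that is not larger than its lower bound, which is consistent with an image being smaller than the training crop size (the random crop offset range becomes empty). A minimal sketch, in plain Python with hypothetical sizes, of pre-filtering a dataset so every image can contain a full crop:

```python
# Hypothetical pre-filter: drop images smaller than the training crop size,
# since cropping e.g. a 256x256 image with a larger crop window leaves an
# empty random-offset range (upper bound <= lower bound).
CROP_SIZE = 512  # assumption: whatever crop size your training run uses

def filter_by_size(sizes, crop_size=CROP_SIZE):
    """Keep only (width, height) pairs that can contain a crop_size crop."""
    return [(w, h) for (w, h) in sizes if w >= crop_size and h >= crop_size]

sizes = [(256, 256), (512, 512), (1024, 1024), (2048, 2048)]
print(filter_by_size(sizes))  # the 256x256 entries are dropped
```

Resizing everything to a uniform size, as the poster did, has the same effect, which is why that workaround succeeds.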
I know I saw this issue on Linux, but I cannot remember what I did to fix it.
This is the issue. I could use some help debugging; I am new to Lua.
$ th waifu2x.lua
/Users/grmrgecko/torch/install/bin/luajit: .../torch/install/share/lua/5.1/nn/SpatialConvolutionMM.lua:73: attempt to index field 'nn' (a nil value)
stack traceback:
.../torch/install/share/lua/5.1/nn/SpatialConvolutionMM.lua:73: in function 'updateOutput'
.../grmrgecko/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
lib/reconstruct.lua:41: in function 'reconstruct_rgb'
lib/reconstruct.lua:153: in function 'image_f'
waifu2x.lua:56: in function 'convert_image'
waifu2x.lua:198: in function 'waifu2x'
waifu2x.lua:203: in main chunk
[C]: in function 'dofile'
...ecko/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x0104588bc0
As seen in srcnn.lua, waifu2x uses a 7-layer CNN with no pooling layers. Yet the original paper suggests that the 3-layer network works even better than a deeper net and has better runtime performance.
Since the training set used here to generate the parameters only consists of 7000 images, maybe a simpler net is better?
I'm writing a clone of Waifu2x using OpenCL, which would allow us poor Windows users to enjoy our waifus without having to use the web site (and without resolution restrictions).
After a long and unfair fight, I got my GTX 970 to scale a 667x1000 image to 1334x2000 in about 40 seconds, which is a great improvement over waifu2x-converter-cpp, which does the same thing in about 3 minutes.
But your web site does it in about four seconds.
Does that mean there is a large room for improvement for me, or do you run a large cluster of videocards on your site?
Hey, I'd like to suggest dither removal or undithering for waifu2x.
By the way, what prompted me to suggest this was the discovery that earlier versions of waifu2x-caffe, and I presume waifu2x, were quite good at removing the dither from iDOLM@STER Cinderella Girls Starlight Stage cards using noise removal, but not so much anymore:
Original image:
2x Upscale w/ Level 2(High) Noise Removal using waifu2xcaffe 1.0.5:
2x Upscale w/ Level 2 Noise Removal using latest waifu2x (waifu2x caffe 1.0.7 gives similar results)
Actually, one interesting thing to note is that Level 2 noise removal actually removes less dither now than Level 1:
Is it possible to upscale by a custom amount?
Instead of only 1.6x or 2x?
Like 2.25x..
or 1.5x, or whatever we need?
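Until arbitrary factors are supported, a common workaround is to upscale by 2x with waifu2x and then downscale with a conventional resizer to hit the exact target factor. A quick sketch of the arithmetic (plain Python; the actual two-step pipeline would use waifu2x plus any ordinary image resizer):

```python
def two_step_dims(width, height, target_factor):
    """Upscale 2x first, then compute the conventional-resize target that
    yields an overall target_factor (assumes 1 < target_factor <= 2)."""
    up_w, up_h = width * 2, height * 2          # after the 2x upscale
    out_w = round(width * target_factor)        # final desired size
    out_h = round(height * target_factor)
    return (up_w, up_h), (out_w, out_h)

print(two_step_dims(800, 600, 1.5))  # ((1600, 1200), (1200, 900))
```

Downscaling a sharp 2x result generally looks better than upscaling past the target, so doing the conventional resize second is the safer order.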
When selecting the high option in noise reduction the following is sent back via the web api.
Output after link due to GitHub field length restriction.
http://pastebin.com/GhLZYGzP
I uploaded Death Note EP 01 as a test at 960p on nyaa.
And they deleted it a few minutes later, stating that upscales are pointless and waifu2x is shit.
Here is the chat:
http://s7.postimg.org/9shb2sjbf/Nyaa_Mod.jpg
And here is the Episode in case you want to watch it:
[deleted by admin]
Pretty easy to reproduce: go to http://waifu2x.udp.jp/ and try upscaling any image larger than 1280x1280 (e.g. http://i.imgur.com/6aw75KB.png is 1280x1281).
When trying to process semi-transparent pngs, a white background is added.
I have a project where I need to upscale images, and waifu2x is the only algorithm that looks good enough, but unfortunately, I can't use it for semi-transparent images :(
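A common workaround (a hypothetical sketch, not something waifu2x does itself) is to split the image into its RGB and alpha planes, upscale each separately, and recombine, so the alpha channel survives. The sketch below uses nearest-neighbor as a stand-in for the real upscaler and plain nested lists instead of a real image library:

```python
def upscale2x_nearest(plane):
    """Stand-in for the real upscaler: nearest-neighbor 2x on a 2-D plane."""
    out = []
    for row in plane:
        doubled = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(doubled)
        out.append(list(doubled))                   # duplicate each row
    return out

def upscale_rgba(pixels):
    """pixels: rows of (r, g, b, a). Upscale color and alpha separately,
    then zip the planes back into RGBA tuples."""
    planes = [[[px[c] for px in row] for row in pixels] for c in range(4)]
    up = [upscale2x_nearest(p) for p in planes]
    h, w = len(up[0]), len(up[0][0])
    return [[tuple(up[c][y][x] for c in range(4)) for x in range(w)]
            for y in range(h)]

img = [[(255, 0, 0, 128)]]   # one semi-transparent red pixel
print(upscale_rgba(img))     # 2x2 result, alpha preserved at 128
```

In a real pipeline you would feed the RGB plane (flattened onto any solid background) and the alpha plane through waifu2x and merge the outputs; the white-background behavior reported above is what happens when the alpha plane is simply discarded.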
I've installed Torch7 with torch/distro, which also installed lots of luarocks, including cutorch, cunn, and graphicsmagick. The compilation and installation went well with proper dependencies. The installation directory is /home/vbchunguk/torch/install/.
With the previous version (the cuDNN version), it exited with an error because I don't have a cuDNN-compatible GPU. Now waifu2x doesn't need cuDNN anymore, so I tried to run waifu2x again, with hope. But this error occurred:
$ primusrun th waifu2x.lua
/home/vbchunguk/torch/install/bin/luajit: .../vbchunguk/torch/install/share/lua/5.1/nn/Sequential.lua:29: bad argument #1 (field padW does not exist)
stack traceback:
[C]: in function 'updateOutput'
.../vbchunguk/torch/install/share/lua/5.1/nn/Sequential.lua:29: in function 'forward'
/home/vbchunguk/waifu2x/lib/reconstruct.lua:19: in function 'reconstruct_layer'
/home/vbchunguk/waifu2x/lib/reconstruct.lua:45: in function 'image'
waifu2x.lua:36: in function 'convert_image'
waifu2x.lua:116: in function 'waifu2x'
waifu2x.lua:121: in main chunk
[C]: in function 'dofile'
...nguk/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x00405be0
I'm using a GeForce GT 520M (CUDA compute capability 2.1) on a laptop, and I used primusrun because the laptop uses Optimus. Is there a problem with not using Ubuntu?
It's called 2x and says it does 2x upsampling, but it's actually 4x: it doubles both the horizontal and the vertical, giving you 4 times the original resolution.
If you feed it a 500x500 image (250,000 pixels) and set it to 2x, then you will get a 1000x1000 (1,000,000 pixels) image in return.
That's 4 times the pixel count of the original image. Should be called waifu4x, really.
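For the record, the arithmetic behind the complaint: a 2x scale factor per dimension quadruples the pixel count, which is the usual convention for "2x" in image scaling.

```python
def upscaled_pixels(width, height, factor=2):
    """Pixel counts before and after scaling each dimension by `factor`."""
    before = width * height
    after = (width * factor) * (height * factor)
    return before, after

before, after = upscaled_pixels(500, 500)
print(before, after, after // before)  # 250000 1000000 4
```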
When following the video conversion procedure (substituting avconv for ffmpeg), the frames come out of order.
The extracted frames in the "frames" folder are in the right order.
The frames in the "new-frames" folder are out of order, so when the video is rebuilt, it is out of order.
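One likely culprit (an assumption, since the exact procedure isn't shown here): frame filenames without zero-padding sort lexicographically, so frame_10.png lands before frame_2.png when the converted frames are globbed back together. Zero-padded names avoid this:

```shell
# Unpadded numbers sort lexicographically, which scrambles the order:
printf 'frame_2.png\nframe_10.png\n' | sort
# frame_10.png comes first, which is wrong

# Zero-padded numbers sort correctly:
printf 'frame_000002.png\nframe_000010.png\n' | sort
# frame_000002.png comes first

# Hypothetical extraction command producing padded names (adjust to taste):
# ffmpeg -i input.mkv frames/frame_%06d.png
```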
After following the installation instructions on Ubuntu 15.04, when I try to run, I get the following error:
$ th waifu2x.lua
/usr/local/bin/luajit: /usr/local/share/lua/5.1/trepl/init.lua:363: cuda runtime error (30) : unknown error at /tmp/luarocks_cutorch-scm-1-9445/cutorch/lib/THC/THCGeneral.c:16
stack traceback:
[C]: in function 'error'
/usr/local/share/lua/5.1/trepl/init.lua:363: in function 'require'
/data/repos/lua/waifu2x/lib/portable.lua:2: in main chunk
[C]: in function 'require'
waifu2x.lua:1: in main chunk
[C]: in function 'dofile'
/usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x00404270
I tried the solution for the similar-looking problem #3
$ echo $LD_LIBRARY_PATH
/usr/local/lib
But I still get the same error.
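cuda runtime error (30) is a generic initialization failure. One thing worth checking (an assumption, since only /usr/local/lib is on the path above) is whether the CUDA libraries themselves are on the loader path; a driver/kernel-module version mismatch after an upgrade is the other common cause of error 30.

```shell
# Add the CUDA library directory to the search path for this shell
# (typical location; adjust if CUDA is installed elsewhere):
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```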
[web.lua] Error in RequestHandler, thread: 0x402779a8 is dead.
▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼▼
...ntu/torch/install/share/lua/5.1/graphicsmagick/Image.lua:432: missing declaration for symbol 'SincFastFilter'
stack traceback:
.../ubuntu/torch/install/share/lua/5.1/turbo/httpserver.lua:278: in function <.../ubuntu/torch/install/share/lua/5.1/turbo/httpserver.lua:255>
[C]: in function 'xpcall'
/home/ubuntu/torch/install/share/lua/5.1/turbo/iostream.lua:553: in function </home/ubuntu/torch/install/share/lua/5.1/turbo/iostream.lua:544>
[C]: in function 'xpcall'
/home/ubuntu/torch/install/share/lua/5.1/turbo/ioloop.lua:568: in function </home/ubuntu/torch/install/share/lua/5.1/turbo/ioloop.lua:567>
▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲▲
This appears when I either try to download or view the file. However, it only happens when I choose to upscale it.
I installed the dependencies, cloned the repository to my home folder, and ran the following:
th ~/waifu2x/waifu2x.lua
but received the following error:
/usr/local/bin/luajit: /home/expenses/waifu2x/lib/image_loader.lua:77: images/miku_small.png: failed to load image
stack traceback:
[C]: in function 'error'
/home/expenses/waifu2x/lib/image_loader.lua:77: in function 'load_float'
/home/expenses/waifu2x/waifu2x.lua:14: in function 'convert_image'
/home/expenses/waifu2x/waifu2x.lua:116: in function 'waifu2x'
/home/expenses/waifu2x/waifu2x.lua:121: in main chunk
[C]: in function 'dofile'
/usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x00405610
Was there a specific location I was meant to clone the repo to? I felt the README was pretty clear up to that point.
Noise training throws an error. It seems to be line 113 in lib/pairwise_transform.lua:
113. x:samplingFactors({1.0, 1.0, 1.0})
I couldn't find a use for this line; commenting it out allowed training to complete successfully.
I'm not very familiar with Ubuntu and/or Lua. Does it work on Windows? If yes, how?
Upscaling images that are already large may seem to be a bit wasteful, but I'm sure some people (me) have always wanted to do it with some images... but that, of course, is restricted due to it probably being very resource-intensive.
So I suggest a donate button that then removes some of the restrictions (to an extent) that will allow donors to upscale larger images.
Failed to load resource: the server responded with a status of 405 (Method Not Allowed)
Currently I am using Arch Linux and am having a hard time finding the dependencies.
It would be very nice to provide the demo server's image for download.
This is an amazing image rescaler.
How would one install it on typical Linux distros (such as Fedora, openSUSE, Ubuntu, from source, etc.)?
It would be great to have the instructions in the README.md file or on a separate page (but linked from the README).
Since the issue tracker is apparently the official forum...
While it's prohibitively expensive to use at load time in a video game, I do find it highly useful to see what it can do with a compressed 512x512 JPEG texture for an anime-styled character (for, say, a heavily compacted ~30MB release).
Model screenshot of JPEG texture
Same model, but with Waifu2x'd texture
Can't really use it as is; I use AMD @_@ and it would need a plain C port with OpenCL, hooked into Upload32 in tr_image.c, for it to really work. It would be too cumbersome to throw every JPEG texture through the web demo.
Waifu2x is an amazing edgy filter/scaler, thanks! It ends my habit of GIMP Smart Upscale + (Dilate + Octave + Darkness layer) for sure.
It took me a while to get everything installed and working. Right now I am doing noise training with about 300 images, and it is currently on run #53. Just curious: is there a set number of runs it goes through?
When installing the turbo luarocks package I get the following output.
Installing https://raw.githubusercontent.com/rocks-moonscript-org/moonrocks-mirror/master/turbo-1.1-5.rockspec...
Using https://raw.githubusercontent.com/rocks-moonscript-org/moonrocks-mirror/master/turbo-1.1-5.rockspec... switching to 'build' mode
Cloning into 'turbo'...
remote: Counting objects: 162, done.
remote: Compressing objects: 100% (158/158), done.
remote: Total 162 (delta 15), reused 92 (delta 2), pack-reused 0
Receiving objects: 100% (162/162), 635.76 KiB | 0 bytes/s, done.
Resolving deltas: 100% (15/15), done.
Checking connectivity... done.
Note: checking out 'b4404dce73284f9e0d6d10e5df5bd198dff5a66f'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b new_branch_name
Warning: variable CFLAGS was not passed in build_variables
make -C deps/http-parser library
make[1]: Entering directory '/tmp/luarocks_turbo-1.1-5-7996/turbo/deps/http-parser'
gcc -I. -DHTTP_PARSER_STRICT=0 -Wall -Wextra -Werror -O3 -fPIC -c http_parser.c -o libhttp_parser.o
gcc -shared -Wl,-soname=libhttp_parser.so.2.1 -o libhttp_parser.so.2.1 libhttp_parser.o
make[1]: Leaving directory '/tmp/luarocks_turbo-1.1-5-7996/turbo/deps/http-parser'
gcc -Ideps/http-parser/ -shared -fPIC -O3 -Wall -g deps/http-parser/libhttp_parser.o deps/turbo_ffi_wrap.c -o libtffi_wrap.so -lcrypto -lssl
In file included from deps/turbo_ffi_wrap.c:32:0:
deps/turbo_ffi_wrap.h:33:25: fatal error: openssl/ssl.h: No such file or directory
#include <openssl/ssl.h>
^
compilation terminated.
Makefile:68: recipe for target 'all' failed
make: *** [all] Error 1
Error: Build error: Failed building.
I have checked and openssl is installed.
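For what it's worth, `openssl/ssl.h` lives in the OpenSSL development headers, which Debian-family distributions package separately from the runtime library. Having `openssl` installed is not enough for the turbo build; it needs the `-dev` package (package name assumed for Debian/Ubuntu):

```shell
# Report whether a header file is present at the given path.
header_present() {
    if [ -f "$1" ]; then echo yes; else echo no; fi
}

header_present /usr/include/openssl/ssl.h
# If it prints "no", install the headers (Debian/Ubuntu package name):
# sudo apt-get install libssl-dev
```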
I'm going to remove the (.t7|.json) files from the git repo (and git history) in the next version update.
Sorry, it will break cloned/forked repositories.
Would it even be possible to use OpenCL to perform the heavy lifting behind this scaler? I don't intend to make it anywhere near real time, I just want to be able to run it without an NVidia card, because I also want to see how well it works at handling large images. Namely, scaling already large (1080p wallpaper) to obscenely large (5120x2880 wallpaper).
I have no idea how much memory that would require, but would a 4GB video card be anywhere near sufficient?
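As a rough sanity check on the memory question (back-of-the-envelope only; the real footprint depends on how the convolution is tiled and how many intermediate feature maps are held at once): a single float32 RGB buffer at 5120x2880 is already sizable, and each convolutional layer's feature maps multiply that.

```python
def buffer_mib(width, height, channels=3, bytes_per_value=4):
    """Size in MiB of one float32 image buffer at the given resolution."""
    return width * height * channels * bytes_per_value / (1024 * 1024)

print(buffer_mib(5120, 2880))  # 168.75 MiB for one RGB float32 buffer
```

Since intermediate layers carry tens of feature-map channels rather than 3, a whole-image pass at 5K would plausibly exceed 4 GB, which is why implementations process large images in tiled blocks.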
I might be missing something here but when I compare the completed file in the cache folder with the one the API lets me download, I find that on average the file size is at least four times larger on the downloaded image. If I open the downloaded image and save it again in GIMP it closely matches the filesize of the completed file in the cache folder.
@nagadomi
Recently I have been trying to reproduce your scale2x work with Caffe. The network is set up as closely to waifu2x as possible:
128x128 input, 114x114 output, LeakyReLU, MSE loss.
For now, the solver uses SGD, not Adam as in your implementation.
The training data (5000 images) was generated by the waifu2x code. Batch size 2, trained for 100,000 iterations. Base learning rate 0.00025, with the learning rate updated using Caffe's 'inv' policy.
The training loss gets to 0.0020 (worse than waifu2x, which gives 0.00035~0.00028). I am not sure whether the network parameters have converged, because the loss begins to oscillate around 0.0020 from 10,000 iterations onward.
The test image result is not as good as yours; it is a little bit blurry, as follows:
Do you have any ideas about this? Do the solver parameters need more fine-tuning? Or is a better solver method, e.g. Adam, essential?
Looking forward to your reply. Sorry that this is not a development issue ~~
I am trying to train a dataset to get similar results like you did with the new photo model you implemented. I read in previous posts that you have trained it using 5000 high resolution images. Would you mind sharing what dataset you used and what the average size was of the input pictures? Would you expect any significant improvement using a larger dataset with higher resolution images?
So I am thinking of buying a graphics card for this. As I gather, it must be an NVIDIA card that supports compute capability 3, but even with that requirement there are so many choices of graphics card. Do I need a top-end one? Can you tell me which one you use? How fast or slow is this program with your graphics card?
I don't want to spend a lot of money if I don't need to, but on the other hand, if I spend more money, what do I get? I hope you can give some advice as to where the sweet spot is. Also, how much RAM on the computer do you suggest?
Do I need to use train.lua, or is convert_data.lua enough? I used convert_data.lua without any errors and trained with 500 images, but when I try train.lua it errors out. Examples:
th train.lua -method noise -noise_level 1 -test images/miku_noise.png
{
core : 2
batch_size : 2
method : "noise"
validation_crops : 40
noise_level : 1
epoch : 200
data_dir : "./data"
test : "images/miku_noise.png"
model_file : "./models/noise1_model.t7"
scale : 2
image_list : "./data/image_list.txt"
model_dir : "./models"
crop_size : 128
learning_rate : 0.00025
block_offset : 7
images : "./data/images.t7"
seed : 11
validation_ratio : 0.1
}
/root/torch/install/bin/luajit: /root/waifu2x-master/lib/image_loader.lua:48: attempt to index local 'fp' (a nil value)
stack traceback:
/root/waifu2x-master/lib/image_loader.lua:48: in function 'load_float'
train.lua:75: in function 'train'
train.lua:139: in main chunk
[C]: in function 'dofile'
/root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x004064d0
th cleanup_model.lua -model models/noise1_model.t7 -oformat ascii
/root/torch/install/bin/luajit: /root/torch/install/share/lua/5.1/torch/File.lua:277: unknown object
stack traceback:
[C]: in function 'error'
/root/torch/install/share/lua/5.1/torch/File.lua:277: in function 'readObject'
/root/torch/install/share/lua/5.1/torch/File.lua:294: in function 'load'
cleanup_model.lua:63: in main chunk
[C]: in function 'dofile'
/root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x004064d0
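Regarding the first trace: 'fp' being nil in image_loader.lua usually means the file could not be opened at all. A quick sanity check before training (hypothetical helper; images/miku_noise.png is the path from the command above):

```shell
# Report whether an image path used by train.lua actually exists.
check_image() {
    if [ -f "$1" ]; then
        echo "found: $1"
    else
        echo "missing: $1"
    fi
}

check_image images/miku_noise.png
```

The second trace ("unknown object" from cleanup_model.lua) is consistent with the model file from the failed training run being absent or incomplete, so fixing the first error may clear the second as well.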
From previous posts I can see that training is quite resource-hungry. Would training on a CPU-only machine work?
Is there a specific version for the Nvidia GPU drivers and CUDA that are required?
I am having a lot of issues on Ubuntu 14.04 with the latest CUDA 7 and the 352.09 driver installation.
Currently I get: http://pastebin.com/FaCtmFXU
modprobe: ERROR: could not insert 'nvidia_331_uvm': Invalid argument
/usr/local/bin/luajit: /usr/local/share/lua/5.1/trepl/init.lua:363: /usr/local/share/lua/5.1/trepl/init.lua:363: /tmp/luarocks_cutorch-scm-1-7286/cutorch/lib/THC/THCGeneral.c(10) : cuda runtime error (30) : unknown error at /tmp/luarocks_cutorch-scm-1-7286/cutorch/lib/THC/THCGeneral.c:241
stack traceback:
[C]: in function 'error'
/usr/local/share/lua/5.1/trepl/init.lua:363: in function 'require'
waifu2x.lua:1: in main chunk
[C]: in function 'dofile'
/usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x00404ac0
The http://waifu2x.udp.jp/ site is down.
Successfully installed everything; I can use the command line to convert images, but the web GUI errors out.
[root@waifu2x waifu2x-master]# th web.lua
/root/torch/install/bin/luajit: /root/torch/install/share/lua/5.1/trepl/init.lua:363: /root/torch/install/share/lua/5.1/trepl/init.lua:363: /root/torch/install/share/lua/5.1/trepl/init.lua:363: /usr/local/share/lua/5.1/turbo/epoll_ffi.lua:69: bad argument #1 to 'new' (size of C type is unknown or too large)
stack traceback:
[C]: in function 'error'
/root/torch/install/share/lua/5.1/trepl/init.lua:363: in function 'require'
web.lua:4: in main chunk
[C]: in function 'dofile'
/root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x004064d0
I did this:
tar -xzvf cudnn-6.5-linux-x64-v2.tgz
cd cudnn-6.5-linux-x64-v2
sudo cp lib* /usr/local/cuda/lib64/
sudo cp cudnn.h /usr/local/cuda/include/
But I get this when testing waifu2x:
guy@Guy-Ubuntu-PC:~/Waifu$ th waifu2x.lua
/usr/local/share/lua/5.1/cudnn/ffi.lua:376: libcudnn.so: cannot open shared object file: No such file or directory
/usr/local/bin/luajit: /usr/local/share/lua/5.1/trepl/init.lua:363: /usr/local/share/lua/5.1/cudnn/ffi.lua:379: 'libcudnn.so not found in library path.
Please install CuDNN from https://developer.nvidia.com/cuDNN
Then make sure all the files named as libcudnn.so* are placed in your library load path (for example /usr/local/lib , or manually add a path to LD_LIBRARY_PATH)
stack traceback:
[C]: in function 'error'
/usr/local/share/lua/5.1/trepl/init.lua:363: in function 'require'
waifu2x.lua:1: in main chunk
[C]: in function 'dofile'
/usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x00406260
So I assume cuDNN is not properly installed. I searched everywhere for how to install it, but I got nothing more than what I did.
Can you please tell me how to properly install it?
*This is on Ubuntu 14.04 64-bit
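Since the libraries were copied into /usr/local/cuda/lib64 but the loader reports libcudnn.so missing from its search path, one plausible fix (an assumption; paths vary per install) is to make that directory visible to the dynamic linker, exactly as the error message suggests:

```shell
# Option 1: extend the search path for the current shell session.
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"

# Option 2 (persistent): register the directory with ldconfig.
# echo /usr/local/cuda/lib64 | sudo tee /etc/ld.so.conf.d/cuda.conf
# sudo ldconfig
```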
My site works fine using http but not https.
Here is my nginx config for that server.
upstream waifu {
server 192.168.0.100:8812;
}
server {
server_name waifu.domain.tld;
listen 80;
listen 443;
index index.php index.html index.htm;
access_log logs/saren/domain.tld/waifu.log;
error_log logs/saren/domain.tld/waifu.error.log;
ssl on;
ssl_certificate cert/waifu_domain_tld.crt;
ssl_certificate_key cert/waifu_domain_tld.key;
location / {
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://waifu;
}
}
Some related nginx http config.
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !S$
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
I have fiddled with _G.TURBO_SSL in web.lua; it displays the same error.
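One thing that stands out in the config above (a guess, not a confirmed diagnosis): `ssl on;` applies SSL to every `listen` directive in the server block, including port 80. The usual form is to mark only the 443 listener as SSL, along the lines of this fragment:

```nginx
server {
    server_name waifu.domain.tld;
    listen 80;
    listen 443 ssl;          # per-listener ssl instead of a blanket "ssl on;"
    ssl_certificate     cert/waifu_domain_tld.crt;
    ssl_certificate_key cert/waifu_domain_tld.key;

    location / {
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://waifu;
    }
}
```

If nginx terminates TLS this way, the waifu2x backend never needs _G.TURBO_SSL at all, since the proxied connection to it stays plain HTTP.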
Hi, I got the following error when I ran convert_data.lua.
My training image size is 48x128.
.../waifu2x/lib/image_loader.lua:54: attempt to index local 'fp' (a nil value)
stack traceback:
.../waifu2x/lib/image_loader.lua:54: in function 'load_byte'
convert_data.lua:23: in function 'load_images'
convert_data.lua:40: in main chunk
[C]: in function 'dofile'
.../torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x00406670
many thanks
I installed the dependencies (torch and CUDA) following your installation instructions. Now when I type 'th waifu2x.lua' I get the following error message:
/home/user/torch/install/share/lua/5.1/torch/File.lua:262: unknown Torch class <
V>
stack traceback:
[C]: in function 'error'
/home/user/torch/install/share/lua/5.1/torch/File.lua:262: in function 'readObject'
/home/user/torch/install/share/lua/5.1/torch/File.lua:319: in function 'load'
waifu2x.lua:32: in function 'convert_image'
waifu2x.lua:116: in function 'waifu2x'
waifu2x.lua:121: in main chunk
[C]: in function 'dofile'
...user/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:133: in main chunk
[C]: at 0x00406670
Seems like I am still missing some dependencies.
I haven't done any tests myself, I'm simply reporting based on this post: https://archive.moe/a/thread/125688135/#125692747
I'm guessing the cause is the colorspace used. As far as I can tell from the source, waifu2x uses the YUV colorspace, which is not perceptually uniform and can cause ringing or shifted brightness during scaling operations.
see http://www.imagemagick.org/Usage/resize/#resize_colorspace and http://www.imagemagick.org/discourse-server/viewtopic.php?p=89751#p89751 for possible problems with non-uniform color spaces.
Using LUV or Lab colorspaces instead might yield better results, although you might have to retrain your models if you change that.
Edit: fixed terminology.
This is probably intentional, but the images output by waifu2x have 16 bits per channel, which adds considerable size. Is there an option to lower the output bit depth?
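The size impact is easy to estimate (uncompressed payload only; real PNG sizes vary with compression). Halving the bit depth halves the raw pixel data, which is why a post-processing step such as ImageMagick's standard `convert in.png -depth 8 out.png` (applied outside waifu2x) recovers most of the size:

```python
def raw_bytes(width, height, channels=3, bits_per_channel=16):
    """Uncompressed pixel payload for an image at the given bit depth."""
    return width * height * channels * bits_per_channel // 8

hi = raw_bytes(1000, 1000, bits_per_channel=16)
lo = raw_bytes(1000, 1000, bits_per_channel=8)
print(hi, lo, hi // lo)  # 16-bit carries exactly twice the raw payload
```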
I noticed the web.lua crashed after 10 hours of running, with this message:
/root/torch/install/bin/luajit: /usr/local/share/lua/5.1/turbo/ioloop.lua:518: attempt to index local 'handler' (a nil value)
stack traceback:
/usr/local/share/lua/5.1/turbo/ioloop.lua:518: in function '_run_handler'
/usr/local/share/lua/5.1/turbo/ioloop.lua:436: in function '_event_poll'
/usr/local/share/lua/5.1/turbo/ioloop.lua:425: in function 'start'
web.lua:204: in main chunk
[C]: in function 'dofile'
/root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x004064d0
What would happen if I were to train it some more on 2x upscaling?
-Would it reset what it already trained?
-Or would it add improvement to the already trained 2x model?
Also, if I wanted to train it for 2.26218461x, would it work? Or does it need to be 2x, 3x, and so on?
Is it possible to make a libretro/RetroArch version? For this, it may be necessary to create a Cg/GLSL version?
https://github.com/libretro/RetroArch/wiki
https://github.com/libretro/common-shaders
Currently I am using Arch Linux.
My GPU: GTX 660 Ti
Relevant packages:
I can confirm that I have followed all the steps installing luarocks packages, including Torch.
However, libsnappy-dev is not present, but snappy 1.1.3-1 is, which should be the same thing. I tried downgrading snappy to 1.1.0-2, which is the oldest available version, but there is no difference.
Any ideas?
1st, thank you for your great work. This is awesome.
I have a question. I am making some interesting #devart using Google's neural experiment (https://pbs.twimg.com/media/CVHNNKvUYAAVCxA.jpg). My issue is that I can only produce small images of 512px-600px due to GPU memory limits. The usual process to create the art image is that I use a group of source images and apply them as a style to a destination image. So the number of sources is very small (like 4 to 6), but they are usually of good quality and resolution (at least 2-3 times the resolution of the resulting image).
I wanted to use those to train a model that could be used to scale the small image. My thinking is that since the small picture is made of elements found in the higher-resolution sources, they should scale quite well with a model learned from them, even if there are only a few sources. Am I right?
I tried to create a new learning model, but I keep getting no resulting models when following the instructions on the site. I use an Amazon EC2 GPU VM... could this be the issue?
Let me know what you think. While it is running:
th train.lua -method scale -model_dir models/my_model -test ../neural-style/out/beachhousev2.png -style photo
I get:
{
random_overlay_rate : 0
images : "./data/images.t7"
active_cropping_rate : 0.5
batch_size : 32
model_file : "models/my_model/scale2.0x_model.t7"
scale : 2
learning_rate : 0.001
nr_rate : 0.75
random_unsharp_mask_rate : 0
max_size : 256
validation_crops : 80
thread : -1
patches : 16
method : "scale"
noise_level : 1
style : "photo"
gpu : -1
data_dir : "./data"
test : "../neural-style/out/beachhousev2.png"
random_half_rate : 0
validation_rate : 0.05
jpeg_chroma_subsampling_rate : 0
image_list : "./data/image_list.txt"
model_dir : "models/my_model"
active_cropping_tries : 10
crop_size : 46
inner_epoch : 4
color : "rgb"
random_color_noise_rate : 0
backend : "cunn"
seed : 11
epoch : 30
}
# make validation-set
load .. 6
#1
## resampling
[==================================== 6/6 ======================================>] ETA: 0ms | Step: 62ms
## update
[==================================== 96/96 ====================================>] ETA: 0ms | Step: 7ms
{
loss : 0.029155457838594
}
.
.
.
# validation
current: nan, best: 100000
#29
## resampling
[==================================== 6/6 ======================================>] ETA: 0ms | Step: 63ms
## update
[==================================== 96/96 ====================================>] ETA: 0ms | Step: 7ms
{
loss : 0.001971720239251
}
# validation
current: nan, best: 100000
## update
[==================================== 96/96 ====================================>] ETA: 0ms | Step: 7ms
{
loss : 0.0019699730134259
}
# validation
current: nan, best: 100000
## update
[==================================== 96/96 ====================================>] ETA: 0ms | Step: 7ms
{
loss : 0.0019685453993993
}
# validation
current: nan, best: 100000
## update
[==================================== 96/96 ====================================>] ETA: 0ms | Step: 7ms
{
loss : 0.0019674577827876
}
# validation
current: nan, best: 100000
#30
## resampling
[==================================== 6/6 ======================================>] ETA: 0ms | Step: 58ms
## update
[==================================== 96/96 ====================================>] ETA: 0ms | Step: 7ms
{
loss : 0.0019724147989311
}
# validation
current: nan, best: 100000
## update
[==================================== 96/96 ====================================>] ETA: 0ms | Step: 7ms
{
loss : 0.0019706121464777
}
# validation
current: nan, best: 100000
## update
[==================================== 96/96 ====================================>] ETA: 0ms | Step: 7ms
{
loss : 0.0019693580947609
}
# validation
current: nan, best: 100000
## update
[==================================== 96/96 ====================================>] ETA: 0ms | Step: 7ms
{
loss : 0.0019682200185748
}
# validation
current: nan, best: 100000
The validation always looks like:
# validation
current: nan, best: 100000
This looks strange somehow. Is this what it should look like?
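A plausible explanation (an assumption based on the log above, not a confirmed diagnosis): only 6 images are loaded, and with validation_rate 0.05 the validation split rounds down to zero images, so the validation score is the mean of an empty set, i.e. NaN, and no model is ever saved because NaN never beats the initial best of 100000. The arithmetic:

```python
import math

def validation_split(n_images, validation_rate):
    """Number of images that land in the validation set under a floor split."""
    return math.floor(n_images * validation_rate)

def mean(values):
    """Mean that goes NaN on an empty set, mimicking the symptom in the log."""
    return sum(values) / len(values) if values else float("nan")

print(validation_split(6, 0.05))  # 0 validation images
print(mean([]))                   # nan, hence "current: nan" every epoch
```

If that is the cause, training on more images, or raising validation_rate so at least one image lands in the split, should make the validation score finite and let the model file get written.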