
faster-high-res-neural-inpainting's Introduction

Attention-Based Summarization

This project contains the ABS neural abstractive summarization system from the paper

 A Neural Attention Model for Abstractive Summarization.
 Alexander M. Rush, Sumit Chopra, Jason Weston.

The release includes code for:

  • Extracting the summarization data set
  • Training the neural summarization model
  • Constructing evaluation sets with ROUGE
  • Tuning extractive features

Setup

To run the system, you will need to have Torch7 installed. You will also need Python 2.7, NLTK, and GNU Parallel to run the data processing scripts. Additionally, the code currently requires a CUDA GPU for training and decoding.

Finally, the scripts require that you set the $ABS environment variable.

> export ABS=$PWD
> export LUA_PATH="$LUA_PATH;$ABS/?.lua"
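
Since training and decoding require a CUDA GPU, a quick sanity check that Torch can see one (a minimal sketch; assumes cutorch is already installed):

> th -e "require 'cutorch'; print(cutorch.getDeviceCount() .. ' CUDA device(s) found')"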

Constructing the Data Set

The model is trained to perform title generation from the first line of newspaper articles. Since the system is completely data-driven, it requires a large set of aligned input-title pairs for training.

To provide these pairs we use the Annotated Gigaword corpus as our main data set. The corpus is available from the LDC, but it requires membership. Once the Annotated Gigaword corpus is obtained, you can simply run the provided script to extract the data set in text format.

Generating the data

To construct the data set, run the following script to produce working_dir/, where 'working_dir/' is the path to the directory where you want to store the processed data. The script 'construct_data.sh' makes use of the 'parallel' utility, so please make sure that it is in your path. WARNING: This may take a couple of hours to run.

 > ./construct_data.sh agiga/ working_dir/

Format of the data files

The above command builds aligned files of the form split.type.txt where split is train/valid/test and type is title/article.

The output of the script is several aligned plain-text files. Each has one title or article per line.

 > head train.title.txt
 australian current account deficit narrows sharply
 at least two dead in southern philippines blast
 australian stocks close down #.# percent
 envoy urges north korea to restart nuclear disablement
 skorea announces tax cuts to stimulate economy

These files can be used to train the ABS system or be used by other baseline models.
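
Since the files are aligned line-by-line, each title file should have exactly as many lines as its article counterpart; a quick sanity check (a sketch, assuming the files live in working_dir/):

> wc -l working_dir/train.title.txt working_dir/train.article.txt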

Training the Model

Once the data set has been constructed, we provide a simple script to train the model.

./train_model.sh working_dir/ model.th

The training process consists of two stages. First, we convert the text files into generic input-title matrices, and then we train a conditional NNLM on this representation.
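
For orientation, the conditional NNLM scores the next title word given a fixed context window and an encoded version of the article; schematically (notation paraphrased from the paper, not exact):

    p(y_{i+1} \mid y_c, x; \theta) \propto \exp(V h + W \, \mathrm{enc}(x, y_c)), \qquad h = \tanh(U [E y_{i-C+1}; \ldots; E y_i])

Here E is the word embedding matrix (-embeddingDim), C is the context window size (-window), and enc is the encoder selected with -auxModel (bag-of-words by default, or the attention-based encoder).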

Once the model has been fully trained (this may require 3-4 days), you can use the test script to produce summaries of any plain text file.

./test_model.sh working_dir/valid.article.filter.txt model.th length_of_summary

Training options

These scripts utilize the Torch code available in $ABS/summary/.

There are two main Torch entry points: one for training the model from data matrices and the other for evaluating the model on plain text.

 > th summary/train.lua -help

 Train a summarization model.

   -articleDir      Directory containing article training matrices. []
   -titleDir        Directory containing title training matrices. []
   -validArticleDir Directory containing article matrices for validation. []
   -validTitleDir   Directory containing title matrices for validation. []
   -auxModel        The encoder model to use. [bow]
   -bowDim          Article embedding size. [50]
   -attenPool       Attention model pooling size. [5]
   -hiddenUnits     Conv net encoder hidden units. [1000]
   -kernelWidth     Conv net encoder kernel width. [5]
   -epochs          Number of epochs to train. [5]
   -miniBatchSize   Size of training minibatch. [64]
   -printEvery      How often to print during training. [1000]
   -modelFilename   File for saving/loading the model. []
   -window          Size of NNLM window. [5]
   -embeddingDim    Size of NNLM embeddings. [50]
   -hiddenSize      Size of NNLM hidden layer. [100]
   -learningRate    SGD learning rate. [0.1]
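
For example, a typical invocation might look like this (the matrix directory names here are hypothetical; train_model.sh wires these up for you):

 > th summary/train.lua -articleDir working_dir/train/article/ \
     -titleDir working_dir/train/title/ \
     -validArticleDir working_dir/valid/article/ \
     -validTitleDir working_dir/valid/title/ \
     -auxModel bow -modelFilename model.th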

Testing options

The run script is used for beam-search decoding with a trained model. See the paper for a description of the extractive features used at decoding time.

> th summary/run.lua -help

-blockRepeatWords Disallow generating a repeated word. [false]
-allowUNK         Allow generating <unk>. [false]
-fixedLength      Produce exactly -length words. [true]
-beamSize         Size of the beam. [100]
-extractive       Force fully extractive summary. [false]
-lmWeight         Feature weight for the neural model. [1]
-unigramBonus     Feature weight for unigram extraction. [0]
-bigramBonus      Feature weight for bigram extraction. [0]
-trigramBonus     Feature weight for trigram extraction. [0]
-lengthBonus      Feature weight for length. [0]
-unorderBonus     Feature weight for out-of-order extraction. [0]
-modelFilename    Model to test. []
-inputf           Input article files.  []
-nbest            Write out the nbest list in ZMert format. [false]
-length           Maximum length of summary. [5]
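
For example, a decode with repeated-word blocking (flag syntax follows Torch's CmdLine conventions, where naming a boolean flag toggles it):

 > th summary/run.lua -modelFilename model.th \
     -inputf working_dir/valid.article.filter.txt \
     -length 14 -blockRepeatWords -beamSize 100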

Evaluation Data Sets

We evaluate the ABS model using the shared task from the Document Understanding Conference (DUC).

This release also includes code for interacting with the DUC shared task on headline generation. The scripts for processing and evaluating on this data set are in the DUC/ directory.

The DUC data set is available online; unfortunately, you must manually fill out a form to request the data from NIST. Send the request to Angela Ellis.

Processing DUC

After receiving credentials you should obtain a series of tar files containing the data used as part of this shared task.

  1. Make a directory DUC_data/ which should contain the given files

    >DUC2003_Summarization_Documents.tgz
    >DUC2004_Summarization_Documents.tgz
    >duc2004_results.tgz
    >detagged.duc2003.abstracts.tar.gz
    
  2. Run the setup script (this requires python and NLTK for tokenization)

    ./DUC/setup.sh DUC_data/

After running the script there should be the directories

   DUC_data/clean_2003/
   DUC_data/clean_2004/

Each contains a file input.txt where each line is a tokenized first line of an article.

 > head DUC_data/clean_2003/input.txt
 schizophrenia patients whose medication could n't stop the imaginary voices in their heads gained some relief after researchers repeatedly sent a magnetic field into a small area of their brains .
 scientists trying to fathom the mystery of schizophrenia say they have found the strongest evidence to date that the disabling psychiatric disorder is caused by gene abnormalities , according to a researcher at two state universities .
 a yale school of medicine study is expanding upon what scientists know  about the link between schizophrenia and nicotine addiction .
 exploring chaos in a search for order , scientists who study the reality-shattering mental disease schizophrenia are becoming fascinated by the chemical environment of areas of the brain where perception is regulated .

As well as a set of references:

> head DUC_data/clean_2003/references/task1_ref0.txt
Magnetic treatment may ease or lessen occurrence of schizophrenic voices.
Evidence shows schizophrenia caused by gene abnormalities of Chromosome 1.
Researchers examining evidence of link between schizophrenia and nicotine addiction.
Scientists focusing on chemical environment of brain to understand schizophrenia.
Schizophrenia study shows disparity between what's known and what's provided to patients.

System output should be added to the directory system/ as task1_{name}.txt. For instance, the script includes a baseline PREFIX system:

DUC_data/clean_2003/system/task1_prefix.txt
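
A PREFIX baseline of this sort simply takes the first few words of each input line as the headline; a minimal sketch (the 8-word cutoff is an arbitrary choice, not necessarily what the included script uses):

 > cut -d' ' -f1-8 DUC_data/clean_2003/input.txt > DUC_data/clean_2003/system/task1_prefix.txt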

ROUGE for Eval

To evaluate the summaries you will need the ROUGE eval system.

The ROUGE script requires output in a very complex HTML form. To simplify this process we include a script to convert the simple output to one that ROUGE can handle.

Export the ROUGE directory and then run the eval scripts:

> export ROUGE={path_to_rouge}

> ./DUC/eval.sh DUC_data/clean_2003/
FULL LENGTH
   ---------------------------------------------
   prefix ROUGE-1 Average_R: 0.17831 (95%-conf.int. 0.16916 - 0.18736)
   prefix ROUGE-1 Average_P: 0.15445 (95%-conf.int. 0.14683 - 0.16220)
   prefix ROUGE-1 Average_F: 0.16482 (95%-conf.int. 0.15662 - 0.17318)
   ---------------------------------------------
   prefix ROUGE-2 Average_R: 0.04936 (95%-conf.int. 0.04420 - 0.05452)
   prefix ROUGE-2 Average_P: 0.04257 (95%-conf.int. 0.03794 - 0.04710)
   prefix ROUGE-2 Average_F: 0.04550 (95%-conf.int. 0.04060 - 0.05026)

Tuning Feature Weights

For our system ABS+ we additionally tune extractive features on the DUC summarization data. The final features we obtained are distributed with the system as tuning/params.best.txt.

The MERT tuning code itself is located in the tuning/ directory. Our setup uses ZMert for this process.

It should be straightforward to tune the system on any development summarization data. Take the following steps to run tuning on the DUC-2003 data set described above.

First copy the reference files over to the tuning directory. For instance, to tune on DUC-2003:

ln -s DUC_data/clean_2003/references/task1_ref0.txt tuning/ref.0
ln -s DUC_data/clean_2003/references/task1_ref1.txt tuning/ref.1
ln -s DUC_data/clean_2003/references/task1_ref2.txt tuning/ref.2
ln -s DUC_data/clean_2003/references/task1_ref3.txt tuning/ref.3

Next copy the SDecoder template (cp SDecoder_cmd.tpl SDecoder_cmd.py) and modify SDecoder_cmd.py to point to the model and input text.

{"model" : "model.th",
 "src" : "/data/users/sashar/DUC_data/clean_2003/input.txt",
 "title_len" : 14}

Now you should be able to run Z-MERT and let it do its thing.

> cd tuning/; java -cp zmert/lib/zmert.jar ZMERT ZMERT_cfg.txt

When Z-MERT has finished, you can run on new data using the command:

> python SDecoder_test.py input.txt model.th

faster-high-res-neural-inpainting's People

Contributors

leehomyc, merofeev


faster-high-res-neural-inpainting's Issues

Error while running commands

I am trying to run the command:
th run_content_network.lua

And I am getting an error message:

/Users/arqam/torch/install/bin/luajit: /Users/arqam/torch/install/share/lua/5.1/trepl/init.lua:389: module 'cudnn' not found:No LuaRocks module found for cudnn
no field package.preload['cudnn']
no file '/Users/arqam/.luarocks/share/lua/5.1/cudnn.lua'
no file '/Users/arqam/.luarocks/share/lua/5.1/cudnn/init.lua'
no file '/Users/arqam/torch/install/share/lua/5.1/cudnn.lua'
no file '/Users/arqam/torch/install/share/lua/5.1/cudnn/init.lua'
no file './cudnn.lua'
no file '/Users/arqam/torch/install/share/luajit-2.1.0-beta1/cudnn.lua'
no file '/usr/local/share/lua/5.1/cudnn.lua'
no file '/usr/local/share/lua/5.1/cudnn/init.lua'
no file '/Users/arqam/.luarocks/lib/lua/5.1/cudnn.so'
no file '/Users/arqam/torch/install/lib/lua/5.1/cudnn.so'
no file './cudnn.so'
no file '/usr/local/lib/lua/5.1/cudnn.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'error'
/Users/arqam/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
run_content_network.lua:6: in main chunk
[C]: in function 'dofile'
...rqam/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x0108248a10

I did as instructed in the previous step, so what could be the possible reason for this error?

If I need to install cudnn, how do I do that? And do we need server GPUs to run it, as I found suggested when I googled?
I don't have one and want to run the code on my MacBook, so is there any way I can do that?
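
For reference, the usual setup on a machine with an NVIDIA GPU looks roughly like this (a sketch; the library path is a hypothetical example). Note that cudnn has no CPU fallback, so a MacBook without an NVIDIA GPU cannot run this code path:

# after downloading cuDNN from developer.nvidia.com
export CUDNN_PATH=/usr/local/cuda/lib64/libcudnn.so
luarocks install cudnn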

Model trained with human face data fails to execute in this code

Hi,
By referring to the link below:
#24
I trained train.lua with human face data in the context encoder https://github.com/pathak22/context-encoder

The trained model, when tested, executes in the context encoder.
However, the same trained model fails to execute in the Faster-High-Res-Neural-Inpainting source code.
When I run the command th run_content_network.lua,
I get the below error log:

Found Environment variable CUDNN_PATH = /home/test/cudnn-8.0-linux-x64-v5.0-ga/cuda/lib64/libcudnn.so.5.0.5{
gpu : 1
model_file : "models/inpaintCenterHumanFaceNoRetrain_500_net_G.t7"
overlapPred : 4
fineSize : 128
}
/home/test/torch/install/bin/luajit: /home/test/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.Sequential:
In 1 module of nn.Sequential:
/home/test/torch/install/share/lua/5.1/nn/THNN.lua:110: bad argument #3 to 'v' (cannot convert 'struct THDoubleTensor *' to 'struct THFloatTensor *')
stack traceback:
[C]: in function 'v'
/home/test/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'SpatialConvolutionMM_updateOutput'
...ya/torch/install/share/lua/5.1/nn/SpatialConvolution.lua:79: in function <...ya/torch/install/share/lua/5.1/nn/SpatialConvolution.lua:76>
[C]: in function 'xpcall'
/home/test/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/test/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/test/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
/home/test/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/test/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
run_content_network.lua:41: in main chunk
[C]: in function 'dofile'
...est/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/test/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/test/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
run_content_network.lua:41: in main chunk
[C]: in function 'dofile'
...est/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

I referred to the link torch/nn#981
to resolve this issue.
However, it's not resolved yet.
I was wondering, given that the model executes in the context encoder, what could be the reason for this error in Faster-High-Res-Neural-Inpainting?
Could you help me with this?
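
The error itself says a DoubleTensor input is hitting FloatTensor weights, so one plausible fix is to make the types agree before the forward pass (a sketch, not the repo's actual code; opt.model_file and input are placeholders):

-- load the model and force float weights
local model = torch.load(opt.model_file):float()
-- make sure the input tensor is float as well
input = input:float()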

Not able to download the content models

Apparently the dropbox account you have as a download server in your bash script is suspended. I get this error when I try to run ./models/download_content_models.sh:

--2017-02-21 12:30:56--  https://www.dropbox.com/s/skco1wdeeq699z1/imagenet_inpaintCenter.t7?dl=0
Resolving www.dropbox.com (www.dropbox.com)... 162.125.1.1
Connecting to www.dropbox.com (www.dropbox.com)|162.125.1.1|:443... connected.
HTTP request sent, awaiting response... 429 Too Many Requests
2017-02-21 12:30:56 ERROR 429: Too Many Requests.

Error when running "th run_content_network.lua"

This is the stack info:
/root/torch/install/bin/luajit: /root/torch/install/share/lua/5.1/trepl/init.lua:389: module 'cudnn' not found:No LuaRocks module found for cudnn
no field package.preload['cudnn']
no file '/root/.luarocks/share/lua/5.1/cudnn.lua'
no file '/root/.luarocks/share/lua/5.1/cudnn/init.lua'
no file '/root/torch/install/share/lua/5.1/cudnn.lua'
no file '/root/torch/install/share/lua/5.1/cudnn/init.lua'
no file './cudnn.lua'
no file '/root/torch/install/share/luajit-2.1.0-beta1/cudnn.lua'
no file '/usr/local/share/lua/5.1/cudnn.lua'
no file '/usr/local/share/lua/5.1/cudnn/init.lua'
no file '/root/.luarocks/lib/lua/5.1/cudnn.so'
no file '/root/torch/install/lib/lua/5.1/cudnn.so'
no file '/root/torch/install/lib/cudnn.so'
no file './cudnn.so'
no file '/usr/local/lib/lua/5.1/cudnn.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'error'
/root/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
run_content_network.lua:6: in main chunk
[C]: in function 'dofile'
/root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00406670

Bilinear Interpolation

In the paper, you said, "Since Context Encoder only works with 128x128 images and when the input is larger, we directly upsample the 128x128 output to 512x512 using bilinear interpolation."
But when I use bilinear interpolation, I find the image blurry.
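
For reference, bilinear upsampling in Torch can be done with image.scale (a sketch; out128 is a placeholder for the 3x128x128 content-network output). Some blur is expected, since interpolation creates no new detail; that is what the texture optimization step is meant to refine:

require 'image'
-- upsample the 3x128x128 output to 3x512x512 with bilinear interpolation
local up = image.scale(out128, 512, 512, 'bilinear')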

nvcc fatal : Unsupported gpu architecture 'compute_75'

I have encountered the following problem when I tried to execute luarocks install cudnn:

CMake Error at /usr/local/Cellar/cmake/3.13.2/share/cmake/Modules/FindCUDA.cmake:696 (message):
Specify CUDA_TOOLKIT_ROOT_DIR
Call Stack (most recent call first):
CMakeLists.txt:7 (FIND_PACKAGE)

The problem was solved by installing CUDA from https://developer.nvidia.com.
But when I ran the command a second time, a new problem arose.

nvcc fatal : Unsupported gpu architecture 'compute_75'
nvcc fatal : Unsupported gpu architecture 'compute_75'
CMake Error at THC_generated_THCReduceApplyUtils.cu.o.Release.cmake:219 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-2575/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCReduceApplyUtils.cu.o

CMake Error at THC_generated_THCBlas.cu.o.Release.cmake:219 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-2575/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCBlas.cu.o

nvcc fatal : Unsupported gpu architecture 'compute_75'
CMake Error at THC_generated_THCSleep.cu.o.Release.cmake:219 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-2575/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCSleep.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCReduceApplyUtils.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
nvcc fatal : Unsupported gpu architecture 'compute_75'
make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCBlas.cu.o] Error 1
CMake Error at THC_generated_THCTensorCopy.cu.o.Release.cmake:219 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-2575/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorCopy.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCSleep.cu.o] Error 1
nvcc fatal : Unsupported gpu architecture 'compute_75'
CMake Error at THC_generated_THCStorageCopy.cu.o.Release.cmake:219 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-2575/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCStorageCopy.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCTensorCopy.cu.o] Error 1
nvcc fatal : Unsupported gpu architecture 'compute_75'
nvcc fatal : Unsupported gpu architecture 'compute_75'
make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCStorageCopy.cu.o] Error 1
CMake Error at THC_generated_THCStorage.cu.o.Release.cmake:219 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-2575/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCStorage.cu.o

CMake Error at THC_generated_THCHalf.cu.o.Release.cmake:219 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-2575/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCHalf.cu.o

nvcc fatal : Unsupported gpu architecture 'compute_75'
CMake Error at THC_generated_THCTensor.cu.o.Release.cmake:219 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-2575/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCTensor.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCHalf.cu.o] Error 1
make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCStorage.cu.o] Error 1
make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCTensor.cu.o] Error 1
make[1]: *** [lib/THC/CMakeFiles/THC.dir/all] Error 2
make: *** [all] Error 2

Error: Failed installing dependency: https://raw.githubusercontent.com/torch/rocks/master/cutorch-scm-1.rockspec - Build error: Failed building.

How can I solve this problem?

Issue While Running run_texture_optimization.lua

Getting this error when I run run_texture_optimization.lua:
/torch/install/share/lua/5.1/loadcaffe/ffi.lua:10: bad argument #1 to 'load' (string expected, got nil)
stack traceback:
[C]: in function 'error'
/users/rajat.a/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
run_texture_optimization.lua:70: in main chunk
[C]: in function 'dofile'
...at.a/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

It would be helpful if this could be resolved.

Exception when running th blend.lua

/root/torch/install/bin/luajit: inconsistent tensor size, expected r_ [4 x 512 x 512], t [4 x 512 x 512] and src [3 x 512 x 512] to have the same number of elements, but got 1048576, 1048576 and 786432 elements respectively at /root/torch/pkg/torch/lib/TH/generic/THTensorMath.c:902
stack traceback:
[C]: at 0x7fa61a805110
[C]: in function 'cmul'
blend.lua:8: in main chunk
[C]: in function 'dofile'
/root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x004064f0
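
The size mismatch (4x512x512 vs. 3x512x512) suggests one input was loaded with an alpha channel; a plausible fix is to drop the extra plane before the cmul (a sketch, not the repo's actual code):

require 'image'
local mask = image.load('mask.png')
-- keep only the RGB planes if an alpha channel is present
if mask:size(1) == 4 then mask = mask:narrow(1, 1, 3) end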

TensorFlow Implementation

Thanks for your excellent work! I wonder if there is any TensorFlow implementation, or other implementations in Python?

Train content network.

Is there code to train the content network? Or could you point us to how to train the content network?

About the texture network code

Hello! I have read your article and the code. There are some questions I would like to ask you:

1. Does the content loss in the texture network code file (transfer_CNNMRF_wrapper) not mean the content loss of the content network?
2. What exactly does the patchmatch do to search for the nearest patch? I printed target_feature_map and found that the 18 convolution feature maps were divided into 1933+493=2426 patches in total, and that these patches were then sampled, with the number of sampled patches equal to the number each feature map had been divided into. I didn't see the code distinguishing between inside and outside the hole; is that different from the article?
3. Does the patchmatch look for the closest patch in the input image's middle-layer feature map at each scale? The input is the image with the hole cut out. Does the result of each iteration become the new input image, which then continues the search in its middle-layer feature map?
4. I printed the output of the net at the first scale. It is a 512x16x16 feature map. How is it turned into the optimal output image x? Does the optimal output image x of the first scale become the image named fake and continue into the texture network? If not, where is the x sent to initialize the next scale?
5. I printed the net and found it only uses ten convolution layers, unlike VGG's sixteen, and no fully-connected layers. Why is this?

I look forward to your reply. Thank you.

how to create mask.png?

hi, how can I change the mask location used in blend.lua?
e.g. I want the square mask to cover the top left of the image instead of the center.
Can you help?

attempt to call method 'cl'[...]

$ th run_texture_optimization.lua

torch/install/bin/luajit: /e/Ne/High-Res-Neural-Inpainting/mylib/mrf.lua:52: attempt to call method 'cl' (a nil value)

Dataset

Hello, I'd like to ask: what dataset do you use? Is it paired data (i.e., images with holes corresponding one-to-one with complete images)?

How to create the tensor mask?

Hi, thanks a lot for sharing your work, it's really interesting.
I'm trying to inpaint locations in the image other than the center, but I'm stuck at the blend step, since I need to create a tensor mask corresponding to the region I inpainted. Could you please tell me how to create the tensor mask?
I appreciate your help, thanks.
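
For what it's worth, a binary mask tensor for an arbitrary square region can be built with the same indexing style the repo uses elsewhere (a sketch; the 180x180 size matches this repo's demo hole, and the top-left location is just an example):

-- 3x512x512 mask that is 1 inside the hole and 0 elsewhere
local mask = torch.zeros(3, 512, 512)
mask[{ {}, {1, 180}, {1, 180} }] = 1
-- depending on the convention blend.lua expects, you may need the complement: mask:mul(-1):add(1)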

Reconstructed output obtained for a Paris image is not as clear

Hi,

The final reconstructed output for a Paris test image, as shown in the paper https://arxiv.org/pdf/1611.09969v2.pdf, is given below:

(attached image: capture)

However, the output that we received for the same test image is as below:
(attached image: result_0016)

I used the same models as mentioned : https://drive.google.com/open?id=0BxYj-YwDqh45XzZVTXF1dnJXY28

Could you please let me know the reason for the same?

Also, I tested the source code on a couple of images from ImageNet. Those were also not reconstructed as expected.
Please find the results as attached here:
(attached images: result_0026, result_0027, result_0028, result_0024)

Kindly let me know your opinion on the same.

Thank you.

Does the texture network contain a holistic content constraint?

Hi, I have read your new paper, which is clearer than the first.
However, going through it, I have a question.
Does the texture network contain a holistic content constraint? The content network, which has already been trained, uses an l2 loss and an adversarial loss. But there is another holistic content constraint in your joint loss function (Equation 1), which I don't understand. Is there an extra holistic content constraint in the texture network, different from the content loss in the content network? By the way, does Equation 1 belong only to the texture network?
Thank you very much.

I have problems with running the code with input images of 256*256.

I have several images whose sizes are 256*256. I don't want to resize them to 512*512 as they will get blurry. Then I came to realize that the following code is hard-coded specifically for 512*512.

target_image[{{},{117,232},{117,396}}]:copy(target_image[{{},{1,116},{117,396}}])
target_image[{{},{233,280},{117,396}}]:copy(target_image[{{},{69,116},{117,396}}])
target_image[{{},{281,396},{117,396}}]:copy(target_image[{{},{397,512},{117,396}}])

Would you please give the corresponding numbers for 256*256?

Hole size is 180x180?

Your paper mentioned that the holes for a 512x512 image are 256x256. However, the demo in this repository has hole sizes of 180x180. Is there a version where the holes are 256x256?

You should refresh your citation.

See http://openaccess.thecvf.com/CVPR2017.py

@InProceedings{Yang_2017_CVPR,
author = {Yang, Chao and Lu, Xin and Lin, Zhe and Shechtman, Eli and Wang, Oliver and Li, Hao},
title = {High-Resolution Image Inpainting Using Multi-Scale Neural Patch Synthesis},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
} 

Unknown image x in the paper

Hello, thanks for your work. In the content network, we use the holistic content loss Ec(h(x, R), f(x0)), but how do we get the unknown image x? What is the relationship between x and f(x0); is there a function relating them? And what is the difference between f(x0) and h(x, R)?
Thanks!

Error when running th run_texture_optimization.lua

Thank you for providing your code.
When I run th run_texture_optimization.lua, it fails with:
init.lua:724: bad argument #2 to 'new' (number has no integer representation)

Has anyone encountered the same problem? Thanks.

Handling Inpainting for Non-Center Regions

Hey!

Thanks for the code. I was trying to play around with this and had a query. I wanted to know how to inpaint images when the region of interest is not a square at the centre of the image but something quite arbitrary, for example inpainting a detected object or a person in any part of the image (especially non-center ones). Is there a way to do that here?

Thanks!

why are matrix values being changed here?

Hi, I noticed that in run_content_network.lua, these two lines have been used:

    output[output:gt(1)]=1
    output[output:lt(-1)]=-1

I was wondering what is the reason behind this?
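
Presumably the network output is an image normalized to [-1, 1] (typical for generators with tanh outputs), so these lines just clamp stray values back into the valid range; an equivalent one-liner would be:

    output:clamp(-1, 1)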

I really can't understand what the x in the paper is...

Hi, what exactly is the x in your paper? How does the iteration achieve super-resolution? The materials I found online don't match the figures you published at CVPR. How should this iteration be understood? Any advice would be greatly appreciated!

How to realize the local texture loss

I am amazed by the local texture loss technique. When I read your paper I tried to implement the local texture loss with CNNMRF, but failed. Could you point to it in your code? I will read it carefully.
Also, does your code need more GPU memory to train than context-encoder?

I'm sorry to bother you, but I have some questions

@leehomyc
Hi,

I'm very confused about the pretrained VGG-19 network, which seems to be designed for classification; how does it work in image inpainting? There are hundreds of excellent networks, so why choose this one? Is the way we use the pretrained model to feed the incomplete picture in as input, cropping it to 224, and get a coarse result as output?

I would very much appreciate it if you could help me, thank you!

Thanks,
