codeslake / RefVSR
[CVPR 2022] Official PyTorch Implementation for "Reference-based Video Super-Resolution Using Multi-Camera Video Triplets"
License: GNU Affero General Public License v3.0
How do I test my own videos? Do I need to train first?
I only have a low-res video.
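A general note on this question: since pre-trained models are provided (see the evaluation question further down), training from scratch should not be necessary; the usual first step is turning the video into per-frame images. Below is a minimal sketch using OpenCV; the output layout and file naming are assumptions, so check the repository's evaluation/demo scripts for the structure they actually expect.

```python
# Sketch: split a low-res video into per-frame PNGs with OpenCV.
# The output directory layout is an assumption; check the repo's
# eval/demo scripts for the structure they actually expect.
import os
import cv2

def video_to_frames(video_path: str, out_dir: str) -> int:
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx:08d}.png"), frame)
        idx += 1
    cap.release()
    return idx

# e.g. video_to_frames("my_lowres.mp4", "input_frames/LR_UW")  # paths are hypothetical
```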
Hello there,
I'm a beginner programmer trying to reproduce your results. However, I ran into some difficulties which I think come from the dataset.
Here is my directory organization after extracting the .zip:
RefVSR
├── evaluation
├── install
├── ckpt
├── ...
├── DATA_OFFSET
├── source
├── target
├── RealMCVSR.z01
├── ...
└── RealMCVSR.z21
I finally ran this script: /scripts_eval/eval_RefVSR_MFID_8K.sh, with the following configuration:
And here is the error message:
Could you help, or point me in another direction to resolve my bug?
Yours faithfully,
Maxx
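The configuration and error message referenced above are not reproduced here, so only a general hypothesis is possible: one common culprit with archives like this is an incomplete multi-part download. Every RealMCVSR.z01–z21 part (and, presumably, a final RealMCVSR.zip segment) must sit in the same directory before extraction. A minimal sanity-check sketch, with the part count and names taken from the listing above:

```python
# Sketch: sanity-check the split-archive parts before extraction.
# Part names follow the listing above (RealMCVSR.z01 ... RealMCVSR.z21);
# the presence of a final RealMCVSR.zip segment is an assumption.
import os

def check_parts(root: str, stem: str = "RealMCVSR", n_parts: int = 21) -> None:
    names = [f"{stem}.z{i:02d}" for i in range(1, n_parts + 1)] + [f"{stem}.zip"]
    for name in names:
        path = os.path.join(root, name)
        if not os.path.exists(path):
            print(f"missing: {name}")
        else:
            print(f"{name}: {os.path.getsize(path) / 2**30:.2f} GiB")

check_parts(".")
```

A part that is missing or visibly smaller than its siblings usually points to a truncated download rather than a problem in the evaluation script itself.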
Hello,
MSU Graphics & Media Lab Video Group has recently launched two new Super-Resolution Benchmarks.
If you are interested in participating, you can add your algorithm following the submission steps:
We would be grateful for your feedback on our work!
I have downloaded the pre-trained models as well as the dataset from the given links and tried to run the evaluation scripts (I didn't modify any hyperparameters except for the dataset path and log path). However, there's a large gap between my evaluation results and those in the paper.
So I would like to ask: what could be the problem, and what should I try in order to get the results reported in the paper?
Thank you!
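One frequent cause of gaps like this in super-resolution evaluation (a general observation, not a diagnosis of this repository) is a mismatch in the PSNR convention: measuring on RGB output when the reported numbers use the luminance (Y) channel, or vice versa. A quick sketch for checking both conventions on the same frame pair:

```python
# Sketch: PSNR on RGB vs. on the Y (luminance) channel.
# Which convention the paper's numbers use is not stated here; this
# only helps check which convention your own evaluation matches.
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

def to_y(img: np.ndarray) -> np.ndarray:
    # ITU-R BT.601 luma from an (H, W, 3) RGB image
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# out, gt: (H, W, 3) uint8 arrays of the restored frame and ground truth
# print(psnr(out, gt), psnr(to_y(out), to_y(gt)))
```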
Hello, do the low-resolution images and reference images have to be the same size during inference? If my reference image is not the same size as the low-resolution image, how can I run inference?
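No official answer is recorded here, but the get_patch discussion further down shows both inputs at (480, 270) at test time, which suggests the model expects matching spatial sizes. One possible workaround (an assumption, not the authors' recommendation) is to resample the reference to the LR frame's size before inference:

```python
# Sketch of a possible workaround: bring the reference frame to the LR
# frame's spatial size before inference. Not the repository's own API.
import torch
import torch.nn.functional as F

def match_size(ref: torch.Tensor, lr: torch.Tensor) -> torch.Tensor:
    # ref, lr: (N, C, H, W) float tensors in [0, 1]
    if ref.shape[-2:] == lr.shape[-2:]:
        return ref
    return F.interpolate(ref, size=lr.shape[-2:], mode="bicubic", align_corners=False)
```

Note that heavy resampling may degrade the high-frequency detail that makes the reference useful in the first place, so cropping the reference to the matching field of view may be preferable when possible.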
Hi, have you released the iOS app you made to concurrently record videos of three cameras on iPhone 12 Pro Max?
I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/A9dCpjHPfE or add me on WeChat (ID: van-sin) and I will invite you to the OpenMMLab WeChat group.
Here are the OpenMMLab 2.0 repos branches:
| | OpenMMLab 1.0 branch | OpenMMLab 2.0 branch |
| --- | --- | --- |
| MMEngine | | 0.x |
| MMCV | 1.x | 2.x |
| MMDetection | 0.x, 1.x, 2.x | 3.x |
| MMAction2 | 0.x | 1.x |
| MMClassification | 0.x | 1.x |
| MMSegmentation | 0.x | 1.x |
| MMDetection3D | 0.x | 1.x |
| MMEditing | 0.x | 1.x |
| MMPose | 0.x | 1.x |
| MMDeploy | 0.x | 1.x |
| MMTracking | 0.x | 1.x |
| MMOCR | 0.x | 1.x |
| MMRazor | 0.x | 1.x |
| MMSelfSup | 0.x | 1.x |
| MMRotate | 0.x | 1.x |
| MMYOLO | | 0.x |
Attention: please create a new virtual environment for OpenMMLab 2.0.
Thank you for your great work.
I'm going to reproduce the results in Table 2, but I'm confused about some of the configurations of the other models.
Thank you
Do I just need to download a RealMCVSR.zip file, or do I need to download all five files?
Thanks for your great work. From your results in Table 2, it seems that the model using the L1 loss (Ours-l1) outperforms the model using the proposed two-stage training strategy (Ours) by over 3 dB, and the training code appears to use a one-stage training process.
So:
Why does the model "Ours-l1" perform better than the model "Ours"? It seems that you don't have the ground truth of the real-world HR_UW.
How does the one-stage training process work?
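Two general points may help frame this question. First, a model trained purely with an L1 loss tends to score higher on PSNR simply because PSNR rewards exactly what L1 optimizes, so a PSNR gap alone does not imply better visual quality. Second, for readers unfamiliar with the terminology, a generic two-stage schedule (a schematic sketch under assumed placeholders, not this paper's actual objectives) pre-trains with a supervised fidelity loss where ground truth exists, then adapts on real data, where no real-world HR ground truth is available, using losses that do not require it:

```python
# Schematic two-stage schedule; datasets, losses, and hyperparameters are
# placeholders, not the paper's actual objectives or settings.
import itertools
import torch
import torch.nn.functional as F

def train_two_stage(model, synth_loader, real_loader, ref_loss,
                    stage1_steps=1000, stage2_steps=1000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Stage 1: supervised pre-training where HR ground truth exists
    # (e.g. synthetically downsampled pairs), plain L1 fidelity loss.
    for _, (lr, ref, hr) in zip(range(stage1_steps), itertools.cycle(synth_loader)):
        loss = F.l1_loss(model(lr, ref), hr)
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: adaptation on real data with no HR ground truth, supervised
    # only through a reference-based loss (placeholder callable).
    for _, (lr, ref) in zip(range(stage2_steps), itertools.cycle(real_loader)):
        loss = ref_loss(model(lr, ref), ref)
        opt.zero_grad(); loss.backward(); opt.step()
```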
I see it was tested under Ubuntu, but is it possible to run it on Windows?
Hi, I tried to download the dataset, but both downloads fail partway through with an unknown server error. Could you please check the download links and make sure the files can be downloaded properly?
First of all, thanks for your great work. Your paper was interesting, and the results were great!
I was trying to use your code, especially the get_patch method in datasets.py, but faced one problem.
At train time (cropped):
LR_UW size: (64, 64)
LR_REF_W size: (128, 128)
At test time:
LR_UW size: (480, 270)
LR_REF_W size: (480, 270)
I understand that this is because of the cropping done in get_patch. For the W reference images, I found that your code takes a patch twice as large as for the UW images. However, my concern is why the ratio between the reference image and the LR image differs between train time and test time. More precisely: I'm using your default config, and flag_HD_in is false. Thank you :)
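To make the ratio question concrete, here is a hypothetical re-implementation of a get_patch-style paired crop (not the repository's actual code, and ignoring the cross-camera field-of-view alignment the real code must handle). At train time the W reference patch is cut with twice the side length of the UW patch, reproducing the (64, 64) vs (128, 128) pair above, while at test time full frames are passed through uncropped, so the 2:1 ratio disappears:

```python
# Hypothetical sketch of a paired random crop where the reference patch is
# twice the LR patch size; not the repository's actual get_patch.
import random
import torch

def get_patch(lr_uw: torch.Tensor, lr_ref_w: torch.Tensor, patch_size: int = 64):
    # lr_uw, lr_ref_w: (C, H, W) tensors of the same spatial size;
    # requires H, W >= 2 * patch_size.
    _, h, w = lr_uw.shape
    top = random.randint(0, h - 2 * patch_size)
    left = random.randint(0, w - 2 * patch_size)
    uw_patch = lr_uw[:, top : top + patch_size, left : left + patch_size]
    # Reference patch: same top-left corner, twice the side length, giving
    # the (64, 64) vs (128, 128) train-time pair reported above.
    ref_patch = lr_ref_w[:, top : top + 2 * patch_size, left : left + 2 * patch_size]
    return uw_patch, ref_patch
```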