vr-baseline's People

Contributors: caiyuanhao1998, linjing7

vr-baseline's Issues

Questions about testing on GoPro dataset

Hello,

I followed your instructions and re-organized the GoPro dataset, for example: ./GoPro/test/blur/GOPR0384_11_00/000000.png.
I ran the test code with "python demo/restoration_video_demo.py ./configs/FGST_deblur_gopro.py FGST.pth ./data/GoPro results".
However, an error occurs: "No such file or directory: './data/GoPro/00000000.png'".

It seems the loader does not pick up the names of the sequence folders inside ./data/GoPro. What should I do to run the test successfully?
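
A hedged workaround sketch (an assumption, not the authors' intended usage): if the demo script expects a directory that directly contains the frames of one clip, it can be invoked once per GoPro test sequence with a small driver like the one below. All paths mirror the ones quoted in this issue.

# Run the demo once per sequence folder; assumes demo/restoration_video_demo.py
# takes <config> <checkpoint> <frame_dir> <output_dir> as in the command above.
import subprocess
from pathlib import Path

blur_root = Path('./data/GoPro/test/blur')
out_root = Path('./results')

for seq_dir in sorted(p for p in blur_root.iterdir() if p.is_dir()):
    out_dir = out_root / seq_dir.name
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run([
        'python', 'demo/restoration_video_demo.py',
        './configs/FGST_deblur_gopro.py', 'FGST.pth',
        str(seq_dir), str(out_dir),
    ], check=True)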

About Fig.6

Hello @linjing7:
Thank you for your excellent work. How did you create the feature-map visualization in Figure 6? It looks great!

Can the model be used for real-time video deblurring?

Hello, may I ask whether the model can be applied to real-time video deblurring, i.e. at 25 FPS or above? If the current model is too complex for that, is it possible to adjust the model parameters and try? Could you give some guidance or suggestions?
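
A hedged way to check this empirically (generic PyTorch timing; the model argument is a placeholder rather than a VR-Baseline API): measure end-to-end throughput on a dummy clip at the target resolution and compare against the 25 FPS requirement.

# Rough FPS measurement sketch; `model` is any (B, T, C, H, W) -> output module.
import time
import torch

def measure_fps(model, t=10, h=256, w=256, warmup=3, runs=10):
    clip = torch.randn(1, t, 3, h, w).cuda()
    model = model.cuda().eval()
    with torch.no_grad():
        for _ in range(warmup):
            model(clip)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(runs):
            model(clip)
        torch.cuda.synchronize()
    # frames processed per second of wall-clock time
    return runs * t / (time.time() - start)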

About the visual hook configuration under configs

The original code does not set visual_config; it defaults to None.

Below is a template I found, for reference only (from the inpainting configs):
visual_config = dict(
    type='VisualizationHook',
    output_dir='visual',
    interval=10000,
    res_name_list=[
        'gt_img', 'masked_img', 'stage1_fake_res', 'stage1_fake_img',
        'stage2_fake_res', 'stage2_fake_img', 'fake_gt_local'
    ],
)

from NJUST
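
A hedged follow-up note (an assumption based on general mmediting behaviour, not taken from this repo's configs): for restoration models the result keys differ from the inpainting ones above, so res_name_list has to be adapted to whatever keys the restorer actually returns. A minimal sketch, assuming the usual 'lq'/'gt'/'output' keys:

# Sketch of a visual_config for a restorer; the key names are assumptions and
# must match the keys returned by the model's forward_test.
visual_config = dict(
    type='VisualizationHook',
    output_dir='visual',
    interval=10000,
    res_name_list=['lq', 'gt', 'output'],
)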

Pre-trained weights of the GoPro and DVD datasets

Hello,

Do you plan to release the pre-trained weights for the GoPro and DVD datasets?
I want to reproduce your GoPro and DVD results, but you only released the weights called 'FGST.pth'. Can I reproduce the GoPro and DVD results with 'FGST.pth'?

Welcome update to OpenMMLab 2.0

I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 1.0 and 2.0 branches of each repo:

Repo              OpenMMLab 1.0 branch    OpenMMLab 2.0 branch
MMEngine          -                       0.x
MMCV              1.x                     2.x
MMDetection       0.x, 1.x, 2.x           3.x
MMAction2         0.x                     1.x
MMClassification  0.x                     1.x
MMSegmentation    0.x                     1.x
MMDetection3D     0.x                     1.x
MMEditing         0.x                     1.x
MMPose            0.x                     1.x
MMDeploy          0.x                     1.x
MMTracking        0.x                     1.x
MMOCR             0.x                     1.x
MMRazor           0.x                     1.x
MMSelfSup         0.x                     1.x
MMRotate          1.x                     1.x
MMYOLO            -                       0.x

Attention: please create a new virtual environment for OpenMMLab 2.0.

How to prepare the GoPro dataset?

Hi. I found that the original dataset downloaded from the given link doesn't match the paths and frame indices used in VR_Baseline/configs/FGST_deblur_gopro.py.

It raises the error: FileNotFoundError: [Errno 2] No such file or directory: 'data/GoPro/train/blur/GOPR0868_11_01/00000044.png'.

What are the specific steps to prepare the required data? Looking forward to your reply. Thanks!
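
A hedged data-preparation sketch (an assumption inferred from the FileNotFoundError above, not the authors' official procedure): the config seems to expect 8-digit, zero-based frame names such as 00000000.png inside data/GoPro/<split>/blur/<sequence>/ and the matching ground-truth folder. A renaming pass along these lines would map the original GoPro frame names onto that convention.

# Rename GoPro frames to 8-digit zero-based indices, e.g. 00000044.png.
# The 'sharp' ground-truth folder name is an assumption; adjust to your layout.
from pathlib import Path

root = Path('data/GoPro')
for split in ('train', 'test'):
    for kind in ('blur', 'sharp'):
        base = root / split / kind
        if not base.is_dir():
            continue
        for seq in sorted(p for p in base.iterdir() if p.is_dir()):
            for idx, frame in enumerate(sorted(seq.glob('*.png'))):
                frame.rename(seq / f'{idx:08d}.png')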

Some questions about the code

Hello, and thank you for open-sourcing your work.
I have a few questions.
In the FGSW-MSA part of the code, only the optical-flow offsets between the current frame and its immediate previous/next frames are used as input. Have you tried experiments with 2r+1 frames for r ≥ 2? If so, how were the results?
Also, in lines 125-145 of mmedit/model/backbones/sr_backbones/FGST_util.py, key elements are retrieved from the previous and next frames. What is the meaning of this operation?
I would be very grateful for a reply. Thank you.
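
For context, a rough sketch of what flow-guided key retrieval generally looks like (a generic illustration, not the actual code at lines 125-145 of FGST_util.py): features of a neighbouring frame are sampled at positions shifted by the estimated optical flow, so that attention keys/values come from spatially corresponding locations in the current frame.

# Generic flow-guided sampling sketch (illustrative only).
import torch
import torch.nn.functional as F

def flow_guided_sample(neighbor_feat, flow):
    """neighbor_feat: (B, C, H, W); flow: (B, 2, H, W) in pixels (x, y)."""
    h, w = neighbor_feat.shape[-2:]
    ys, xs = torch.meshgrid(
        torch.arange(h, device=flow.device, dtype=flow.dtype),
        torch.arange(w, device=flow.device, dtype=flow.dtype),
        indexing='ij')
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow      # (B, 2, H, W)
    # normalize sampling coordinates to [-1, 1] for grid_sample
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((grid_x, grid_y), dim=-1)            # (B, H, W, 2)
    return F.grid_sample(neighbor_feat, norm_grid, align_corners=True)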

Inference results are completely black

Command: python demo/restoration_video_demo.py ./configs/FGST_deblur_gopro_test.py ./weights/FGST_gopro.pth ./data/2.mp4 ./output/

The result is as follows:
[attached output frame 000000, entirely black]

RuntimeError: CUDA out of memory.

Hi linjing, thank you very much for the open-source code. Unfortunately, I ran into a problem while running it. I used the pre-trained model to test on my own data with the command python demo/restoration_video_demo.py configs/FGST_deblur_dvd_test.py pretrained_models/FGST/FGST_dvd.pth data/test.mp4 data/, but I got the following error message:
RuntimeError: CUDA out of memory. Tried to allocate 2.97 GiB (GPU 0; 11.77 GiB total capacity; 8.25 GiB already allocated; 420.50 MiB free; 9.19 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
My GPU is a single Nvidia GeForce RTX 3080 Ti with 12 GB of memory. Can I run this test on this GPU?
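
A hedged note on memory: the demo script appears to accept a --max-seq-len option (it is used in another issue below), which limits how many frames are fed to the model at once and is the usual way to cap peak GPU memory. As a generic illustration of the same idea, the sketch below runs a model window-by-window over the clip; run_model is a hypothetical placeholder, not an API from VR-Baseline.

# Chunked inference to bound peak GPU memory; processes the clip in windows.
import torch

def chunked_inference(run_model, frames, max_seq_len=8):
    """frames: (T, C, H, W) tensor on CPU; run_model maps (1, t, C, H, W) to output."""
    outputs = []
    with torch.no_grad():
        for start in range(0, frames.size(0), max_seq_len):
            clip = frames[start:start + max_seq_len].unsqueeze(0).cuda()
            out = run_model(clip).squeeze(0).cpu()
            outputs.append(out)
            del clip, out
            torch.cuda.empty_cache()
    return torch.cat(outputs, dim=0)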

About S2SVR model

Thanks for your excellent work. I would like to know whether the S2SVR model is provided. After configuring the environment following your guidance and running the command bash tools/dist_train.sh configs/S2SVR_sr_reds4.py 8,

I met a problem: KeyError: "BasicVSR: 'S2SVR is not in the model registry'"

The logs are as follows:

Traceback (most recent call last):
  File "/home/zhangyang/envs/anaconda3/envs/video/lib/python3.7/site-packages/mmcv/utils/registry.py", line 52, in build_from_cfg
    return obj_cls(**args)
  File "/mnt/sdb/xiaorunyu/xry/video_SR/S2S/VR-Baseline/mmedit/models/restorers/basicvsr.py", line 41, in __init__
    pretrained)
  File "/mnt/sdb/xiaorunyu/xry/video_SR/S2S/VR-Baseline/mmedit/models/restorers/basic_restorer.py", line 48, in __init__
    self.generator = build_backbone(generator)
  File "/mnt/sdb/xiaorunyu/xry/video_SR/S2S/VR-Baseline/mmedit/models/builder.py", line 31, in build_backbone
    return build(cfg, BACKBONES)
  File "/mnt/sdb/xiaorunyu/xry/video_SR/S2S/VR-Baseline/mmedit/models/builder.py", line 22, in build
    return build_from_cfg(cfg, registry, default_args)
  File "/home/zhangyang/envs/anaconda3/envs/video/lib/python3.7/site-packages/mmcv/utils/registry.py", line 45, in build_from_cfg
    f'{obj_type} is not in the {registry.name} registry')
KeyError: 'S2SVR is not in the model registry'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tools/train.py", line 169, in <module>
    main()
  File "tools/train.py", line 133, in main
    cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
  File "/mnt/sdb/xiaorunyu/xry/video_SR/S2S/VR-Baseline/mmedit/models/builder.py", line 60, in build_model
    return build(cfg, MODELS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
  File "/mnt/sdb/xiaorunyu/xry/video_SR/S2S/VR-Baseline/mmedit/models/builder.py", line 22, in build
    return build_from_cfg(cfg, registry, default_args)
  File "/home/zhangyang/envs/anaconda3/envs/video/lib/python3.7/site-packages/mmcv/utils/registry.py", line 55, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
KeyError: "BasicVSR: 'S2SVR is not in the model registry'"
Killing subprocess 922430
Traceback (most recent call last):
  File "/home/zhangyang/envs/anaconda3/envs/video/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/zhangyang/envs/anaconda3/envs/video/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/zhangyang/envs/anaconda3/envs/video/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module>
    main()
  File "/home/zhangyang/envs/anaconda3/envs/video/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
    sigkill_handler(signal.SIGTERM, None)  # not coming back
  File "/home/zhangyang/envs/anaconda3/envs/video/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
    raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/zhangyang/envs/anaconda3/envs/video/bin/python', '-u', 'tools/train.py', '--local_rank=0', 'configs/S2SVR_sr_reds4.py', '--seed', '0', '--launcher', 'pytorch']' returned non-zero exit status 1.

Could you give me some instructions? Thank you.
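
A hedged note on the registry error above (an assumption about the likely cause, not an official fix): in mmcv-style codebases a backbone is only found by build_from_cfg if its class is decorated with the registry and its module is actually imported, typically via the package __init__.py (here, mmedit/models/backbones/sr_backbones/__init__.py). A minimal registration sketch:

# The name registered here must match type='S2SVR' in the config, and the file
# must be imported somewhere (e.g. the sr_backbones __init__.py) at startup.
import torch.nn as nn
from mmedit.models.registry import BACKBONES

@BACKBONES.register_module()
class S2SVR(nn.Module):
    def __init__(self, mid_channels=64):
        super().__init__()
        self.mid_channels = mid_channels

    def forward(self, lrs):
        # placeholder forward; the real model lives in the repo
        return lrs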

How can the optical flow network be updated?

Hello, thanks for your code and models!

Since nearest sampling is performed in the alignment stage, how can the parameters of the optical flow network be updated?
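
A hedged illustration of the point behind this question (generic PyTorch behaviour, not a statement about how this repo actually trains its flow network): with nearest-neighbour sampling, grid_sample sends a zero gradient back to the sampling grid, so no gradient reaches the flow through the sampling coordinates themselves, whereas bilinear sampling does pass a gradient.

# Quick check of the gradient that grid_sample sends back to the sampling grid.
import torch
import torch.nn.functional as F

feat = torch.randn(1, 3, 8, 8)

for mode in ('nearest', 'bilinear'):
    grid = (torch.rand(1, 8, 8, 2) * 2 - 1).requires_grad_(True)  # in [-1, 1]
    out = F.grid_sample(feat, grid, mode=mode, align_corners=True)
    out.sum().backward()
    g = grid.grad
    print(mode, 'grid grad norm:', None if g is None else g.norm().item())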

IndexError: list index out of range

Hello, thank you for your excellent work! When I tried to test the pre-trained model with the command "python demo/restoration_video_demo.py configs/FGST_deblur_dvd_test.py pretrained_models/FGST_dvd.pth data/test.mp4 res/test.mp4 --max-seq-len=1", I encountered the following issue. Even after changing the test video, the problem persists. I would like to know where the problem lies and would appreciate your advice. Thank you very much.
[error screenshot attached]
