This is an implementation of Blind2Unblind: Self-Supervised Image Denoising with Visible Blind Spots for SBER Robotics Lab.
article: Blind2Unblind
@InProceedings{Wang_2022_CVPR,
author = {Wang, Zejin and Liu, Jiazheng and Li, Guoqing and Han, Hua},
title = {Blind2Unblind: Self-Supervised Image Denoising With Visible Blind Spots},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {2027-2036}
}
The original code is available here: github
The model was built with Python 3.8.5 and PyTorch 1.7.1 in an Ubuntu 22.04 environment.
Please put your training dataset under the path: ./b2u_sber_implemetation/data/train.
Please put your validation dataset under the path: ./b2u_sber_implemetation/data/test.
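Before training or testing, it can help to confirm that the directories above exist. A minimal sketch, assuming the repository root is the current working directory; the `check_layout` helper is illustrative and not part of the original code:

```python
from pathlib import Path

# Illustrative helper (not part of the original repo): verify that the
# data directories described above exist before training/testing.
ROOT = Path("./b2u_sber_implemetation")

EXPECTED = [
    ROOT / "data" / "train",  # training images
    ROOT / "data" / "test",   # validation images
    ROOT / "test",            # images for inference
]

def check_layout(dirs=EXPECTED):
    """Return the subset of `dirs` that is missing (empty list = layout OK)."""
    return [str(d) for d in dirs if not d.is_dir()]
```

Run `check_layout()` from the repository root; an empty list means the layout is in place.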
You can find pre-trained models here: ./b2u_sber_implemetation/pretrained_models
Models were trained on the G-209, Crystal_focus_0_dose_180, and G-146 datasets.
# For processing noisier datasets, first use the model trained on G-209
./pretrained_models/b2u_first.pth
# Then use the model trained a second time on G-209 images denoised by the first model
./pretrained_models/b2u_second.pth
# For less noisy images, use the model trained on Crystal_focus_0_dose_180
./pretrained_models/b2u_crystal_first.pth
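The two-stage scheme above (a first-pass model, then a refiner trained on its denoised outputs) can be sketched as follows. This is an illustrative flow, not the repository's actual inference code; the callables stand in for the loaded B2U networks from `b2u_first.pth` and `b2u_second.pth`:

```python
import numpy as np

# Illustrative sketch of the two-stage flow: the first model denoises the
# raw image and the second model refines its output. Plain callables stand
# in for the actual PyTorch networks loaded from the .pth checkpoints.

def denoise_two_pass(noisy, first_model, second_model):
    """Run the first-stage denoiser, then refine with the second stage."""
    intermediate = first_model(noisy)   # model trained on G-209
    return second_model(intermediate)   # model trained on denoised G-209

# Stand-in usage; replace the lambdas with the loaded networks.
img = np.random.rand(64, 64).astype(np.float32)
out = denoise_two_pass(img, lambda x: x, lambda x: x)
```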
- To train your own model, please use SBER_train
Please put your test data in the folder: ./b2u_sber_implemetation/test
- To test the model on images up to 768x1024, use SBER_test_small_images
- To test the model on large-resolution images, use SBER_test_large_images
In this Jupyter notebook you can set:
- your image proportions,
- crop proportions,
- the margin value for cropping and concatenating without visible seams.
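The crop/margin idea behind seamless large-image processing can be sketched as below: split the image into overlapping tiles, process each tile, and keep only the central region so tile borders never show. `tiled_process` and its parameters are hypothetical names, assuming a single-channel image; this is not the notebook's exact code:

```python
import numpy as np

def tiled_process(img, process, tile=256, margin=16):
    """Process `img` in overlapping tiles of size `tile`, trimming `margin`
    pixels of context from each processed tile so the joins are invisible."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    step = tile - 2 * margin  # stride between tile origins
    for y0 in range(0, h, step):
        for x0 in range(0, w, step):
            # Crop with `margin` pixels of context, clamped at the borders.
            ys, xs = max(y0 - margin, 0), max(x0 - margin, 0)
            ye, xe = min(y0 + step + margin, h), min(x0 + step + margin, w)
            patch = process(img[ys:ye, xs:xe])
            # Keep only the central (margin-free) part of the result.
            oy, ox = y0 - ys, x0 - xs
            yh, xw = min(step, h - y0), min(step, w - x0)
            out[y0:y0 + yh, x0:x0 + xw] = patch[oy:oy + yh, ox:ox + xw]
    return out
```

With an identity `process`, the stitched output reproduces the input exactly, which is a quick way to verify that the margins are being trimmed correctly before plugging in the denoiser.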