tencentarc / animesr
Codes for "AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos"
License: Other
Hello, I followed your code, but I randomly get an error in animesr.data.ffmpeg_anime_dataset.FFMPEGAnimeDataset.add_ffmpeg_conpression. I sometimes see "[h264 @ 0x1d1df20] decode_slice_header error" in the terminal when the program reaches "ffmpeg_video2img.stdin.close()".
In this case, 15 images are piped in to generate the video, but decoding can yield 14 or fewer images, causing problems. I don't know why this happens, and what bothers me most is that it arises randomly. Do you have any suggestions or ideas?
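In case it helps others hitting the same frame-count mismatch: one defensive workaround (a sketch of my own, not the repo's code) is to pad the decoded clip with its last frame whenever ffmpeg returns fewer frames than were fed in, so downstream indexing doesn't crash:

```python
def pad_decoded_frames(frames, expected_count):
    """Pad a decoded clip to `expected_count` frames by repeating the
    last successfully decoded frame (workaround for ffmpeg dropping
    trailing frames on a decode_slice_header error)."""
    if not frames:
        raise ValueError("decoder returned no frames")
    frames = list(frames)
    while len(frames) < expected_count:
        frames.append(frames[-1].copy())
    return frames[:expected_count]
```

This only hides the symptom, of course; the root cause of the truncated decode is still worth tracking down.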
I'm looking for good approaches to video super-resolution, and I found your proposal quite interesting. I think the way you train your model to learn a degradation space is really effective.
So I was wondering if that idea would work for real-world videos. I have a 4K 60fps video dataset, and I want to know whether using it would be enough to train a new model.
"If you are looking for portable executable files, you can try our realesr-animevideov3 model which shares the similar technology with AnimeSR."
Could you release the training config of realesr-animevideov3? I'm very interested in the differences between realesr-animevideov3 and the previous models.
Thanks.
What can cause this error when running training? It is raised from self.load_network(self.net_g, load_path, self.opt['path'].get('strict_load_g', True), param_key):

File "/mnt/ai_data/jw_chae/tmp/Super_Resolution/AnimeSR/env/lib/python3.8/site-packages/basicsr/models/base_model.py", line 295, in load_network
    load_net = load_net[param_key]
KeyError: 'params'
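For what it's worth, this KeyError usually means the checkpoint that load_path points at is a raw state dict rather than a BasicSR-style checkpoint that nests its weights under 'params' (or 'params_ema'). A tolerant lookup might look like this (a sketch assuming BasicSR's key convention, not the library's actual code):

```python
def resolve_param_key(load_net, preferred="params"):
    """Return the weight dict from a loaded checkpoint, tolerating both
    BasicSR-style checkpoints ('params'/'params_ema') and raw state dicts."""
    for key in (preferred, "params", "params_ema"):
        if key in load_net:
            return load_net[key]
    return load_net  # assume the checkpoint itself is the state dict
```

Alternatively, setting param_key (or strict_load_g) appropriately in the options yaml for the checkpoint you actually have may avoid the problem without code changes.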
Hello! I'm glad to get the AVC dataset. However, I found thousands of corrupted pictures in AVC-Train. At first, I thought something was wrong with my PC, so I downloaded the AVC dataset three times, but the same problem occurred every time. Is there a bug in the AVC-Train dataset?
When I do step 1 of training, where can I modify the settings used for the classic basic operators (blur, noise, etc.)?
Does this go in the options yaml file, or do I need to modify one of the Python files?
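While waiting for an authoritative answer: in BasicSR-style pipelines the classic operators are usually parameterized in the options yaml and applied in the dataset/feeding code. Purely to illustrate what such an operator chain does (a toy single-channel sketch, not AnimeSR's actual implementation; the parameter names here are made up):

```python
import numpy as np

def degrade(img, blur_sigma=2.0, noise_sigma=5.0, rng=None):
    """Toy blur + Gaussian-noise degradation for a single-channel uint8
    image (illustrative only; real pipelines use proper Gaussian kernels
    and add JPEG/video compression)."""
    rng = np.random.default_rng() if rng is None else rng
    # separable box blur as a stand-in for a Gaussian blur
    k = max(1, int(blur_sigma * 2) | 1)  # odd kernel size
    kernel = np.ones(k, dtype=np.float32) / k
    x = img.astype(np.float32)
    x = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, x)
    x = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, x)
    x = x + rng.normal(0.0, noise_sigma, img.shape)  # additive Gaussian noise
    return np.clip(x, 0, 255).astype(np.uint8)
```

The actual knobs (kernel ranges, noise levels, compression quality) would be whatever the step-1 training config exposes.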
The multiprocessing function of the code does not initiate unless an Nvidia graphics card is available. When rerouting to the CPU, it looks like there isn't a CPU implementation of the algorithm. Is this to be expected?
Hi guys, it's me again. I've really started liking your project, and I was wondering whether you plan to keep working on and updating it in the future, since I haven't found any roadmap or information about that.
Really love it, keep it going!
Hi,
Thanks for providing this amazing project. I tried training step 1, which trains for 300k iterations, with half of the provided AVC-Train dataset. Every setting in the train_animesr_step1_net_BasicOPonly.yml file was kept exactly the same. However, it appears that the provided pretrained_animesr_step1_net_model.pth achieves much better performance than the model I trained (the 300k iterations are not done yet; I tried net_g_170000.pth). Would you mind shedding some light on the possible mistakes I might have made? Thank you.
Hello! Thanks for your great contribution to the anime restoration domain. AnimeSR is a very interesting paper and the performance is really good. I wonder if there are any weights provided for the fine-tuned Real-ESRGAN, BSRGAN, and Real-BasicVSR mentioned in the paper? Thanks!
Hi, Thanks for your wonderful work.
In https://github.com/TencentARC/AnimeSR/blob/main/animesr/data/ffmpeg_anime_dataset.py
line 103: img_bytes = self.file_client.get(img_gt_path, 'gt') and
line 130: ffmpeg.input('pipe:', format='rawvideo', pix_fmt='rgb24', s=f'{width}x{height}',
I have checked the code: line 103 reads an image as 'bgr', but line 130 uses pix_fmt='rgb24'.
Is this a mistake?
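If the frames really are read as BGR but piped to ffmpeg as rgb24, the red and blue channels would be swapped in the round trip. The usual fix is a channel flip before writing to the pipe (a minimal illustration, assuming HxWx3 uint8 frames):

```python
import numpy as np

def bgr_to_rgb(frame):
    """Reverse the channel axis so a BGR frame matches ffmpeg's rgb24
    pixel format (same effect as cv2.cvtColor with COLOR_BGR2RGB)."""
    return np.ascontiguousarray(frame[:, :, ::-1])
```

np.ascontiguousarray matters here because a negative-stride view can't be serialized directly with tobytes() in the layout a raw pipe expects. That said, if both the encode and decode sides use the same (mislabeled) format, the swap may cancel out, which could be why it has gone unnoticed.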
Hello,
MSU Graphics & Media Lab Video Group has recently launched two new Super-Resolution Benchmarks.
If you are interested in participating, you can add your algorithm following the submission steps:
We would be grateful for your feedback on our work!
Hi,
In your paper, you mention that you fine-tuned BSRGAN, Real-ESRGAN, and RealBasicVSR with your dataset and shared the visual results of these fine-tuned models. Do you plan to share the fine-tuned weights of these models as well for comparison purposes?
Thanks.
Hello, I have read your work and was deeply inspired by it. Really great work! Could you please release the code & dataset soon?
Thanks a lot!