OFA-Compress is a unified framework that provides fine-tuning, distillation, and inference capabilities for OFA models in the Hugging Face version, and is committed to promoting the compression of large models into lightweight ones.
Awesome job! The Hugging Face version of OFA looks more concise. Can this framework support multi-task pre-training like this repo?
I see that this code file contains many different tasks. Could you provide more details about pre-training (such as data preparation and the submission script)?