- 🔥 [01.13] Our scientific diagram analysis dataset M-Paper is now available on HuggingFace, containing 447k high-resolution diagram images with corresponding paragraph analyses.
- [10.10] Our paper UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model is accepted by EMNLP 2023.
- [07.10] The demo on ModelScope is available.
- [07.07] We release the technical report and evaluation set. The demo is coming soon.
- An OCR-free end-to-end multimodal large language model.
- Applicable to various document-related scenarios.
- Capable of free-form question answering and multi-round interaction.
Coming soon
- Online Demo on ModelScope.
- Online Demo on HuggingFace.
- Source code.
- Instruction Training Data.
The evaluation dataset DocLLM can be found in `./DocLLM`.
If you find this work useful, consider giving this repository a star and citing our papers as follows:
```
@misc{ye2023ureader,
      title={UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model},
      author={Jiabo Ye and Anwen Hu and Haiyang Xu and Qinghao Ye and Ming Yan and Guohai Xu and Chenliang Li and Junfeng Tian and Qi Qian and Ji Zhang and Qin Jin and Liang He and Xin Alex Lin and Fei Huang},
      year={2023},
      eprint={2310.05126},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
```
@misc{ye2023mplugdocowl,
      title={mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding},
      author={Jiabo Ye and Anwen Hu and Haiyang Xu and Qinghao Ye and Ming Yan and Yuhao Dan and Chenlin Zhao and Guohai Xu and Chenliang Li and Junfeng Tian and Qian Qi and Ji Zhang and Fei Huang},
      year={2023},
      eprint={2307.02499},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```