Subscribe to our mailing list: https://groups.google.com/u/2/g/bodymaps
We developed a suite of pre-trained 3D models, named SuPreM, that combines the best of large-scale datasets and per-voxel annotations, demonstrating transferability across a range of 3D medical imaging tasks.
How Well Do Supervised 3D Models Transfer to Medical Imaging Tasks?
Wenxuan Li, Alan Yuille, and Zongwei Zhou*
Johns Hopkins University
International Conference on Learning Representations (ICLR) 2024 (oral; top 1.2%)
paper | code | slides | talk
Transitioning to Fully-Supervised Pre-Training with Large-Scale Radiology ImageNet for Improved AI Transferability in Three-Dimensional Medical Segmentation
Wenxuan Li1, Junfei Xiao1, Jie Liu2, Yucheng Tang3, Alan Yuille1, and Zongwei Zhou1,*
1Johns Hopkins University
2City University of Hong Kong
3NVIDIA
Radiological Society of North America (RSNA) 2023
abstract | code | slides | talk
- We maintain a document of Frequently Asked Questions.
- We have reviewed 3D medical pre-training in Awesome Medical Pre-Training.
The release of AbdomenAtlas 1.0 can be found at https://github.com/MrGiovanni/AbdomenAtlas
AbdomenAtlas 1.1 is an extensive dataset of 9,262 CT volumes with per-voxel annotation of 25 organs and pseudo annotations for seven types of tumors, enabling us to finally perform supervised pre-training of AI models at scale. Based on AbdomenAtlas 1.1, we also provide a suite of pre-trained models comprising several widely recognized AI models.
Preliminary benchmarks show that supervised pre-training is the preferred choice in terms of both performance and efficiency, compared with self-supervised pre-training.
We anticipate that the release of large, annotated datasets (AbdomenAtlas 1.1) and the suite of pre-trained models (SuPreM) will bolster collaborative endeavors in establishing Foundation Datasets and Foundation Models for the broader applications of 3D volumetric medical image analysis.
The following is a list of the model backbones supported in our collection. Select the appropriate family of backbones, click to expand the table, download a specific backbone and its pre-trained weights (name and download), and save the weights to ./pretrained_weights/. More backbones will be added over time. If you would like us to pre-train a backbone on AbdomenAtlas 1.1 (9,262 annotated CT volumes), please suggest it in this channel.
Swin UNETR
| name | params | pre-trained data | resources | download |
|---|---|---|---|---|
| Tang et al. | 62.19M | 5050 CT | | weights |
| Jose Valanaras et al. | 62.19M | 50000 CT/MRI | | weights |
| Universal Model | 62.19M | 2100 CT | | weights |
| SuPreM | 62.19M | 2100 CT | ours ⭐ | weights |
U-Net
SegResNet
| name | params | pre-trained data | resources | download |
|---|---|---|---|---|
| SuPreM | 470.13M | 2100 CT | ours ⭐ | weights |
Examples of fine-tuning our SuPreM on other downstream medical tasks are provided in this repository.
task | dataset | document |
---|---|---|
organ, muscle, vertebrae, cardiac segmentation | TotalSegmentator | README |
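Fine-tuning on a downstream task typically starts by loading the pre-trained backbone weights and re-initializing any layers whose shapes differ (e.g. a segmentation head with a different number of classes). Below is a minimal PyTorch sketch of this step; the checkpoint key `"net"` and the file path are hypothetical, so consult the repository's fine-tuning scripts for the exact loading procedure.

```python
import torch


def load_pretrained(model: torch.nn.Module, path: str) -> torch.nn.Module:
    """Load pre-trained weights, keeping only tensors whose shapes match.

    Mismatched tensors (e.g. a task-specific output head) retain their
    fresh initialization and are trained from scratch during fine-tuning.
    """
    ckpt = torch.load(path, map_location="cpu")
    # Some checkpoints nest the state dict under a key; "net" is a guess here.
    state = ckpt.get("net", ckpt)
    model_state = model.state_dict()
    filtered = {
        k: v
        for k, v in state.items()
        if k in model_state and v.shape == model_state[k].shape
    }
    # strict=False tolerates the keys we filtered out above.
    model.load_state_dict(filtered, strict=False)
    return model
```

Usage would look like `load_pretrained(model, "./pretrained_weights/suprem_swinunetr.pth")` before attaching the optimizer; filtering by shape lets the same helper work whether the downstream label set has 25 classes or a different count.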
This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research and the McGovern Foundation. The codebase is modified from NVIDIA MONAI. Paper content is covered by patents pending.