In this work (Li et al., 2024), we study whether language models (LMs) and vision models (VMs) learn similar representations of the world, despite being trained on independent data from independent modalities. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT, and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of VMs.
You can clone this repository by running:
git clone [email protected]:jiaangli/VLCA.git
cd VLCA
git submodule update --init MUSE
1. Create a fresh conda environment and install all dependencies.
conda create -n vlca python=3.11
conda activate vlca
pip install -r requirements.txt
2. Datasets

As part of this work, we release the following datasets:
Dataset | Dataset HF Alias |
---|---|
Common Words 79K | jaagli/common-words-79k |
ImageNet with Unique Labels (coming soon) | jaagli/imagenet-ul |
English CLDI (coming soon) | jaagli/en-cldi |
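These datasets can be loaded directly from the Hugging Face Hub with the `datasets` library. The sketch below is only illustrative; it uses the default configuration and makes no assumptions about split or column names, so inspect the returned object before use:

```python
from datasets import load_dataset

# Load the Common Words 79K dataset by its HF alias from the table above.
# Splits and column names may differ between the released datasets, so
# print the returned DatasetDict to see what is actually available.
common_words = load_dataset("jaagli/common-words-79k")
print(common_words)
```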
Check the available model configurations in `config.py` under `MODEL_CONFIGS`, the dataset saving paths under `DataConfig`, the runtime parameters under `MuseConfig`, and the experiment types under `ExperimentType`.
Set the corresponding paths in all the files in the `conf` folder.
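For a quick overview of what is registered, the configuration objects can also be inspected from Python. This is only an illustrative sketch: `MODEL_CONFIGS` and `ExperimentType` are the names used in `config.py`, but the assumption that the former is a mapping and the latter an `Enum` is ours:

```python
# Illustrative sketch: assumes MODEL_CONFIGS behaves like a mapping of model
# names to configs, and ExperimentType is an Enum, as their names suggest.
from config import MODEL_CONFIGS, ExperimentType

print("Registered models:", list(MODEL_CONFIGS))
print("Experiment types:", [e.name for e in ExperimentType])
```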
Example: sequentially run the GPT-2 and OPT-125m models on the ImageNet-21K dataset:
python main.py \
--multirun \
+model=gpt2,opt-125m \
+dataset=imagenet \
muse.exp_type=BASE
Or run only the GPT-2 model on the EN-CLDI dataset:
python main.py \
+model=gpt2 \
+dataset=cldi \
muse.exp_type=BASE
Our results show that LMs converge toward the geometry of visual models as they grow larger.
We also investigate the effects of incorporating text signals during vision pretraining by comparing pure vision models against selected CLIP vision encoders.
If you find our code, data or ideas useful in your research, please consider citing the paper:
@misc{li2024visionlanguagemodelsshare,
title={Do Vision and Language Models Share Concepts? A Vector Space Alignment Study},
author={Jiaang Li and Yova Kementchedjhieva and Constanza Fierro and Anders Søgaard},
year={2024},
eprint={2302.06555},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2302.06555},
}
Our codebase heavily relies on these excellent repositories: