This paper also notes the difficulty of training vision transformers at greater depths and proposes two solutions. First, LayerScale: a learned per-channel multiplication of the output of each residual block. Second, class-attention layers: the patch tokens attend only to one another throughout the network, while the CLS token is allowed to attend to the patches only in the last few layers.
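To make LayerScale concrete, here is a minimal Flax sketch of the idea, not this repository's implementation; the module name and the `init_eps` default are assumptions (the paper uses depth-dependent initial values).

import flax.linen as nn

class LayerScale(nn.Module):
    # Sketch: multiply each channel of a residual branch's output by a
    # learned per-channel weight, initialized near zero so a deep network
    # starts out close to the identity mapping.
    dim: int
    init_eps: float = 1e-5  # assumed default; CaiT varies this with depth

    @nn.compact
    def __call__(self, x):
        scale = self.param('scale', nn.initializers.constant(self.init_eps), (self.dim,))
        return x * scale

# used on the residual branch, e.g.: x = x + LayerScale(dim)(block(x))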
They also add talking-heads attention, noting improvements.
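Talking-heads attention mixes information across the head dimension with learned linear projections applied to the attention maps both before and after the softmax. A minimal JAX/Flax sketch under assumed names and initializers, not this repository's code:

import jax
import jax.numpy as jnp
import flax.linen as nn

class TalkingHeads(nn.Module):
    # Sketch: project attention scores across heads (heads -> heads)
    # before and after the softmax so heads can exchange information.
    heads: int

    @nn.compact
    def __call__(self, logits):  # logits: (batch, heads, q_len, k_len)
        pre = self.param('pre_softmax', nn.initializers.lecun_normal(), (self.heads, self.heads))
        post = self.param('post_softmax', nn.initializers.lecun_normal(), (self.heads, self.heads))
        logits = jnp.einsum('bhij,hg->bgij', logits, pre)
        attn = jax.nn.softmax(logits, axis=-1)
        return jnp.einsum('bhij,hg->bgij', attn, post)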
I have been greatly inspired by the work of Dr. Phil 'Lucid' Wang. Please check out his open-source implementations of numerous transformer architectures and support his work.
import jax
import numpy as np

from vit_flax import CaiT  # assumed import path for the CaiT class in this repository

key = jax.random.PRNGKey(0)
img = jax.random.normal(key, (1, 256, 256, 3))
v = CaiT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 12,           # depth of transformer for patch-to-patch attention only
    cls_depth = 2,        # depth of cross-attention of CLS tokens to patches
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1,
    layer_dropout = 0.05  # randomly drop 5% of the layers
)
init_rngs = {
    'params': jax.random.PRNGKey(1),
    'dropout': jax.random.PRNGKey(2),
    'emb_dropout': jax.random.PRNGKey(3)
}
params = v.init(init_rngs, img)
output = v.apply(params, img, rngs=init_rngs)
print(output.shape)  # (1, 1000)
n_params_flax = sum(
    jax.tree_util.tree_leaves(
        jax.tree_util.tree_map(lambda x: np.prod(x.shape), params)
    )
)
print(f"Number of parameters in Flax model: {n_params_flax}")
@misc{touvron2021going,
    doi       = {10.48550/arXiv.2103.17239},
    url       = {https://arxiv.org/abs/2103.17239},
    author    = {Touvron, Hugo and Cord, Matthieu and Sablayrolles, Alexandre and Synnaeve, Gabriel and Jégou, Hervé},
    keywords  = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
    title     = {Going deeper with Image Transformers},
    publisher = {arXiv},
    year      = {2021},
    copyright = {arXiv.org perpetual, non-exclusive license}
}