Wenrui Li, Zhengyu Ma, Liang-Jian Deng and Xiaopeng Fan.
The code is based on AVCA and was tested on Ubuntu 20.04 with PyTorch 1.13.
We used SpikingJelly version 0.0.0.0.12; installing a different version may lead to performance differences.
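A minimal environment setup, assuming a pip-based install (adjust the PyTorch build to your CUDA version):
pip install torch==1.13.0
pip install spikingjelly==0.0.0.0.12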
The features and the dataset structure can be downloaded and arranged in the same way as in AVCA.
You can download our trained AVMST models and baselines, which are provided in pretrain_model.zip. Put the contents of pretrain_model.zip into the runs/ folder.
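For example, assuming the archive was downloaded to the repository root (if the zip unpacks into its own top-level folder, move that folder's contents into runs/ instead):
unzip pretrain_model.zip -d runs/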
Here is an example of evaluating AVMST on VGGSound-GZSL using SeLaVi features:
python get_evaluation.py --load_path_stage_A runs/attention_ucf_vggsound_main --load_path_stage_B runs/attention_vggsound_all_main --dataset_name VGGSound --AVMST
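For reference, generalized zero-shot results are commonly summarized by the harmonic mean (HM) of the mean class accuracy on seen (S) and unseen (U) test classes. A minimal sketch of that formula (illustrative only, not part of this repository):
def harmonic_mean(seen_acc, unseen_acc):
    # HM = 2 * S * U / (S + U); returns 0 when both accuracies are 0
    if seen_acc + unseen_acc == 0:
        return 0.0
    return 2 * seen_acc * unseen_acc / (seen_acc + unseen_acc)

# Hypothetical numbers, for illustration only: S = 0.40, U = 0.10 -> HM = 0.16
print(harmonic_mean(0.40, 0.10))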
We appreciate the code provided by AVCA, which was very helpful for our research.
If you find this work useful, please consider citing:
@inproceedings{Li2023avmst,
author = {Li, Wenrui and Ma, Zhengyu and Deng, Liang-Jian and Fan, Xiaopeng},
title = {Modality-Fusion Spiking Transformer Network for Audio-Visual Zero-Shot Learning},
booktitle = {IEEE International Conference on Multimedia and Expo (ICME)},
year = {2023}
}
@inproceedings{mercea2022avca,
author = {Mercea, Otniel-Bogdan and Riesch, Lukas and Koepke, A. Sophia and Akata, Zeynep},
title = {Audio-visual Generalised Zero-shot Learning with Cross-modal Attention and Language},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022}
}