Source code for the paper "TEA: Time-aware Entity Alignment in Knowledge Graphs", accepted at TheWebConf (WWW) 2023.
>> conda create --name exp python=3.9
# Install pytorch
>> conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
# Install BERT
>> conda install -c huggingface transformers
# Other tools
>> pip install -U scikit-learn
>> pip install torch-geometric
>> conda install pytorch-sparse -c pyg
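After installing, you can confirm that all dependencies resolved correctly with a small standard-library check (a minimal sketch; the list below holds the import names of the packages installed above):

```python
from importlib.util import find_spec

# Import names of the packages installed above.
REQUIRED = ["torch", "torchvision", "torchaudio", "transformers",
            "sklearn", "torch_geometric", "torch_sparse"]

def missing_packages(names):
    """Return the subset of import names that cannot be resolved."""
    return [n for n in names if find_spec(n) is None]

absent = missing_packages(REQUIRED)
print("Missing packages:", absent if absent else "none")
```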
Download the original DBP15K and DWY100K datasets from here.
Download the hard setting DBP15K datasets from here.
Run the following scripts to train and evaluate the TEA model using all information channels.
# Train
>> python train.py --gpu_id 0 --channel all --dataset DBP15k/ja_en --load_hard_split
# Evaluate
>> python evaluate.py --gpu_id 0 --dataset DBP15k/ja_en --load_hard_split
Channels: {Digital, Literal, Structure, Name, Time, All}
Datasets: {DBP15k/ja_en, DBP15k/fr_en, DBP15k/zh_en, DWY100k/wd_dbp, DWY100k/yg_dbp}
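Entity alignment quality is commonly reported as Hits@k and MRR over the ranks of the ground-truth counterparts; the sketch below shows how these metrics are computed (function names are illustrative and not taken from evaluate.py):

```python
def hits_at_k(ranks, k):
    """Fraction of queries whose true counterpart is ranked within the top k (ranks are 1-based)."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mrr(ranks):
    """Mean reciprocal rank over 1-based ranks."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Example: ranks of the true counterparts for five test entities.
ranks = [1, 3, 2, 1, 10]
print(f"Hits@1={hits_at_k(ranks, 1):.2f}  "
      f"Hits@10={hits_at_k(ranks, 10):.2f}  MRR={mrr(ranks):.3f}")
```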
Download the Word2Vec file for initializing the corresponding name/relation/attribute embeddings.
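For reference, text-format word-vector files typically hold one token followed by its float values per line; a minimal stdlib parser for that layout might look as follows (a hedged sketch, not the repository's actual loading code):

```python
def load_word_vectors(lines, dim=None):
    """Parse text-format word-vector lines into {token: [float, ...]}.

    `lines` is any iterable of strings, e.g. an open file handle.
    When `dim` is given, lines with a different dimensionality
    (such as a "vocab_size dim" header line) are skipped.
    """
    vectors = {}
    for line in lines:
        parts = line.rstrip("\n").split(" ")
        if len(parts) < 2:
            continue  # skip blank or malformed lines
        token, values = parts[0], parts[1:]
        if dim is not None and len(values) != dim:
            continue
        vectors[token] = [float(v) for v in values]
    return vectors
```

Usage with a real file: `load_word_vectors(open(path, encoding="utf-8"), dim=300)`.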
If you find our TEA model and the experimental results useful, please kindly cite the following paper:
@inproceedings{liu2023tea,
author = {Liu, Yu and Hua, Wen and Xin, Kexuan and Hosseini, Saeid and Zhou, Xiaofang},
title = {TEA: Time-Aware Entity Alignment in Knowledge Graphs},
booktitle = {Proceedings of the ACM Web Conference 2023},
series = {WWW'23},
pages = {2591--2599},
location = {Austin, Texas, USA},
year = {2023},
doi = {10.1145/3543507.3583317}
}
We used the code of these models: RDGCN, DGMC, KEGCN, RREA, Dual-AMN, PSR, AttrE, MultiKE, AttrGNN, TEA-GNN. Thanks to the authors for their great contributions!