
kge-dura's Introduction

DURA: Duality-Induced Regularizer for Tensor Factorization Based Knowledge Graph Completion

This is the code for the paper Duality-Induced Regularizer for Tensor Factorization Based Knowledge Graph Completion. Zhanqiu Zhang, Jianyu Cai, Jie Wang. NeurIPS 2020. [arXiv] [NeurIPS-Official]

Dependencies

  • Python 3.6+
  • PyTorch 1.0~1.7
  • NumPy 1.17.2+
  • tqdm 4.41.1+
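
For reference, a minimal environment setup with pip might look like the line below (a sketch only: the version ranges follow the list above, and a CUDA-enabled PyTorch build matching your driver may need to be selected separately).

# assumes pip and a Python 3.6+ interpreter are already available
pip install "torch>=1.0,<1.8" "numpy>=1.17.2" "tqdm>=4.41.1"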

Results

The results of DURA on WN18RR, FB15k-237 and YAGO3-10 are reported in the paper.

Reproduce the Results

1. Preprocess the Datasets

To preprocess the datasets, run the following commands.

cd code
python process_datasets.py

Now, the processed datasets are in the data directory.
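
A quick way to confirm that preprocessing succeeded is to list the processed dataset folders (a sketch only: the folder names below are assumed from the --dataset values used in the training commands, and the exact files inside are whatever process_datasets.py writes).

# run from the repository root
ls data/WN18RR data/FB237 data/YAGO3-10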

2. Reproduce the Results

To reproduce the results of CP, ComplEx and RESCAL with the DURA regularizer on WN18RR, FB15k-237 and YAGO3-10, please run the following commands (FB15k-237 appears as FB237 in the --dataset flag).

#################################### WN18RR ####################################
# CP
CUDA_VISIBLE_DEVICES=0 python learn.py --dataset WN18RR --model CP --rank 2000 --optimizer Adagrad \
--learning_rate 1e-1 --batch_size 100 --regularizer DURA --reg 1e-1 --max_epochs 200 \
--valid 5 -train -id 0 -save -weight

# ComplEx
CUDA_VISIBLE_DEVICES=0 python learn.py --dataset WN18RR --model ComplEx --rank 2000 --optimizer Adagrad \
--learning_rate 1e-1 --batch_size 100 --regularizer DURA_W --reg 1e-1 --max_epochs 50 \
--valid 5 -train -id 0 -save -weight

# RESCAL
CUDA_VISIBLE_DEVICES=2 python learn.py --dataset WN18RR --model RESCAL --rank 256 --optimizer Adagrad \
--learning_rate 1e-1 --batch_size 1024 --regularizer DURA_RESCAL --reg 1e-1 --max_epochs 200 \
--valid 5 -train -id 0 -save -weight

#################################### FB237 ####################################
# CP
CUDA_VISIBLE_DEVICES=0 python learn.py --dataset FB237 --model CP --rank 2000 --optimizer Adagrad \
--learning_rate 1e-1 --batch_size 100 --regularizer DURA_W --reg 5e-2 --max_epochs 200 \
--valid 5 -train -id 0 -save

# ComplEx
CUDA_VISIBLE_DEVICES=0 python learn.py --dataset FB237 --model ComplEx --rank 2000 --optimizer Adagrad \
--learning_rate 1e-1 --batch_size 100 --regularizer DURA_W --reg 5e-2 --max_epochs 200 \
--valid 5 -train -id 0 -save

# RESCAL
CUDA_VISIBLE_DEVICES=0 python learn.py --dataset FB237 --model RESCAL --rank 512 --optimizer Adagrad \
--learning_rate 1e-1 --batch_size 512 --regularizer DURA_RESCAL --reg 5e-2 --max_epochs 200 \
--valid 5 -train -id 0 -save


#################################### YAGO3-10 ####################################
# CP
CUDA_VISIBLE_DEVICES=0 python learn.py --dataset YAGO3-10 --model CP --rank 1000 --optimizer Adagrad \
--learning_rate 1e-1 --batch_size 1000 --regularizer DURA_W --reg 5e-3 --max_epochs 200 \
--valid 5 -train -id 0 -save -weight

# ComplEx
CUDA_VISIBLE_DEVICES=0 python learn.py --dataset YAGO3-10 --model ComplEx --rank 1000 --optimizer Adagrad \
--learning_rate 1e-1 --batch_size 1000 --regularizer DURA_W --reg 5e-3 --max_epochs 200 \
--valid 5 -train -id 0 -save

# RESCAL
CUDA_VISIBLE_DEVICES=0 python learn.py --dataset YAGO3-10 --model RESCAL --rank 512 --optimizer Adagrad \
--learning_rate 1e-1 --batch_size 1024 --regularizer DURA_RESCAL_W --reg 5e-3 --max_epochs 200 \
--valid 5 -train -id 0 -save -weight

Citation

If you find this code useful, please consider citing the following paper.

@inproceedings{NEURIPS2020_f6185f0e,
 author = {Zhang, Zhanqiu and Cai, Jianyu and Wang, Jie},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
 pages = {21604--21615},
 publisher = {Curran Associates, Inc.},
 title = {Duality-Induced Regularizer for Tensor Factorization Based Knowledge Graph Completion},
 url = {https://proceedings.neurips.cc/paper/2020/file/f6185f0ef02dcaec414a3171cd01c697-Paper.pdf},
 volume = {33},
 year = {2020}
}

Acknowledgement

Our implementation builds on the code of kbc. Thanks for their contributions.

Other Repositories

If you are interested in our work, you may find the following paper useful.

Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction. Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, Jie Wang. AAAI 2020. [paper] [code]

kge-dura's People

Contributors

  • jianyucai
  • zhanqiuzhang


kge-dura's Issues

A question about the regularizer.

As you mentioned before, h \overline{R} = (h_0 r_0 - h_1 r_1) + (h_0 r_1 + h_1 r_0) i, so it can be seen as a complex vector whose real part is (h_0 r_0 - h_1 r_1) and whose imaginary part is (h_0 r_1 + h_1 r_0), and Equation (6) is the regularizer.

But why does your code compute ||h \overline{R}||_2^2 = h^2 * r^2 = (lhs[0]^2 + lhs[1]^2) * (rel[0]^2 + rel[1]^2) = (h_0^2 + h_1^2) * (r_0^2 + r_1^2)?

Since h \overline{R} is a new complex vector whose real part is (h_0 r_0 - h_1 r_1) and whose imaginary part is (h_0 r_1 + h_1 r_0), are you sure that the square of its L2 norm is ||h \overline{R}||_2^2 = (h_0^2 + h_1^2) * (r_0^2 + r_1^2) = ||h||_2^2 * ||r||_2^2?

Did I misunderstand anything about the L2 norm of a complex vector?
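
For a single component of the embeddings, expanding the squared modulus with the real and imaginary parts written above is a direct check (this expansion is not taken from the paper or the repository; it is just the usual algebra for complex numbers):

|h \overline{R}|^2 per component
  = (h_0 r_0 - h_1 r_1)^2 + (h_0 r_1 + h_1 r_0)^2
  = h_0^2 r_0^2 + h_1^2 r_1^2 + h_0^2 r_1^2 + h_1^2 r_0^2        (the cross terms cancel)
  = (h_0^2 + h_1^2) * (r_0^2 + r_1^2)

Summing this per-component quantity over the embedding dimension gives the (lhs[0]^2 + lhs[1]^2) * (rel[0]^2 + rel[1]^2) expression described above.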

Some questions about Equation 2

Hi, thanks for sharing the code. I have some questions about Equation 2 in this paper.

  1. In the original ComplEx paper (ICML 2016), the scoring function is Re(<h, r, \overline{t}>), while in Equation 2 of your paper the scoring function becomes Re(\overline{h}rt).
  2. In Equation 2, why does Re(\overline{h}rt) = Re(<h\overline{r}, t>)? I think Re(<h\overline{r}, t>) is not equal to ComplEx's scoring function Re(<h, r, \overline{t}>).

Looking forward to your response.

The model fails to converge

Hello, when I train with the current code, the model fails to converge and its performance stays at 0 throughout.
What could be the cause of this?

Environment:
torch 1.10
python 3.7
RTX 3090
cuda 11.4
