
HittER's Introduction

HittER

Hierarchical Transformers for Knowledge Graph Embeddings

HittER generates embeddings for knowledge graphs and performs link prediction using a hierarchical Transformer model. It will appear in EMNLP 2021 (arXiv version).

Installation

The repo requires Python >= 3.7; a fresh Anaconda environment is recommended.

conda create -n hitter python=3.7 -y # optional
conda activate hitter # optional
git clone [email protected]:microsoft/HittER.git
cd HittER
pip install -e .

Data

First download the standard benchmark datasets using the commands below. Thanks to LibKGE for providing the preprocessing scripts and hosting the data.

cd data
sh download_standard.sh

Training

Configurations for the experiments are in the config/ folder.

python -m kge start config/trmeh-fb15k237-best.yaml

The training process uses DataParallel on all visible GPUs by default, which can be overridden by appending --job.device cpu to the command above.
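For example, to train on CPU with the override mentioned above (the flag comes from the sentence above; LibKGE accepts such config overrides on the command line):

```shell
# Train on CPU instead of all visible GPUs
python -m kge start config/trmeh-fb15k237-best.yaml --job.device cpu
```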

Evaluation

You can evaluate the trained models on the dev/test sets using the following commands.

python -m kge eval <saved_dir>
python -m kge test <saved_dir>

Pretrained models are also released for reproducibility.

HittER-BERT QA experiments

QA experiment-related data can be downloaded from the release.

git submodule update --init
cd transformers
pip install -e .

Run experiments

> python hitter-bert.py --help

usage: hitter-bert.py [-h] [--dataset {fbqa,webqsp}] [--filtered] [--hitter]
                      [--seed SEED]
                      [exp_name]

positional arguments:
  exp_name              Name of the experiment

optional arguments:
  -h, --help            show this help message and exit
  --dataset {fbqa,webqsp}
                        fbqa or webqsp
  --filtered            Filtered or not
  --hitter              Use pretrained HittER or not
  --seed SEED           Seed number
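Putting the options together, a typical invocation might look like this (the experiment name fbqa-run1 and the seed value are arbitrary illustrative choices, not values from the repo):

```shell
# Fine-tune with the pretrained HittER on FreebaseQA, filtered setting
python hitter-bert.py --dataset fbqa --filtered --hitter --seed 42 fbqa-run1
```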

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.

Citation

@inproceedings{chen-etal-2021-hitter,
    title = "HittER: Hierarchical Transformers for Knowledge Graph Embeddings",
    author = "Chen, Sanxing and Liu, Xiaodong and Gao, Jianfeng and Jiao, Jian and Zhang, Ruofei and Ji, Yangfeng",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2021",
    publisher = "Association for Computational Linguistics"
}

HittER's People

Contributors: microsoft-github-operations[bot], microsoftopensource, sanxing-chen

HittER's Issues

some questions

Hello, I have a few questions and look forward to your answers:
Q1: The neighborhood you refer to consists of the edges that connect to the source vertex. Why is it set up this way?
Q2: Why does FB15k-237 not use the MLM loss?

Question about position embedder in model

Hey, really appreciate your nice work!
I notice in trme.py that, for the global CLS position embedding, the code uses self.type_embeds = nn.Embedding(100, self.dim) (line 33). However, the later use (line 155), pos = self.type_embeds(torch.arange(0, 3, device=device)), only accesses three positions. So why does type_embeds use nn.Embedding(100, self.dim) instead of nn.Embedding(3, self.dim)? Will this make a difference?
Looking forward to your reply! Thanks a lot!
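The question above boils down to whether unused rows of an oversized embedding table affect the forward pass. A minimal sketch (plain-Python stand-in for nn.Embedding; all names here are illustrative, not the repo's code) shows that if only indices 0-2 are ever looked up, a 100-row table and a 3-row table built from the same first three rows behave identically; the extra 97 rows are simply unused parameters:

```python
import random

def make_embedding(num_rows, dim, seed=0):
    """Toy embedding table: num_rows random vectors of size dim."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(dim)] for _ in range(num_rows)]

def lookup(table, indices):
    """Equivalent of embedding(indices): gather rows by index."""
    return [table[i] for i in indices]

# A 100-row table, and a 3-row table sharing its first three rows
big = make_embedding(100, 4)
small = big[:3]

# Only positions 0..2 are ever queried, as in the code in question
assert lookup(big, [0, 1, 2]) == lookup(small, [0, 1, 2])
```

The only practical difference is a slightly larger parameter count (the untouched rows still occupy memory and, in a real model, would simply never receive gradients).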

Why does hitter-bert not function properly?

out = self.bert(kg_attentions=kg_attentions,
                output_attentions=True,
                output_hidden_states=True,
                return_dict=True,
                labels=masked_labels,
                **sent_input,
                )
Is there a missing kg_attentions parameter?

Question about the parameter search

Hey, really appreciate your nice work!
I noticed there isn't a Python file for the parameter search in your work. Do you have any plans to upload it later?
Looking forward to your reply! Thanks a lot!

Pretrained model

Hi, thank you for your awesome work!
When will the pretrained model be available? The link currently doesn't work.

Unable to reproduce result

Thanks for releasing the code for your work. I am trying to reproduce the numbers for the no context version of the model. Using the config file trmeh-fb15k237-noctx.yaml, I am getting the following metrics

Dev set
MR = 167.69, MRR = 0.327, Hits@1 = 0.2308, Hits@10 = 0.520

Test set
MR = 170.79, MRR = 0.369, Hits@1 = 0.2749, Hits@10 = 0.5590

However, in Table 3, the reported numbers are MRR = 0.373, Hits@10 = 0.561 (dev set).

Kindly explain how to reproduce the results. Thanks!

About fair comparison

Hi, I have some questions about the result comparison between HittER and baselines.

  1. The reported MRRs of CoKE on FB15k-237 and WN18RR are 0.475 and 0.361 in the original paper, while in HittER's paper CoKE's results are 0.484 and 0.364, respectively. Did you rerun the CoKE code or adjust the predefined embedding dimension?
  2. The embedding dimension in CoKE's original implementation is 256, and other baselines like TuckER and SimplE set the dimension to 200. However, HittER's embedding dimension is 320. Is that a fair performance comparison? I cannot tell whether HittER's improvement comes from the more advanced model or from the larger embedding size. Did you evaluate HittER with the same embedding size or parameter count as the other baselines?

Hope you can solve this issue, thank you!
