genbioel's Introduction

BioGenEL

This repository is for our NAACL 2022 work:

Generative Biomedical Entity Linking via Knowledge Base-Guided Pre-training and Synonyms-Aware Fine-tuning

Citation

If interested, please cite:

@inproceedings{yuan-etal-2022-generative,
    title = "Generative Biomedical Entity Linking via Knowledge Base-Guided Pre-training and Synonyms-Aware Fine-tuning",
    author = "Yuan, Hongyi  and
      Yuan, Zheng  and
      Yu, Sheng",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.296",
    doi = "10.18653/v1/2022.naacl-main.296",
    pages = "4038--4048",
}

Some of our code is modified from Facebook's GENRE work.

genbioel's People

Contributors

ganjinzero, yuanhy1997


genbioel's Issues

Missing files for tfidf_vectorizer

Hey,

I found that the path to the tfidf_vectorizer file is not specified in kbguided_pretrain when creating raw_ptdata, and the file itself does not seem to be provided on GitHub. I'm curious what this file is.

kbguided_pretrain/datagen/generate_raw_ptdata.py

import random

import joblib
import numpy as np

# cal_similarity_tfidf is defined elsewhere in the repository.

# Path to the serialized TF-IDF vectorizer; it is left empty in the repository,
# and the file itself is not provided -- this is the missing piece.
tfidf_vectorizer = ''
vectorizer = joblib.load(tfidf_vectorizer)

def generate_pair(y, mentions, select_scheme):
    # Choose one mention (synonym) for the target concept y.
    if select_scheme == 'random':
        return random.choice(mentions)
    elif select_scheme == 'sample':
        # Sample a mention with probability proportional to its TF-IDF similarity to y.
        similarity_estimate = cal_similarity_tfidf(mentions, y, vectorizer)
        print(similarity_estimate.shape)  # debug output
        return np.random.choice(mentions, 1, p=similarity_estimate / np.sum(similarity_estimate))[0]
    elif select_scheme == 'most_sim':
        similarity_estimate = cal_similarity_tfidf(mentions, y, vectorizer)
        return mentions[similarity_estimate.argmax()]
    elif select_scheme == 'least_sim':
        similarity_estimate = cal_similarity_tfidf(mentions, y, vectorizer)
        return mentions[similarity_estimate.argmin()]
    else:
        print('Wrong mention selection scheme input!!!')
The same file is also missing in data_utils/ncbi/prepare_dataset.py.
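
For context, the missing artifact appears to be a fitted TF-IDF vectorizer serialized with joblib. Below is a minimal sketch of how such a file could be produced; the corpus, vectorizer settings, and output name are assumptions, not the authors' actual preprocessing.

# Hedged sketch: fit a scikit-learn TfidfVectorizer on KB names/synonyms
# and serialize it with joblib so generate_raw_ptdata.py can load it.
# The corpus and vectorizer settings below are placeholders.
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer

kb_names = ['myocardial infarction', 'heart attack', 'type 2 diabetes mellitus']  # placeholder KB synonyms
vectorizer = TfidfVectorizer(analyzer='char_wb', ngram_range=(2, 4))  # settings are a guess
vectorizer.fit(kb_names)
joblib.dump(vectorizer, 'tfidf_vectorizer.joblib')  # point tfidf_vectorizer at this path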

Looking forward to your reply,
Best,

padding issue in pre-training stage

Hey,

We are trying to reproduce the results, and an error occurs that seems related to padding. We checked the scripts but could not figure it out.
Below is the error message:

RuntimeError: stack expects each tensor to be equal size, but got [1041] at entry 0 and [1024] at entry 1
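
This error typically means variable-length token sequences were stacked without padding. A hedged sketch of padding a batch to a common length before stacking, assuming PyTorch LongTensors of token ids and BART's pad id of 1 (verify against your tokenizer):

import torch
from torch.nn.utils.rnn import pad_sequence

def collate(batch, pad_token_id=1):
    # batch: list of 1-D LongTensors with different lengths, e.g. [1041] and [1024]
    # pad every sequence to the longest one so the tensors can be stacked
    return pad_sequence(batch, batch_first=True, padding_value=pad_token_id)

padded = collate([torch.ones(1041, dtype=torch.long), torch.ones(1024, dtype=torch.long)])
print(padded.shape)  # torch.Size([2, 1041])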

Looking for a file

Hello, why can't I find the file /benchmark/bc5cdr/trie.pkl in the benchmark folder?
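
A hedged guess at what trie.pkl contains: a prefix trie over tokenized KB entity names, which the next issue suggests is produced by create_trie_and_target_kb.py from the preprocessed data. A minimal sketch of such a file, using a plain nested-dict trie rather than the repository's exact Trie class:

import pickle

def build_trie(token_id_sequences):
    # Nested-dict prefix trie over token-id sequences; None marks the end of a name.
    trie = {}
    for seq in token_id_sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
        node[None] = True
    return trie

# Placeholder token-id sequences for KB entity names.
candidates = [[0, 713, 16, 2], [0, 713, 34, 2]]
with open('trie.pkl', 'wb') as f:
    pickle.dump(build_trie(candidates), f)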

missing file for fine-tuning

Hey,

I have finished pre-training and now test the model on fine-tuning tasks.
I followed the instructions, put the preprocessed data in the ./src folder, and ran the create_trie_and_target_kb.py script.
When I execute the train_ask.sh file, it complains about a missing file. The error message is pasted below.

Setting no soft prompts!
Traceback (most recent call last):
  File "./train.py", line 508, in <module>
    train(config)
  File "./train.py", line 95, in train
    evaluate = config.evaluation,
  File "/export/home/yan/el/GenBioEL/src/datagen/datageneration_finetune.py", line 119, in prepare_trainer_dataset
    encode_data_to_json(os.path.join(text_path, 'train'), tokenizer)
  File "/export/home/yan/el/GenBioEL/src/datagen/datageneration_finetune.py", line 45, in encode_data_to_json
    with open(fi[1]+'.token.json', 'w') as f:
FileNotFoundError: [Errno 2] No such file or directory: '../benchmark/aap/fold0/train.target.token.json'
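
A quick check that may help localize the problem, under the assumption that datageneration_finetune.py writes the *.token.json files next to untokenized train source/target files in the benchmark fold (the directory itself must exist, or open(..., 'w') raises FileNotFoundError); the file names here are assumptions:

import os

fold_dir = '../benchmark/aap/fold0'  # adjust to your dataset/fold
print(fold_dir, 'exists' if os.path.isdir(fold_dir) else 'MISSING')
for name in ('train.source', 'train.target'):
    path = os.path.join(fold_dir, name)
    print(path, 'found' if os.path.exists(path) else 'MISSING')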

Looking forward to your reply,
Best,
Xixi

missing instructions for the pre-training step

Hey,

I was trying to reproduce the KB-guided pre-training step, but there are no instructions.
Could you improve the documentation?

Also, it seems quite a lot of files are missing:

python generate_raw_ptdata.py
FileNotFoundError: [Errno 2] No such file or directory: ''

Could you offer a download link or describe what those files are?
For instance, what should path = './raw_data' point to in the same Python file?

Best,
