
Comments (3)

guody5 commented on June 17, 2024

When running the script run.py in code2nl and using the model downloaded from the drive link, I am getting this error:

Missing key(s) in state_dict: "bias", "encoder.embeddings.word_embeddings.weight", "encoder.embeddings.position_embeddings.weight", "encoder.embeddings.token_type_embeddings.weight", "encoder.embeddings.LayerNorm.weight", "encoder.embeddings.LayerNorm.bias", "encoder.encoder.layer.0.attention.self.query.weight", "encoder.encoder.layer.0.attention.self.query.bias", "encoder.encoder.layer.0.attention.self.key.weight", "encoder.encoder.layer.0.attention.self.key.bias", "encoder.encoder.layer.0.attention.self.value.weight", "encoder.encoder.layer.0.attention.self.value.bias", "encoder.encoder.layer.0.attention.output.dense.weight", "encoder.encoder.layer.0.attention.output.dense.bias", "encoder.encoder.layer.0.attention.output.LayerNorm.weight", "encoder.encoder.layer.0.attention.output.LayerNorm.bias", "encoder.encoder.layer.0.intermediate.dense.weight", "encoder.encoder.layer.0.intermediate.dense.bias", "encoder.encoder.layer.0.output.dense.weight", "encoder.encoder.layer.0.output.dense.bias", "encoder.encoder.layer.0.output.LayerNorm.weight", "encoder.encoder.layer.0.output.LayerNorm.bias", "encoder.encoder.layer.1.attention.self.query.weight", "encoder.encoder.layer.1.attention.self.query.bias", "encoder.encoder.layer.1.attention.self.key.weight", "encoder.encoder.layer.1.attention.self.key.bias", "encoder.encoder.layer.1.attention.self.value.weight", "encoder.encoder.layer.1.attention.self.value.bias", "encoder.encoder.layer.1.attention.output.dense.weight", "encoder.encoder.layer.1.attention.output.dense.bias", "encoder.encoder.layer.1.attention.output.LayerNorm.weight", "encoder.encoder.layer.1.attention.output.LayerNorm.bias", "encoder.encoder.layer.1.intermediate.dense.weight", "encoder.encoder.layer.1.intermediate.dense.bias", "encoder.encoder.layer.1.output.dense.weight", "encoder.encoder.layer.1.output.dense.bias", "encoder.encoder.layer.1.output.LayerNorm.weight", "encoder.encoder.layer.1.output.LayerNorm.bias", "encoder.encoder.layer.2.attention.self.query.weight", "encoder.encoder.layer.2.attention.self.query.bias", "encoder.encoder.layer.2.attention.self.key.weight", "encoder.encoder.layer.2.attention.self.key.bias", "encoder.encoder.layer.2.attention.self.value.weight", "encoder.encoder.layer.2.attention.self.value.bias", "encoder.encoder.layer.2.attention.output.dense.weight", "encoder.encoder.layer.2.attention.output.dense.bias", "encoder.encoder.layer.2.attention.output.LayerNorm.weight", "encoder.encoder.layer.2.attention.output.LayerNorm.bias", "encoder.encoder.layer.2.intermediate.dense.weight", "encoder.encoder.layer.2.intermediate.dense.bias", "encoder.encoder.layer.2.output.dense.weight", "encoder.encoder.layer.2.output.dense.bias", "encoder.encoder.layer.2.output.LayerNorm.weight", "encoder.encoder.layer.2.output.LayerNorm.bias", "encoder.encoder.layer.3.attention.self.query.weight", "encoder.encoder.layer.3.attention.self.query.bias", "encoder.encoder.layer.3.attention.self.key.weight", "encoder.encoder.layer.3.attention.self.key.bias", "encoder.encoder.layer.3.attention.self.value.weight", "encoder.encoder.layer.3.attention.self.value.bias", "encoder.encoder.layer.3.attention.output.dense.weight", "encoder.encoder.layer.3.attention.output.dense.bias", "encoder.encoder.layer.3.attention.output.LayerNorm.weight", "encoder.encoder.layer.3.attention.output.LayerNorm.bias", "encoder.encoder.layer.3.intermediate.dense.weight", "encoder.encoder.layer.3.intermediate.dense.bias", "encoder.encoder.layer.3.output.dense.weight", "encoder.encoder.layer.3.output.dense.bias", 
"encoder.encoder.layer.3.output.LayerNorm.weight", "encoder.encoder.layer.3.output.LayerNorm.bias", "encoder.encoder.layer.4.attention.self.query.weight", "encoder.encoder.layer.4.attention.self.query.bias", "encoder.encoder.layer.4.attention.self.key.weight", "encoder.encoder.layer.4.attention.self.key.bias", "encoder.encoder.layer.4.attention.self.value.weight", "encoder.encoder.layer.4.attention.self.value.bias", "encoder.encoder.layer.4.attention.output.dense.weight", "encoder.encoder.layer.4.attention.output.dense.bias", "encoder.encoder.layer.4.attention.output.LayerNorm.weight", "encoder.encoder.layer.4.attention.output.LayerNorm.bias", "encoder.encoder.layer.4.intermediate.dense.weight", "encoder.encoder.layer.4.intermediate.dense.bias", "encoder.encoder.layer.4.output.dense.weight",

Besides, I suggest you try our new repo for this task, which is more efficient and needs fewer GPU resources:
https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text


guody5 commented on June 17, 2024

When running the script run.py in code2nl and using the model downloaded from the drive link, I am getting this error: Missing key(s) in state_dict: … (full error message quoted above)

Hi, you don't need to download the model from the drive link. Just run the following command, and the pre-trained model will be downloaded from the Hugging Face hub:

lang=php #programming language
lr=5e-5
batch_size=64
beam_size=10
source_length=256
target_length=128
data_dir=../data/code2nl/CodeSearchNet
output_dir=model/$lang
train_file=$data_dir/$lang/train.jsonl
dev_file=$data_dir/$lang/valid.jsonl
eval_steps=1000 #400 for ruby, 600 for javascript, 1000 for others
train_steps=50000 #20000 for ruby, 30000 for javascript, 50000 for others
pretrained_model=microsoft/codebert-base #Roberta: roberta-base

python run.py --do_train --do_eval --model_type roberta \
  --model_name_or_path $pretrained_model \
  --train_filename $train_file --dev_filename $dev_file \
  --output_dir $output_dir \
  --max_source_length $source_length --max_target_length $target_length \
  --beam_size $beam_size \
  --train_batch_size $batch_size --eval_batch_size $batch_size \
  --learning_rate $lr --train_steps $train_steps --eval_steps $eval_steps


Killing-bot commented on June 17, 2024

Thanks a lot! I will try the repo you mentioned.

