Comments (10)

VRSEN commented on May 16, 2024

Hey guys, I just made a video on how to do this in Google Colab: https://youtu.be/3de0Utr9XnI
Hope it helps!

GeorvityLabs commented on May 16, 2024

@baptistejamin is it possible to make a Jupyter notebook? That way we could use it to fine-tune MPT-7B using cloud GPUs (paid, of course), and in notebook format it will be easy for others to pick up as well.

baptistejamin commented on May 16, 2024

A solution was found here: #94 (comment)

alextrott16 commented on May 16, 2024

To set your starting model to our MPT-7B-Instruct model on the Hugging Face hub, you'd use this model config in your YAML:

  # Model
  model:
    name: hf_causal_lm
    pretrained_model_name_or_path: mosaicml/mpt-7b-instruct
    init_device: cpu
    pretrained: true
    # Comment out the attn_config block to use default "torch" attention
    config_overrides:
        attn_config:
            attn_impl: triton

Note that pretrained_model_name_or_path determines what value is passed to Hugging Face's AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=...) when building the model. from_pretrained supports local paths.
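
For illustration, here's a minimal Python sketch of what that field resolves to (illustrative only, not llm-foundry's actual build code). Note that MPT ships custom modeling code on the Hub, so trust_remote_code=True is required, and MPT-7B uses the EleutherAI/gpt-neox-20b tokenizer:

  # Minimal sketch of what the hf_causal_lm config amounts to; the real
  # llm-foundry builder does more (device placement, config_overrides, etc.).
  from transformers import AutoModelForCausalLM, AutoTokenizer

  # A Hub ID or a local checkpoint directory both work here.
  name_or_path = "mosaicml/mpt-7b-instruct"  # or "/path/to/local/checkpoint"

  model = AutoModelForCausalLM.from_pretrained(
      name_or_path,
      trust_remote_code=True,  # MPT uses custom modeling code on the Hub
  )
  tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")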

For freezing all the weights except the last layer, you can add some custom logic to scripts/train/train.py. The author of this video does that freezing with our model inside a notebook, which might be a useful reference.
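
As a rough sketch of that custom logic (hedged: the attribute path model.transformer.blocks assumes MPT's module layout and may differ depending on how the model is wrapped):

  # Hedged sketch: freeze everything, then unfreeze the last transformer
  # block. Assumes MPT's layout (MPTForCausalLM -> .transformer.blocks);
  # adjust the path for a wrapped or composed model.
  for param in model.parameters():
      param.requires_grad = False

  for param in model.transformer.blocks[-1].parameters():
      param.requires_grad = True

  n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
  print(f"Trainable parameters: {n_trainable:,}")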

dydx-git commented on May 16, 2024

@alextrott16 I managed to run train.py successfully. Two questions, however:

  1. I froze all the weights except the last layer, and fine-tuning on a dataset of just 6 samples took maybe 10 seconds. Is that normal on an A100-40GB?
  2. Where is the updated model saved? I see the YAML has a property called save_folder; having specified that, I get a file called "latest-rank0.pt" of about 23.61 GB. Is this the updated model? How do I use it? Sorry if I'm not asking the right questions.

Thank you!

SoumitriKolavennu commented on May 16, 2024

To set your starting model to our MPT-7B-Instruct model on the Hugging Face hub, you'd use this model config in your YAML:

  # Model
  model:
    name: hf_causal_lm
    pretrained_model_name_or_path: mosaicml/mpt-7b-instruct
    device: cpu
    pretrained: true
    # Comment out the attn_config block to use default "torch" attention
    attn_config:
      attn_impl: triton

Note that pretrained_model_name_or_path determines what value is passed to Hugging Face's AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=...) when building the model. from_pretrained supports local paths.

Hi, we fine-tuned an instruction model without freezing any weights. The resulting checkpoint is 77 GB! How do we convert it back to the HF format and use it?

Also, when fine-tuning, the training loss profile is much better with name: mpt_causal_lm than with hf_causal_lm. What is the difference?

Thank you.

abhi-mosaic commented on May 16, 2024

Hi, we fine-tuned an instruction model without freezing any weights. The resulting checkpoint is 77 GB! How do we convert it back to the HF format and use it?

Hi @SoumitriKolavennu, the 77 GB is expected given that it is a 7B model and the Composer checkpoint holds both model and optimizer state. To convert to an HF checkpoint folder, you can use the instructions in the scripts/inference folder: https://github.com/mosaicml/llm-foundry/tree/main/scripts/inference#converting-a-composer-checkpoint-to-an-hf-checkpoint-folder
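
Once converted, the output folder loads like any Hugging Face checkpoint. A minimal sketch, assuming a hypothetical output path ./mpt-7b-instruct-hf from the conversion step:

  # Load the converted HF checkpoint folder and run a quick generation.
  # "./mpt-7b-instruct-hf" is a hypothetical output path from the converter.
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model = AutoModelForCausalLM.from_pretrained(
      "./mpt-7b-instruct-hf", trust_remote_code=True
  )
  tokenizer = AutoTokenizer.from_pretrained("./mpt-7b-instruct-hf")

  inputs = tokenizer("Summarize: ...", return_tensors="pt")
  outputs = model.generate(**inputs, max_new_tokens=64)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))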

Also, when fine-tuning, the training loss profile is much better with name: mpt_causal_lm than with hf_causal_lm. What is the difference?

Could you clarify exactly what differences you see between the training loss profiles (screenshots would help)? In one case (mpt_causal_lm), you are probably initializing a from-scratch MPT and fine-tuning it. In the other case (hf_causal_lm pointed at the HF model mosaicml/mpt-7b-instruct), you are starting from the pretrained weights of our MPT-7B-Instruct model on the HF Hub. I would expect the latter to have a much lower initial loss and result in a higher-quality model.

abhi-mosaic commented on May 16, 2024

Hi @SoumitriKolavennu , I'm closing this issue for now but feel free to reopen if you need further assistance.

SoumitriKolavennu commented on May 16, 2024

Hi @SoumitriKolavennu , I'm closing this issue for now but feel free to reopen if you need further assistance.

Hi @abhi-mosaic, thank you for the help in converting to Hugging Face. Following your instructions worked great. One minor suggestion would be to include the converter in the training folder instead of the inference folder. My other question about name is still relevant, but perhaps it deserves a thread of its own. Please close this issue, and thank you for the help.

tirthajyoti commented on May 16, 2024

Hey guys, I just made a video on how to do this in Google Colab: https://youtu.be/3de0Utr9XnI Hope it helps!

@VRSEN can you make a Colab or video generalizing the input data preprocessing a bit more? For example, if we want to fine-tune for a news summarization task, how would I preprocess a dataset (e.g., the HF dataset multi_news) that has only two columns, "document" and "summary"?
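
For context, here's roughly what I have in mind, assuming llm-foundry's finetuning dataloader consumes prompt/response pairs (a hedged sketch; the prompt template text is made up for illustration):

  # Hedged sketch: reshape multi_news into prompt/response pairs.
  # The "prompt"/"response" keys assume llm-foundry's finetuning format;
  # the template wording itself is arbitrary.
  from datasets import load_dataset

  ds = load_dataset("multi_news", split="train")

  def to_prompt_response(example):
      return {
          "prompt": "Summarize the following news articles:\n\n"
                    + example["document"] + "\n\nSummary:",
          "response": example["summary"],
      }

  ds = ds.map(to_prompt_response, remove_columns=["document", "summary"])
  print(ds[0]["prompt"][:200])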
