Comments (5)
Is full-parameter training supported?
Do you mean fine-tuning all the parameters of an LLM? If so, there is a PR for that you could check out: #645
Not fine-tuning. I mean pre-training the model.
For example, I need to use the model in certain professional fields.
With a model like Llama, I want to add some tokens and retrain it.
Based on your comment it sounds like you want to fine-tune the model. Or are you saying you want an example that trains something like a Llama model from scratch? That is very computationally expensive.
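For the token-adding part specifically, the vocabulary is usually extended on the Hugging Face side before converting to MLX. A minimal sketch, assuming the standard transformers API (the model id and the tokens below are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id and domain tokens -- substitute your own.
model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Register the new tokens and grow the embedding matrix to match
# the enlarged vocabulary.
num_added = tokenizer.add_tokens(["<domain_term_1>", "<domain_term_2>"])
model.resize_token_embeddings(len(tokenizer))

# The new embedding rows are randomly initialized, so the model still
# needs training on domain text before those tokens are useful.
tokenizer.save_pretrained("llama-extended")
model.save_pretrained("llama-extended")
```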
Yes, because a Mac Studio is much cheaper than NVIDIA hardware.
On a Mac with 192 GB of memory, I want to see whether I can train a model like Llama from scratch.
Training something like Llama from scratch on a single Mac Studio is a tall order. Llama 3 was trained on 15 trillion tokens using a cluster of 24K H100 GPUs. The difference in compute between that and a single Mac Studio is probably well over 100,000x.
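As a rough back-of-the-envelope (the per-chip throughputs below are approximate peak numbers, not measured training figures):

```python
# Approximate peak throughputs; both figures are rough.
h100_bf16_tflops = 989      # H100 SXM, dense BF16 peak (approximate)
num_h100 = 24_000           # reported Llama 3 training cluster size
m2_ultra_tflops = 27        # M2 Ultra GPU peak (approximate)

ratio = (h100_bf16_tflops * num_h100) / m2_ultra_tflops
print(f"~{ratio:,.0f}x")    # roughly 880,000x
```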
Training a much smaller model, or fine-tuning, would be much more feasible. Here is an MLX example that trains a Transformer LM from scratch: https://github.com/ml-explore/mlx-examples/tree/main/transformer_lm
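The real example is more complete, but the core training loop follows roughly the pattern sketched below (a condensed illustration, not the repository's actual code; the tiny model size and the random toy batches are placeholders):

```python
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim


class TinyTransformerLM(nn.Module):
    """A small decoder-style LM, loosely following the transformer_lm example."""

    def __init__(self, vocab_size: int, dims: int = 128,
                 num_layers: int = 2, num_heads: int = 4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dims)
        self.pe = nn.SinusoidalPositionalEncoding(dims)
        self.transformer = nn.TransformerEncoder(
            num_layers, dims, num_heads, norm_first=True
        )
        self.out_proj = nn.Linear(dims, vocab_size)

    def __call__(self, x):
        L = x.shape[1]
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.MultiHeadAttention.create_additive_causal_mask(L)
        h = self.embedding(x) + self.pe(mx.arange(L))
        h = self.transformer(h, mask)
        return self.out_proj(h)


def loss_fn(model, inputs, targets):
    logits = model(inputs)
    return nn.losses.cross_entropy(logits, targets).mean()


vocab_size, seq_len, batch_size = 256, 64, 16
model = TinyTransformerLM(vocab_size)
optimizer = optim.AdamW(learning_rate=3e-4)
loss_and_grad = nn.value_and_grad(model, loss_fn)

for step in range(100):
    # Toy random token batch; a real run would stream tokenized text here.
    batch = mx.random.randint(0, vocab_size, (batch_size, seq_len + 1))
    inputs, targets = batch[:, :-1], batch[:, 1:]
    loss, grads = loss_and_grad(model, inputs, targets)
    optimizer.update(model, grads)
    mx.eval(model.parameters(), optimizer.state)
    if step % 20 == 0:
        print(f"step {step}: loss {loss.item():.3f}")
```

The `mx.eval` call matters because MLX is lazy: it forces the parameter and optimizer-state updates to actually be computed at each step.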
Related Issues (20)
- LoRA tune IBM Granite 8B Instruct HOT 2
- Where can I get started to convert internvl model to mlx format? HOT 4
- M2 Ultra 192 GB fails to run while M3 Max 128GB can run HOT 3
- Inference shapes exception with Gemma 2 SPPO HOT 5
- Unlike the document, the code here didn't force a graph evaluation for the optimizer's parameters. HOT 1
- Can a mlx_lm.fuse model be converted to a Hugging Face model? HOT 1
- Peak mem 201 GB running on M2 Ultra 192 GB, how is this possible? HOT 1
- Quantization causing tensor shape mismatch HOT 1
- gemma-2-27b-it-4bit generate only <pad> HOT 11
- grad-checkpoint makes trained tokens increase gradually HOT 2
- DoRa training is never activated
- Finetuning gemma-2-27b-8bits error HOT 1
- Support model with mlx - stable video diffusion
- conversion of custom transformer HOT 2
- support for mamba 2 (Codestral mamba) #859
- Classification Example HOT 1
- When I use mlx-community/clip-vit-base-patch32, the bug "FileNotFoundError: No safetensors found in mlx_model" happens. HOT 1
- Support for nanogpt (and gpt-j)
- Tokenizer with bos and eos token id sharing and "[WARNING] Example already has an EOS token appended" HOT 2
- install mlx-lm version 0.16.0 : ERROR: Could not find a version that satisfies the requirement mlx-lm==0.16.0 HOT 1