Comments (6)
This is super odd; you shouldn't even need TensorFlow for this example since it's not a dependency.
Maybe try updating Transformers:
pip install -U transformers
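After updating, a quick sanity check (nothing specific to this repo, just plain Python) that the new version is actually the one being picked up:
import transformers
print(transformers.__version__)
If that still shows the old version, you may be installing into a different environment than the one you're running from.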
from mlx-examples.
I tried updating Transformers, but I still encounter the same error.
from mlx-examples.
It seems like your environment is corrupted or in a wonky state. Are you able to do anything with mlx at all? For example, what happens if you do:
import mlx.core as mx
a = mx.zeros((2, 2))  # small all-zeros array just to confirm the core package imports and runs
print(a)
Maybe try uninstalling TensorFlow and/or making a clean conda env?
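If you want to check whether a stale TensorFlow install is what's polluting things (just a guess at the cause), this tells you whether it is present in the env at all:
import importlib.util
print(importlib.util.find_spec("tensorflow"))  # None means TensorFlow is not installed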
from mlx-examples.
I also have issues running convert.py:
it requires CUDA for the flash_attn module, which is obviously not available on my M1 MacBook Pro.
Is there a way to publish the .npz weight files on Hugging Face, or another way to work around this issue?
Thanks
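For example, if the converted weights were published, I would expect something like this rough sketch to work locally without flash_attn or CUDA (the filename here is just a placeholder, not a real published file):
import mlx.core as mx
weights = mx.load("weights.npz")  # loading an .npz returns a dict of name -> array
print(list(weights)[:5])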
from mlx-examples.
I think you are using an old version of Transformers. Try: pip install -U transformers
Actually, the requirements file pins a minimum version. Make sure you run pip install -r requirements.txt
before trying to run the example. I will close this; let me know if the issue persists and I can reopen.
from mlx-examples.
That fixed the problem for me, thank you very much for the fast help!
from mlx-examples.
Related Issues (20)
- rope_scaling errors when loading Llama-3.1-8B-Instruct HOT 11
- Allocation error when running generation script HOT 4
- [FeatureRequest] support FLUX.1 text to image model HOT 6
- [Feature Request] MLX_lm: Store KV cache of computed prompts to disk to avoid re-compute in follow-up runs HOT 1
- ValueError: "DoRALinear does not yet support quantization" after DoRA fine-tuning HOT 1
- Support example for SpeechT5 model?
- Feature Request: Implement Tool Calling for llama-3.1 in MLX-LM HOT 3
- Example of an async data-loader that automatically pre-fetches batches for training HOT 1
- How to use the Apple Neural Engine with MLX? HOT 1
- mlx_lm.fuse raises RuntimeError: std::bad_cast HOT 1
- [FeatureRequest] Support Qwen2-Audio model
- Optimal way of accumulating gradients? HOT 2
- [FeatureRequest] Support for prompt fine-tuning
- MLX_lm:generate() or stream_generate() how to setting system prompt and model config? HOT 2
- Generation fails after quantization for multi-part models HOT 4
- Update whisper/convert.py to save as safetensors instead of npz
- Performance tracing and tuning code
- "Warning: 'Example already has an EOS token appended' and Increased Memory Usage with Mistral Nemo Model" HOT 6
- Mlx generate text, by default hallucinates more HOT 3
- `trust_remote_code` needed to convert InternLM-2.5 20b model HOT 2