
EmbeddedLLM

Run local LLMs on iGPU, APU, and CPU (AMD, Intel, and Qualcomm (coming soon)). The easiest way to launch an OpenAI-API-compatible server on Windows, Linux, and macOS.

Support matrix (Supported now / Under Development / On the roadmap)

  • Model architectures: Gemma, Llama *, Mistral +, Phi
  • Platforms: Linux, Windows
  • Architectures: x86, x64, Arm64
  • Hardware acceleration: CUDA, DirectML, IpexLLM, QNN, ROCm, OpenVINO

* The Llama model architecture supports similar model families such as CodeLlama, Vicuna, Yi, and more.

+ The Mistral model architecture supports similar model families such as Zephyr.

🚀 Latest News

  • [2024/06] Support Phi-3 (mini, small, medium), Phi-3-Vision-Mini, Llama-2, Llama-3, Gemma (v1), Mistral v0.3, Starling-LM, Yi-1.5.
  • [2024/06] Support vision/chat inference on iGPU, APU, CPU and CUDA.

Supported Models (Quick Start)

| Model | Parameters | Context Length | Link |
| --- | --- | --- | --- |
| Gemma-2b-Instruct v1 | 2B | 8192 | EmbeddedLLM/gemma-2b-it-onnx |
| Llama-2-7b-chat | 7B | 4096 | EmbeddedLLM/llama-2-7b-chat-int4-onnx-directml |
| Llama-2-13b-chat | 13B | 4096 | EmbeddedLLM/llama-2-13b-chat-int4-onnx-directml |
| Llama-3-8b-chat | 8B | 8192 | EmbeddedLLM/mistral-7b-instruct-v0.3-onnx |
| Mistral-7b-v0.3-instruct | 7B | 32768 | EmbeddedLLM/mistral-7b-instruct-v0.3-onnx |
| Phi-3-mini-4k-instruct-062024 | 3.8B | 4096 | EmbeddedLLM/Phi-3-mini-4k-instruct-062024-onnx |
| Phi3-mini-4k-instruct | 3.8B | 4096 | microsoft/Phi-3-mini-4k-instruct-onnx |
| Phi3-mini-128k-instruct | 3.8B | 128k | microsoft/Phi-3-mini-128k-instruct-onnx |
| Phi3-medium-4k-instruct | 14B | 4096 | microsoft/Phi-3-medium-4k-instruct-onnx-directml |
| Phi3-medium-128k-instruct | 14B | 128k | microsoft/Phi-3-medium-128k-instruct-onnx-directml |
| Openchat-3.6-8b | 8B | 8192 | EmbeddedLLM/openchat-3.6-8b-20240522-onnx |
| Yi-1.5-6b-chat | 6B | 32k | EmbeddedLLM/01-ai_Yi-1.5-6B-Chat-onnx |
| Phi-3-vision-128k-instruct | 4.2B | 128k | EmbeddedLLM/Phi-3-vision-128k-instruct-onnx |

Getting Started

Installation

From Source

  • Windows

    1. Custom setup:
    • XPU: requires an Anaconda environment: conda create -n ellm python=3.10 libuv; conda activate ellm.
    • DirectML: if you are using a conda environment, install the additional dependency: conda install conda-forge::vs2015_runtime.
    2. Install the embeddedllm package, e.g. $env:ELLM_TARGET_DEVICE='directml'; pip install -e . Supported targets: cpu, directml, cuda, and xpu.

      • DirectML: $env:ELLM_TARGET_DEVICE='directml'; pip install -e .[directml]
      • CPU: $env:ELLM_TARGET_DEVICE='cpu'; pip install -e .[cpu]
      • CUDA: $env:ELLM_TARGET_DEVICE='cuda'; pip install -e .[cuda]
      • XPU: $env:ELLM_TARGET_DEVICE='xpu'; pip install -e .[xpu]
      • With Web UI:
        • DirectML: $env:ELLM_TARGET_DEVICE='directml'; pip install -e .[directml,webui]
        • CPU: $env:ELLM_TARGET_DEVICE='cpu'; pip install -e .[cpu,webui]
        • CUDA: $env:ELLM_TARGET_DEVICE='cuda'; pip install -e .[cuda,webui]
        • XPU: $env:ELLM_TARGET_DEVICE='xpu'; pip install -e .[xpu,webui]
  • Linux

    1. Custom setup:
    • XPU: requires an Anaconda environment: conda create -n ellm python=3.10 libuv; conda activate ellm.
    • DirectML: if you are using a conda environment, install the additional dependency: conda install conda-forge::vs2015_runtime.
    2. Install the embeddedllm package, e.g. ELLM_TARGET_DEVICE='directml' pip install -e . Supported targets: cpu, directml, cuda, and xpu.

      • DirectML: ELLM_TARGET_DEVICE='directml' pip install -e .[directml]
      • CPU: ELLM_TARGET_DEVICE='cpu' pip install -e .[cpu]
      • CUDA: ELLM_TARGET_DEVICE='cuda' pip install -e .[cuda]
      • XPU: ELLM_TARGET_DEVICE='xpu' pip install -e .[xpu]
      • With Web UI:
        • DirectML: ELLM_TARGET_DEVICE='directml' pip install -e .[directml,webui]
        • CPU: ELLM_TARGET_DEVICE='cpu' pip install -e .[cpu,webui]
        • CUDA: ELLM_TARGET_DEVICE='cuda' pip install -e .[cuda,webui]
        • XPU: ELLM_TARGET_DEVICE='xpu' pip install -e .[xpu,webui]

Launch OpenAI API Compatible Server

  1. Custom Setup:

    • Ipex

      • For Intel iGPU:

        set SYCL_CACHE_PERSISTENT=1
        set BIGDL_LLM_XMX_DISABLED=1
      • For Intel Arc™ A-Series Graphics:

        set SYCL_CACHE_PERSISTENT=1
  2. ellm_server --model_path <path/to/model/weight>.

  3. Example code for connecting to the API server can be found in scripts/python. Note: run ellm_server --help to see all supported arguments.
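Once the server is running, any OpenAI-compatible client can talk to it. The sketch below builds a chat-completion request using only the Python standard library; the base URL, port, and model name are illustrative assumptions, so check ellm_server --help and the scripts in scripts/python for the actual values.

```python
import json
import urllib.request

# Assumed endpoint; adjust host/port to wherever ellm_server is listening.
BASE_URL = "http://localhost:8000/v1"

# Standard OpenAI chat-completions payload; the model name is a placeholder.
payload = {
    "model": "phi-3-mini-4k-instruct",
    "messages": [
        {"role": "user", "content": "Hello, who are you?"},
    ],
    "temperature": 0.7,
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the server is running at the assumed address:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

The commented-out urlopen call performs the actual request; the official openai Python package works equally well by pointing its base_url at the local server.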

Launch Chatbot Web UI

  1. ellm_chatbot --port 7788 --host localhost --server_port <ellm_server_port> --server_host localhost. Note: run ellm_chatbot --help to see all supported arguments.

    ![Chatbot Web UI](asset/ellm_chatbot_vid.webp)
    

Launch Model Management UI

The Model Management UI lets you download models and deploy an OpenAI-API-compatible server. It also shows the disk space required to download each model.

  1. ellm_modelui --port 6678. Note: run ellm_modelui --help to see all supported arguments.

    ![Model Management UI](asset/ellm_modelui.png)
    

Compile OpenAI-API Compatible Server into Windows Executable

  1. Install the embeddedllm package.
  2. Install PyInstaller: pip install pyinstaller.
  3. Compile the Windows executable: pyinstaller .\ellm_api_server.spec.
  4. You can find the executable in the dist\ellm_api_server directory.

Acknowledgements

EmbeddedLLM's Projects

ai-town

An MIT-licensed, deployable starter kit for building and customizing your own version of AI Town, a virtual town where AI characters live, chat, and socialize.

dspy

DSPy: The framework for programming, not prompting, foundation models.

eagle

EAGLE: Lossless Acceleration of LLM Decoding by Feature Extrapolation.

embeddedllm

EmbeddedLLM: API server for embedded-device deployment. Currently supports IpexLLM, DirectML, and CPU.

infinity-executable

Infinity is a high-throughput, low-latency REST API for serving vector embeddings, supporting a wide range of text-embedding models and frameworks.

jamaibase

Firebase for AI Agents: an open-source backend platform that puts powerful generative models at the core of your database. With managed memory and RAG capabilities, developers can easily build AI agents, enhance their apps with generative tables, and create magical UI experiences.

llava-plus-serve

LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills.

nlux-jamai

The powerful conversational AI JavaScript library.

unstructured-executable

Open-source libraries and APIs to build custom preprocessing pipelines for labeling, training, or production machine-learning pipelines.

vllm-rocm

vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs.
