localai-frontend's Introduction

Frontend WebUI for LocalAI API

This is a frontend web user interface (WebUI), built with ReactJS, that allows you to interact with AI models through a LocalAI backend API. It provides a simple and intuitive way to select and interact with the different AI models stored in the /models directory of the LocalAI folder.

Getting Started

To use the frontend WebUI, follow the steps below:

Docker method (Preferred)

Move the sample-docker-compose.yaml to docker-compose.yaml in the LocalAI directory (assuming you have already set it up), and run:

docker-compose up -d --build

That should take care of it, and you can use a reverse proxy like Apache to access it from wherever you want! Currently, both arm64 and x86 systems are supported.
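For reference, a minimal compose service for the frontend might look like the sketch below. This is an assumption based on the images referenced later in this document (dhruvgera/localai-frontend and quay.io/go-skynet/localai-frontend:master), not the exact contents of sample-docker-compose.yaml:

# Hypothetical minimal docker-compose.yaml for the frontend only;
# verify the image and tag against the published images before relying on it
version: '3.6'

services:
  frontend:
    image: quay.io/go-skynet/localai-frontend:master
    ports:
      - 3000:3000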

Alternative method

Clone the repository:

If you don't know how to do this, you probably shouldn't be here, but a minimal example is sketched after the next step.

Install dependencies:

Navigate to the cloned repository directory and install the dependencies by running npm install or yarn install, depending on your package manager of choice
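For example, assuming the repository lives under the dhruvgera account (the URL is inferred from the Docker image name used later in this document, so treat it as an assumption):

# clone the repository (URL assumed) and install dependencies
git clone https://github.com/dhruvgera/localai-frontend.git
cd localai-frontend
npm install   # or: yarn install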

Configure the backend API:

Update the API endpoint URL in the ChatGptInterface.js file to point to your LocalAI backend API.
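The exact constant inside ChatGptInterface.js may differ from this sketch; conceptually you are changing something like:

// Illustrative sketch only -- the real variable name in ChatGptInterface.js may differ.
// Point this base URL at your LocalAI instance:
const API_BASE_URL = "http://localhost:8080";

// Requests then target the OpenAI-compatible LocalAI routes, e.g.
// `${API_BASE_URL}/v1/models` to list the available models.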

Add AI models:

Place your AI models in the /models directory of the LocalAI folder. Make sure that the models are compatible with the backend API and follow the required file format and naming conventions, then start your Docker container.
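For example, with the ggml-gpt4all-j model that appears in the issues below (the file name and path are illustrative):

# copy a compatible model into LocalAI's models directory, then start the stack
cp ggml-gpt4all-j.bin /path/to/LocalAI/models/
docker-compose up -d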

Start the WebUI:

Start the development server by running npm start or yarn start, depending on your package manager of choice. This will launch the WebUI in your default web browser

Select and interact with models:

In the WebUI, you can now select the AI models from the model selection menu and interact with them using the chat interface

Features

Model selection:

The WebUI allows you to select from a list of AI models stored in the /models directory of your LocalAI installation. You can easily switch between different models and interact with them in real time.

API integration:

The WebUI connects to the LocalAI backend API to send requests and receive responses from the AI models. It uses the API calls specified by the LocalAI project, so it works with LocalAI out of the box.
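For instance, the model list shown by the UI comes from LocalAI's /v1/models route, and chat messages go to the OpenAI-compatible completion routes. The same calls can be reproduced with curl (model name taken from the examples further down):

# list available models
curl http://localhost:8080/v1/models

# send a chat message
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "ggml-gpt4all-j",
  "messages": [{"role": "user", "content": "Hello!"}]
}'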

Interactive chat interface:

The WebUI provides a chat-like interface for interacting with the AI models. You can input text and receive responses from the models in a conversational manner

Easy deployment:

The WebUI is designed to be hosted anywhere you want: run it with Docker, or edit the URL of your API endpoint, and it should work!
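As a sketch, the published image can also be run directly (image name as used in the Kubernetes example below; the port mapping mirrors the compose files in this document):

docker run -d -p 3000:3000 dhruvgera/localai-frontend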

localai-frontend's People

Contributors

dhruvgera, echedellelr, mudler, reppard, uditkarode

localai-frontend's Issues

Getting started with Docker is incomplete

You provide a sample docker-compose (nice!) and say that is the easy way to get started. However, your only step is to copy the sample docker-compose to the local-ai directory, and that example does not include any image to pull or a Dockerfile to build from. The way it is written, I believe it is only going to build the api again?

I'm sure it's an easy step, but I'm not sure what the proper step should be to get the docker compose working.

Failed running localai-frontend with kubernetes

I installed LocalAI inside Kubernetes using the Helm charts; the local-ai service is exposed to the cluster with the FQDN local-ai.default.svc.cluster.local:80. I constructed the following YAML to deploy localai-frontend:

# k8s deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: localai-frontend
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: localai-frontend
  template:
    metadata:
      labels:
        app: localai-frontend
    spec:
      containers:
      - name: localai-frontend
        image: dhruvgera/localai-frontend
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
        env:
          - name: API_HOST
            value: http://local-ai.default.svc.cluster.local:80
      restartPolicy: Always
---

# k8s service
apiVersion: v1
kind: Service
metadata:
  name: localai-frontend
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 31000
  selector:
    app: localai-frontend

Since the localai-frontend service is exposed via NodePort, I visit a random node (say 172.16.33.21) on port 31000. It doesn't seem to work:

  1. the web page creates a websocket connection to ws://172.16.33.21:3000/ws, which fails
  2. it still lists models using http://localhost:8080/v1/models

Does this project support running inside a Kubernetes cluster?

webui select model list is empty

I installed LocalAI and the WebUI with docker compose:

version: '3.6'

services:
  api:
    image: quay.io/go-skynet/local-ai:latest
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models:cached
    command: ["/usr/bin/local-ai" ]

  frontend:
    image: quay.io/go-skynet/localai-frontend:master
    ports:
      - 3000:3000

build result:

$ docker-compose up -d --pull always
[+] Running 2/2
 ✔ frontend Pulled                                                                                                 3.2s
 ✔ api Pulled                                                                                                      3.2s
[+] Building 0.0s (0/0)
[+] Running 2/0
 ✔ Container localai-frontend-1  Running                                                                           0.0s
 ✔ Container localai-api-1       Running                                                                           0.0s


$ curl http://localhost:8080/v1/models
{"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

but the WebUI shows an empty model list:

[Screenshot: the model select dropdown in the WebUI is empty]

[Feature] Image generation

It has been a while since the stablediffusion model was integrated into LocalAI.

I would like to know if it would be possible to implement this in some way, maybe a checkbox to change the action to image generation.

The chat UI itself could be kept and used to list the generated pictures.

I think this is somewhat easy to implement. I could check some things. Aside from this, Whisper would probably be the more challenging thing to add while keeping this a simple UI.

I did not check whether you are already working on image generation, though.
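For reference, LocalAI exposes an OpenAI-style image generation route, so the request such a UI toggle would need to send looks roughly like this (the payload fields follow the OpenAI image API; the exact values are illustrative):

curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
  "prompt": "a cat sitting on a keyboard",
  "size": "256x256"
}'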

LocalAI with LocalAI-frontend?

[Screenshot 2023-09-21 at 3:23:53 PM: the localai-frontend container running in the browser]

The objective would be to get your project working as an overlay onto LocalAI running separately. I commented out the LocalAI service in the docker-compose.yaml:

❯ cat docker-compose.yaml
version: '3.6'

services:
  frontend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3000:3000

❯ netstat -an | grep LISTEN
tcp46      0      0  *.3000                 *.*                    LISTEN

The docker container is up and running as shown in the above image.

The API is running as an autonomous project separately and working independently. See below:

❯ curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
  "model": "llama-2-7b-chat",
  "prompt": "What is the expected population of Ghana by the year 2100",
  "temperature": 0.7
}'

{"object":"text_completion","model":"llama-2-7b-chat","choices":[{"index":0,"finish_reason":"stop","text":"?\nlazarus May 3, 2022, 1:49pm #1\nThe population of Ghana is projected to continue growing in the coming decades. According to the United Nations Department of Economic and Social Affairs Population Division, Ghana’s population is expected to reach approximately 47 million by the year 2100. This represents a more than fivefold increase from the country’s estimated population of around 8.5 million in 2020.\nHowever, it is important to note that population projections are subject to uncertainty and can be influenced by various factors such as fertility rates, mortality rates, and migration patterns. Therefore, actual population growth may differ from projected values."}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

My question is: how do I get the "Select Model" and "Model Gallery" features to integrate effectively with the LocalAI project when it runs separately and is not directly integrated into your project? Is this possible?

I love the project concept of being able to change "model" and have "model galleries".
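One possible direction is the API_HOST environment variable used in the Kubernetes deployment above; a compose sketch pointing the frontend at a separately running API might look like the following (whether the frontend reads this variable at runtime rather than at build time is an assumption worth verifying):

# hypothetical compose file pointing the frontend at an external LocalAI API
services:
  frontend:
    image: dhruvgera/localai-frontend
    ports:
      - 3000:3000
    environment:
      - API_HOST=http://localhost:8080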
