
genai-stack's Introduction

GenAI Stack

The GenAI Stack will get you started building your own GenAI application in no time. The demo applications can serve as inspiration or as a starting point. Learn more about the details in the technical blog post.

Configure

Create a .env file from the environment template file env.example

Available variables:

| Variable Name | Default value | Description |
| --- | --- | --- |
| OLLAMA_BASE_URL | http://host.docker.internal:11434 | REQUIRED - URL to Ollama LLM API |
| NEO4J_URI | neo4j://database:7687 | REQUIRED - URL to Neo4j database |
| NEO4J_USERNAME | neo4j | REQUIRED - Username for Neo4j database |
| NEO4J_PASSWORD | password | REQUIRED - Password for Neo4j database |
| LLM | llama2 | REQUIRED - Can be any Ollama model tag, or gpt-4 or gpt-3.5 or claudev2 |
| EMBEDDING_MODEL | sentence_transformer | REQUIRED - Can be sentence_transformer, openai, aws, ollama or google-genai-embedding-001 |
| AWS_ACCESS_KEY_ID | | REQUIRED - Only if LLM=claudev2 or embedding_model=aws |
| AWS_SECRET_ACCESS_KEY | | REQUIRED - Only if LLM=claudev2 or embedding_model=aws |
| AWS_DEFAULT_REGION | | REQUIRED - Only if LLM=claudev2 or embedding_model=aws |
| OPENAI_API_KEY | | REQUIRED - Only if LLM=gpt-4 or LLM=gpt-3.5 or embedding_model=openai |
| GOOGLE_API_KEY | | REQUIRED - Only required when using GoogleGenai LLM or embedding model google-genai-embedding-001 |
| LANGCHAIN_ENDPOINT | "https://api.smith.langchain.com" | OPTIONAL - URL to Langchain Smith API |
| LANGCHAIN_TRACING_V2 | false | OPTIONAL - Enable Langchain tracing v2 |
| LANGCHAIN_PROJECT | | OPTIONAL - Langchain project name |
| LANGCHAIN_API_KEY | | OPTIONAL - Langchain API key |
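
For orientation, a minimal .env for the default local Ollama setup might look like this (the values are simply the defaults from the table above; adjust for your environment):

OLLAMA_BASE_URL=http://host.docker.internal:11434
NEO4J_URI=neo4j://database:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=password
LLM=llama2
EMBEDDING_MODEL=sentence_transformer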

LLM Configuration

macOS and Linux users can use any LLM that's available via Ollama. Check the "tags" section on the page of the model you want to use at https://ollama.ai/library and set the tag as the value of the LLM environment variable in the .env file. All platforms can use GPT-3.5-turbo and GPT-4 (bring your own API keys for OpenAI models).

macOS: Install Ollama on macOS and start it with ollama serve in a separate terminal before running docker compose up.

Linux: No need to install Ollama manually; it runs in a container as part of the stack when using the Linux profile: run docker compose --profile linux up. Make sure to set OLLAMA_BASE_URL=http://llm:11434 in the .env file when using the Ollama Docker container.

To use the Linux-GPU profile, run docker compose --profile linux-gpu up and change OLLAMA_BASE_URL=http://llm-gpu:11434 in the .env file.

Windows: Ollama now supports Windows. Install Ollama on Windows and start it with ollama serve in a separate terminal before running docker compose up. Alternatively, Windows users can generate an OpenAI API key and configure the stack to use gpt-3.5 or gpt-4 in the .env file.

Develop

Warning

There is a performance issue that impacts Python applications in the 4.24.x releases of Docker Desktop. Please upgrade to the latest release before using this stack.

To start everything

docker compose up

If changes to the build scripts have been made, rebuild:

docker compose up --build

To enter watch mode (auto rebuild on file changes), first start everything, then in a new terminal:

docker compose watch

Shutdown: If the health check fails or containers don't start up as expected, shut down completely before starting up again.

docker compose down

Applications

Here's what's in this repo:

| Name | Main files | Compose name | URLs | Description |
| --- | --- | --- | --- | --- |
| Support Bot | bot.py | bot | http://localhost:8501 | Main use case. Full-stack Python application. |
| Stack Overflow Loader | loader.py | loader | http://localhost:8502 | Load SO data into the database (create vector embeddings etc.). Full-stack Python application. |
| PDF Reader | pdf_bot.py | pdf_bot | http://localhost:8503 | Read a local PDF and ask it questions. Full-stack Python application. |
| Standalone Bot API | api.py | api | http://localhost:8504 | Standalone HTTP API with streaming (SSE) and non-streaming endpoints. Python. |
| Standalone Bot UI | front-end/ | front-end | http://localhost:8505 | Standalone client that uses the Standalone Bot API to interact with the model. JavaScript (Svelte) front-end. |

The database can be explored at http://localhost:7474.

App 1 - Support Agent Bot

UI: http://localhost:8501 DB client: http://localhost:7474

  • answer support questions based on recent entries
  • provide summarized answers with sources
  • demonstrate the difference between
    • RAG Disabled (pure LLM response)
    • RAG Enabled (vector + knowledge graph context)
  • generate a high-quality support ticket draft for the current conversation, based on the style of highly rated questions in the database

(Chat input + RAG mode selector)

(CTA to auto generate support ticket draft) (UI of the auto generated support ticket draft)

App 2 - Loader

UI: http://localhost:8502 DB client: http://localhost:7474

  • import recent Stack Overflow data for certain tags into a KG
  • embed questions and answers and store them in vector index
  • UI: choose tags, run the import, see progress and some stats of the data in the database
  • load highly ranked questions (regardless of tags) to support the ticket generation feature of App 1

App 3 - Question / Answer with a local PDF

UI: http://localhost:8503
DB client: http://localhost:7474

This application lets you load a local PDF, split it into text chunks, and embed those chunks into Neo4j, so you can ask questions about its contents and have the LLM answer them using vector similarity search.
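
In rough outline, the flow looks like the sketch below (not the app's exact code; the file name, embedding model, and credentials are placeholders, and it assumes LangChain's Neo4jVector helper and pypdf):

from pypdf import PdfReader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.vectorstores import Neo4jVector

# extract raw text from the PDF
text = "".join(page.extract_text() for page in PdfReader("manual.pdf").pages)

# split into overlapping chunks so each piece fits the embedding model
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_text(text)

# embed the chunks and store them in a Neo4j vector index
store = Neo4jVector.from_texts(
    chunks,
    SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2"),
    url="neo4j://localhost:7687",
    username="neo4j",
    password="password",
)

# retrieve the most similar chunks for a question
print(store.similarity_search("What does chapter 2 cover?", k=2))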

App 4 - Standalone HTTP API

Endpoints:

Example cURL command:

curl "http://localhost:8504/query-stream?text=minimal%20hello%20world%20in%20python&rag=false"

Exposes the functionality to answer questions in the same way as App 1 above. Uses the same code and prompts.
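
For a programmatic client, here is a small Python sketch of reading the stream (the exact event payload format is an assumption; print the raw lines first to inspect it):

import requests

params = {"text": "minimal hello world in python", "rag": "false"}
with requests.get("http://localhost:8504/query-stream", params=params, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line:  # skip the blank lines that separate SSE events
            print(line)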

App 5 - Static front-end

UI: http://localhost:8505

This application has the same features as App 1, but is built separately from the back-end code using modern best practices (Vite, Svelte, Tailwind).
Auto-reload on changes is instant thanks to the Docker watch sync config.


genai-stack's Issues

Connection issue on App 1 and App 3

Am I missing a LangChain API key here, please?

Details:

ConnectionError: HTTPConnectionPool(host='host.docker.internal', port=11434): Max retries exceeded with url: /api/generate/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fcea8c858d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
Traceback:
File "/usr/local/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 541, in _run_script
exec(code, module.dict)
File "/app/pdf_bot.py", line 95, in
main()
File "/app/pdf_bot.py", line 91, in main
qa.run(query, callbacks=[stream_handler])
File "/usr/local/lib/python3.11/site-packages/langchain/chains/b

MacBook M1 Streamlit

File "/usr/local/bin/streamlit", line 5, in
from streamlit.cli import main
File "/usr/local/lib/python3.11/site-packages/streamlit/init.py", line 48, in
.....
TypeError: Descriptors cannot not be created directly.
genai-stack-loader-1 | If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
genai-stack-loader-1 | If you cannot immediately regenerate your protos, some other possible workarounds are:
genai-stack-loader-1 | 1. Downgrade the protobuf package to 3.20.x or lower.
genai-stack-loader-1 | 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

all 3 deployments have this problem
genai-stack-bot
genai-stack-pdf_bot
genai-stack-loader

MacBook M1.
Streamlit has always caused problems on M1.
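
If you go with workaround 1 above, the downgrade is a single pip command inside the affected environment (the exact pin is a suggestion; anything in the 3.20.x line satisfies the message):

pip install "protobuf==3.20.3"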

http: invalid host header on docker-compose up

Running Ubuntu 20.04 Focal

$ docker-compose --profile linux up

generates:
[+] Running 1/1
! pull-model Warning 1.4s
[+] Building 0.0s (0/0)
http: invalid Host header

Hoping for suggestions.

No Contributing guidelines

Requesting to add a CONTRIBUTING.md file with guidelines for new contributors to follow.

I would like to work on this issue.

host.docker.internal doesn't resolve on Linux

As of my version of Docker:

Client: Docker Engine - Community
 Version:           24.0.6
 API version:       1.43
 Go version:        go1.20.7
 Git commit:        ed223bc
 Built:             Mon Sep  4 12:31:44 2023
 OS/Arch:           linux/amd64
 Context:           default

when launching the stack with docker compose --profile linux up, the pull-model service fails to resolve host.docker.internal:

genai-stack-pull-model-1  | pulling ollama model llama2 using http://host.docker.internal:11434
genai-stack-pull-model-1  | Error: Head "http://host.docker.internal:11434/": dial tcp: lookup host.docker.internal on 127.0.0.11:53: no such host

There are multiple solutions:

  1. Add to the docker-compose for pull-model and each bot:
extra_hosts:
      - "host.docker.internal:host-gateway"
  2. We can just change OLLAMA_BASE_URL=http://llm:11434, but then we don't need the port mapping:
ports:
      - 11434:11434

inside of the llm service.

I feel like the second option is more natural; should the port mapping then be removed to avoid confusion, as the service is targeted at Linux?
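
For reference, option 1 applied to one of the services might look like this sketch (not the repo's exact compose file; the service and image names are taken from elsewhere on this page):

services:
  pull-model:
    image: genai-stack/pull-model:1.0
    environment:
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL}
    extra_hosts:
      - "host.docker.internal:host-gateway"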

A problem for Mac?

Container genai-stack-pull-model-1: service "pull-model" didn't complete successfully (exit 1). This showed when I typed docker compose up -d.

Trying to modify the PDF reader with "Sources" information

Hi,

I wanted to modify the PDF bot slightly by removing the automatic clean-up of the previous information, so that I can load several PDFs and run questions across them.
It works in simple terms, but I'm struggling a bit with how to add "Source" information to the Neo4j graph so it can be used as part of the answer. The source could be as simple as the name of the file.

Any help from anyone?

401 Client Error: Unauthorized for url: https://api.smith.langchain.com/runs

Hi, I am experiencing this error when using App 1 - Support Agent Bot

genai-stack-bot-1 | 2023-10-09 16:20:10.140 Embedding: Using SentenceTransformer
genai-stack-bot-1 | 2023-10-09 16:20:11.748 LLM: Using Ollama: llama2
genai-stack-bot-1 | 2023-10-09 16:21:05.271 Embedding: Using SentenceTransformer
genai-stack-bot-1 | 2023-10-09 16:21:05.472 LLM: Using Ollama: llama2
genai-stack-bot-1 | Failed to post https://api.smith.langchain.com/runs in LangSmith API. 401 Client Error: Unauthorized for url: https://api.smith.langchain.com/runs
genai-stack-bot-1 | {"detail":"Invalid auth"}
genai-stack-bot-1 | 2023-10-09 16:21:33.497 Embedding: Using SentenceTransformer
genai-stack-bot-1 | 2023-10-09 16:21:33.513 LLM: Using Ollama: llama2
genai-stack-bot-1 | Failed to patch https://api.smith.langchain.com/runs/aa5497ad-6d58-4cbe-8a93-78acfa487f90 in LangSmith API. 401 Client Error: Unauthorized for url: https://api.smith.langchain.com/runs/aa5497ad-6d58-4cbe-8a93-78acfa487f90
genai-stack-bot-1 | {"detail":"Invalid auth"}

This is my .env file:

#OPENAI_API_KEY=sk-...
#OLLAMA_BASE_URL=http://host.docker.internal:11434
#NEO4J_URI=neo4j://localhost:7687
#NEO4J_USERNAME=neo4j
#NEO4J_PASSWORD=password
LLM=llama2 #or any Ollama model tag, or gpt-4 or gpt-3.5
EMBEDDING_MODEL=sentence_transformer #or openai or ollama

LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_TRACING_V2=true # false
LANGCHAIN_PROJECT=#your-project-name
LANGCHAIN_API_KEY=#your-api-key ls_...

I don't have a LANGCHAIN_API_KEY in my case. It would be nice if the readme explained these variables in more detail: whether they must have values, and so on.

Request - detailed steps using other LLM model than the default llama2

The write-up at https://neo4j.com/developer-blog/genai-app-how-to-build/ and the readme.md in the repo say to just add the tag of the LLM model you want to use in the .env file, implying that the scripts will fetch, add, and use any model automatically.

However, is the new model downloaded and installed when running docker compose with "build" and then "up", or just at "up"?

The reason I ask: I changed the LLM parameter in my .env to "llama2-uncensored:7b"; "build" seemed to do something very quickly, but on "up" only some of the containers came up and the services never became available. There did not seem to be any download of the new model requested in the .env.

When I changed back to "llama2" again and ran "build" then "up", the services started as per normal.

Is there perhaps a manual step missing in the documentation about how to fetch, install and use some other LLM model?

Unable to Find libnvidia-ml.so.1 When Using "docker compose linux-gpu up"

Here is the result of my command. Is this error inside the container or outside? The weird part to me is:

genai-stack-pull-model-1 | pulling ollama model llama2 using http://llm-gpu:11434

The docs told me to add that URL to the .env file. However, I certainly don't have a server running there.

$ docker compose --profile linux-gpu up
WARN[0000] The "LANGCHAIN_PROJECT" variable is not set. Defaulting to a blank string. 
WARN[0000] The "LANGCHAIN_API_KEY" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_ACCESS_KEY_ID" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_SECRET_ACCESS_KEY" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_DEFAULT_REGION" variable is not set. Defaulting to a blank string. 
WARN[0000] The "LANGCHAIN_PROJECT" variable is not set. Defaulting to a blank string. 
WARN[0000] The "LANGCHAIN_API_KEY" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_ACCESS_KEY_ID" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_SECRET_ACCESS_KEY" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_DEFAULT_REGION" variable is not set. Defaulting to a blank string. 
WARN[0000] The "LANGCHAIN_PROJECT" variable is not set. Defaulting to a blank string. 
WARN[0000] The "LANGCHAIN_API_KEY" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_ACCESS_KEY_ID" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_SECRET_ACCESS_KEY" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_DEFAULT_REGION" variable is not set. Defaulting to a blank string. 
WARN[0000] The "OPENAI_API_KEY" variable is not set. Defaulting to a blank string. 
WARN[0000] The "LANGCHAIN_PROJECT" variable is not set. Defaulting to a blank string. 
WARN[0000] The "LANGCHAIN_API_KEY" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_ACCESS_KEY_ID" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_SECRET_ACCESS_KEY" variable is not set. Defaulting to a blank string. 
WARN[0000] The "AWS_DEFAULT_REGION" variable is not set. Defaulting to a blank string. 
[+] Running 4/4
 ✔ llm-gpu 3 layers [⣿⣿⣿]      0B/0B      Pulled                                                                                                        1.3s 
   ✔ aece8493d397 Already exists                                                                                                                        0.0s 
   ✔ 3b9196308e0f Already exists                                                                                                                        0.0s 
   ✔ e75cbce7870b Already exists                                                                                                                        0.0s 
[+] Building 0.0s (0/0)                                                                                                                 docker:desktop-linux
[+] Running 8/8
 ✔ Container genai-stack-llm-gpu-1     Created                                                                                                          0.0s 
 ✔ Container genai-stack-database-1    Running                                                                                                          0.0s 
 ✔ Container genai-stack-pull-model-1  Recreated                                                                                                        0.1s 
 ✔ Container genai-stack-api-1         Recreated                                                                                                        0.1s 
 ✔ Container genai-stack-bot-1         Recreated                                                                                                        0.1s 
 ✔ Container genai-stack-pdf_bot-1     Recreated                                                                                                        0.1s 
 ✔ Container genai-stack-loader-1      Recreated                                                                                                        0.1s 
 ✔ Container genai-stack-front-end-1   Recreated                                                                                                        0.1s 
Attaching to genai-stack-api-1, genai-stack-bot-1, genai-stack-database-1, genai-stack-front-end-1, genai-stack-llm-gpu-1, genai-stack-loader-1, genai-stack-pdf_bot-1, genai-stack-pull-model-1
genai-stack-pull-model-1  | pulling ollama model llama2 using http://llm-gpu:11434
genai-stack-pull-model-1  | Error: Head "http://llm-gpu:11434/": dial tcp 172.18.0.4:11434: connect: no route to host
genai-stack-pull-model-1 exited with code 1
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown
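
The libnvidia-ml.so.1 failure at the end usually means the NVIDIA Container Toolkit is not installed, or not registered with Docker, on the host. A sketch of the usual Ubuntu fix, following NVIDIA's install docs (assumes NVIDIA's apt repository is already configured):

# install the toolkit
sudo apt-get install -y nvidia-container-toolkit
# register the nvidia runtime with Docker, then restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker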

Pulling ollama model llama2: Error: accepts 1 arg(s), received 10

While trying to get the GenAI stack up (docker compose up --build) I am getting the error:

genai-stack-pull-model-1 | pulling ollama model llama2 #or any Ollama model tag, gpt-4, gpt-3.5, or claudev2 using http://host.docker.internal:11434
genai-stack-pull-model-1 | Error: accepts 1 arg(s), received 10
genai-stack-pull-model-1 exited with code 1

My .env file is basically (everything else is commented out):
LLM=llama2 #or any Ollama model tag, gpt-4, gpt-3.5, or claudev2
EMBEDDING_MODEL=sentence_transformer #or openai, ollama, or aws
OPENAI_API_KEY= ##MY-API-KEY>##

Getting the same error for gpt-4 (LLM=gpt-4 and EMBEDDING_MODEL=openai):
genai-stack-pull-model-1 | pulling ollama model gpt-4 #or any Ollama model tag, gpt-4, gpt-3.5, or claudev2 using http://host.docker.internal:11434
genai-stack-pull-model-1 | Error: accepts 1 arg(s), received 10
genai-stack-pull-model-1 exited with code 1

I'm running Docker on Windows 11.

Any thoughts?

Thank you in advance!
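
One detail stands out in the log above: the pull message echoes the inline comment ("llama2 #or any Ollama model tag, ..."), and "received 10" matches the ten whitespace-separated words on that .env line. That suggests the trailing comment is being passed to the pull script as extra arguments. A hedged fix is to move comments onto their own lines:

# or any Ollama model tag, gpt-4, gpt-3.5, or claudev2
LLM=llama2
# or openai, ollama, or aws
EMBEDDING_MODEL=sentence_transformer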

Database can start in an unusable state

Repro steps:

  • docker compose --profile linux up
  • docker logs genai-stack-database-1

The logs for the database are as follows:

Warning: Folder mounted to "/data" is not writable from inside container. Changing folder owner to neo4j.
Installing Plugin 'apoc' from /var/lib/neo4j/labs/apoc-*-core.jar to /var/lib/neo4j/plugins/apoc.jar
Applying default values for plugin apoc to neo4j.conf
Changed password for user 'neo4j'. IMPORTANT: this change will only take effect if performed before the database is started for the first time.
2023-12-05 05:37:27.750+0000 INFO  Starting...
2023-12-05 05:40:09.729+0000 INFO  This instance is ServerId{94dc12f2} (94dc12f2-5c28-424e-bd15-52ab91daea76)
2023-12-05 05:42:24.354+0000 INFO  ======== Neo4j 5.11.0 ========
2023-12-05 05:43:53.711+0000 INFO  Bolt enabled on 0.0.0.0:7687.
[main] INFO org.eclipse.jetty.server.Server - jetty-10.0.15; built: 2023-04-11T17:25:14.480Z; git: 68017dbd00236bb7e187330d7585a059610f661d; jvm 17.0.8.1+1
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.h.MovedContextHandler@58a5b377{/,null,AVAILABLE}
[main] INFO org.eclipse.jetty.server.session.DefaultSessionIdManager - Session workerName=node0
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@7b3b791b{/db,null,AVAILABLE}
[main] INFO org.eclipse.jetty.webapp.StandardDescriptorProcessor - NO JSP Support for /browser, did not find org.eclipse.jetty.jsp.JettyJspServlet
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@7c9a5397{/browser,jar:file:/var/lib/neo4j/lib/neo4j-browser-5.11.0.jar!/browser,AVAILABLE}
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@cfcab77{/,null,AVAILABLE}
[main] INFO org.eclipse.jetty.server.AbstractConnector - Started http@6f32cfff{HTTP/1.1, (http/1.1)}{0.0.0.0:7474}
[main] INFO org.eclipse.jetty.server.Server - Started Server@2b728f15{STARTING}[10.0.15,sto=0] @403387ms
2023-12-05 05:44:00.241+0000 INFO  Remote interface available at http://localhost:7474/
2023-12-05 05:44:00.329+0000 INFO  id: C1E8BC7A33D8DA6C8188EB861D3F8CF17498734D854674F99BCC94F766724657
2023-12-05 05:44:00.343+0000 INFO  name: system
2023-12-05 05:44:00.346+0000 INFO  creationDate: 2023-12-05T05:42:53.489Z
2023-12-05 05:44:00.350+0000 INFO  Started.

where the database had this output for docker ps:

CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS                    PORTS                                                      NAMES
451f2c8ae053   neo4j:5.11             "tini -g -- /startup…"   12 minutes ago   Up 11 minutes (healthy)   0.0.0.0:7474->7474/tcp, 7473/tcp, 0.0.0.0:7687->7687/tcp   genai-stack-database-1

How to use the neo4j database and vector index with imported turtle files for GenAI?

When I imported a turtle model into the Neo4j database and started asking questions about the file, I did not get the answers that I wanted; even when asked directly to describe a URI, it could not. In fact, the answers are worse than feeding the text of the turtle file directly to the LLM.
The model import is through n10s. Cypher queries on Neo4j work fine, and Cypher queries also work at http://localhost:8505. Any questions other than queries cannot get logical answers from the bot. Before GenAI, I also tried Ollama with LangChain to read it as text; it worked fine except that the model cannot understand the relationships in the semantic web, which is why I turned to GenAI. The idea of RAG is exactly the way I want to guide the LLM toward domain knowledge deduction.
Maybe I am approaching it in the wrong way; the turtle file is from Brick Schema.

Pulling manifest......

Every time I start the genai-stack using "docker compose --profile linux-gpu up", I see the following stack of containers fired up:

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a25d59ab30b3 ollama/ollama:latest "/bin/ollama serve" 8 minutes ago Up 8 minutes 11434/tcp genai-stack-llm-gpu-1
961a9c47f33f genai-stack-front-end "npm run dev" 10 minutes ago Up 54 seconds 0.0.0.0:8505->8505/tcp, :::8505->8505/tcp genai-stack-front-end-1
9f79929d20ca genai-stack-bot "streamlit run bot.p…" 10 minutes ago Up About a minute (healthy) 0.0.0.0:8501->8501/tcp, :::8501->8501/tcp genai-stack-bot-1
640e183a4b1f genai-stack-api "uvicorn api:app --h…" 10 minutes ago Up About a minute (healthy) 0.0.0.0:8504->8504/tcp, :::8504->8504/tcp genai-stack-api-1
e8c060ee60a1 genai-stack-loader "streamlit run loade…" 10 minutes ago Up About a minute (health: starting) 0.0.0.0:8502->8502/tcp, :::8502->8502/tcp, 0.0.0.0:8081->8080/tcp, :::8081->8080/tcp genai-stack-loader-1
9966fe097a20 genai-stack-pdf_bot "streamlit run pdf_b…" 10 minutes ago Up About a minute (healthy) 0.0.0.0:8503->8503/tcp, :::8503->8503/tcp genai-stack-pdf_bot-1
86e9733bb4ba genai-stack/pull-model:1.0 "bb -f pull_model.clj" 10 minutes ago Exited (0) About a minute ago genai-stack-pull-model-1
ea48c6568915 neo4j:5.11 "tini -g -- /startup…" 10 minutes ago Up 8 minutes (healthy) 0.0.0.0:7474->7474/tcp, :::7474->7474/tcp, 7473/tcp, 0.0.0.0:7687->7687/tcp, :::7687->7687/tcp genai-stack-database-1
f23d832a09e8 ollama/ollama:latest "/bin/ollama serve" 10 minutes ago Exited (0) 9 minutes ago 11434/tcp genai-stack-llm-1

But it repeats "pulling manifest" every time.
Am I missing something here?
Once the model pull is completed in a running docker stack instance, should I commit any of the running docker containers?

Warm Regards............Kannan Rama
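
A hedged note on the repeated pulls: the Ollama container stores models under /root/.ollama (the path is visible in other logs on this page), so unless a volume is mounted there, a recreated container starts empty and pulls the manifest again. Committing containers should not be necessary; a named-volume sketch (volume name assumed):

services:
  llm-gpu:
    image: ollama/ollama:latest
    volumes:
      - ollama_models:/root/.ollama   # persist pulled models across container recreation
volumes:
  ollama_models: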

GPU support may not be enabled

"routes.go:634: Warning: GPU support may not enabled, check you have installed install GPU drivers: nvidia-smi command failed"
NVIDIA-SMI 535.86.10 Driver Version: 535.86.10 CUDA Version: 12.2
5.15.0-86-generic #96-Ubuntu
docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi:
NVIDIA-SMI 535.86.10 Driver Version: 535.86.10 CUDA Version: 12.2

genai-stack-llm-gpu-1: ggml-cuda.cu is out of memory, raises SIGABRT

When running a query through the pdf_bot container, the Ollama container throws a SIGABRT if it is built with Nvidia GPU support. The error indicates an out-of-memory condition.

From the streamlit web UI:

(screenshot of the error)

From the console:

genai-stack-pdf_bot-1     | 2024-01-06 20:15:25.882 Embedding: Using SentenceTransformer
genai-stack-pdf_bot-1     | 2024-01-06 20:15:25.882 LLM: Using Ollama: llama2
genai-stack-pdf_bot-1     | /usr/local/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
genai-stack-pdf_bot-1     |   warn_deprecated(
genai-stack-pdf_bot-1     | /usr/local/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
genai-stack-pdf_bot-1     |   warn_deprecated(
genai-stack-pdf_bot-1     | /usr/local/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
genai-stack-pdf_bot-1     |   warn_deprecated(
genai-stack-pdf_bot-1     | /usr/local/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
genai-stack-pdf_bot-1     |   warn_deprecated(
genai-stack-pdf_bot-1     | /usr/local/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
genai-stack-pdf_bot-1     |   warn_deprecated(
genai-stack-api-1         | INFO:     127.0.0.1:60418 - "GET / HTTP/1.1" 200 OK
genai-stack-llm-gpu-1     | 2024/01/06 20:15:28 shim_ext_server_linux.go:24: Updating PATH to /usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/ollama1468013993/cuda
genai-stack-llm-gpu-1     | 2024/01/06 20:15:28 shim_ext_server.go:92: Loading Dynamic Shim llm server: /tmp/ollama1468013993/cuda/libext_server.so
genai-stack-llm-gpu-1     | 2024/01/06 20:15:28 gpu.go:146: 4031 MB VRAM available, loading up to 26 cuda GPU layers out of 32
genai-stack-llm-gpu-1     | 2024/01/06 20:15:28 ext_server_common.go:143: Initializing internal llama server
genai-stack-llm-gpu-1     | ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
genai-stack-llm-gpu-1     | ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
genai-stack-llm-gpu-1     | ggml_init_cublas: found 1 CUDA devices:
genai-stack-llm-gpu-1     |   Device 0: NVIDIA GeForce GTX 1050 Ti, compute capability 6.1
genai-stack-llm-gpu-1     | llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
genai-stack-llm-gpu-1     | llama_model_loader: - tensor    0:                token_embd.weight q4_0     [  4096, 32000,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor    1:           blk.0.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor    2:            blk.0.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor    3:            blk.0.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor    4:              blk.0.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor    5:            blk.0.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor    6:              blk.0.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor    7:         blk.0.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor    8:              blk.0.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor    9:              blk.0.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   10:           blk.1.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   11:            blk.1.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   12:            blk.1.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   13:              blk.1.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   14:            blk.1.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   15:              blk.1.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   16:         blk.1.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   17:              blk.1.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   18:              blk.1.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   19:          blk.10.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   20:           blk.10.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   21:           blk.10.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   22:             blk.10.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   23:           blk.10.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   24:             blk.10.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   25:        blk.10.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   26:             blk.10.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   27:             blk.10.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   28:          blk.11.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   29:           blk.11.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   30:           blk.11.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   31:             blk.11.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   32:           blk.11.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   33:             blk.11.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   34:        blk.11.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   35:             blk.11.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   36:             blk.11.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   37:          blk.12.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   38:           blk.12.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   39:           blk.12.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   40:             blk.12.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   41:           blk.12.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   42:             blk.12.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   43:        blk.12.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   44:             blk.12.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   45:             blk.12.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   46:          blk.13.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   47:           blk.13.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   48:           blk.13.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   49:             blk.13.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   50:           blk.13.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   51:             blk.13.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   52:        blk.13.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   53:             blk.13.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   54:             blk.13.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   55:          blk.14.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   56:           blk.14.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   57:           blk.14.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   58:             blk.14.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   59:           blk.14.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   60:             blk.14.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   61:        blk.14.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   62:             blk.14.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   63:             blk.14.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   64:          blk.15.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   65:           blk.15.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   66:           blk.15.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   67:             blk.15.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   68:           blk.15.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   69:             blk.15.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   70:        blk.15.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   71:             blk.15.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   72:             blk.15.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   73:          blk.16.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   74:           blk.16.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   75:           blk.16.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   76:             blk.16.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   77:           blk.16.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   78:             blk.16.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   79:        blk.16.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   80:             blk.16.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   81:             blk.16.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   82:          blk.17.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   83:           blk.17.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   84:           blk.17.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   85:             blk.17.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   86:           blk.17.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   87:             blk.17.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   88:        blk.17.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   89:             blk.17.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   90:             blk.17.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   91:          blk.18.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   92:           blk.18.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   93:           blk.18.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   94:             blk.18.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   95:           blk.18.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   96:             blk.18.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   97:        blk.18.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   98:             blk.18.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor   99:             blk.18.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  100:          blk.19.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  101:           blk.19.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  102:           blk.19.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  103:             blk.19.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  104:           blk.19.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  105:             blk.19.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  106:        blk.19.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  107:             blk.19.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  108:             blk.19.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  109:           blk.2.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  110:            blk.2.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  111:            blk.2.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  112:              blk.2.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  113:            blk.2.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  114:              blk.2.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  115:         blk.2.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  116:              blk.2.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  117:              blk.2.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  118:          blk.20.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  119:           blk.20.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  120:           blk.20.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  121:             blk.20.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  122:           blk.20.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  123:             blk.20.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  124:        blk.20.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  125:             blk.20.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  126:             blk.20.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  127:          blk.21.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  128:           blk.21.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  129:           blk.21.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  130:             blk.21.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  131:           blk.21.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  132:             blk.21.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  133:        blk.21.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  134:             blk.21.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  135:             blk.21.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  136:          blk.22.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  137:           blk.22.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  138:           blk.22.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  139:             blk.22.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  140:           blk.22.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  141:             blk.22.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  142:        blk.22.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  143:             blk.22.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  144:             blk.22.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  145:          blk.23.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  146:           blk.23.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  147:           blk.23.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  148:             blk.23.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  149:           blk.23.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  150:             blk.23.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  151:        blk.23.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  152:             blk.23.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  153:             blk.23.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  154:           blk.3.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  155:            blk.3.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  156:            blk.3.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  157:              blk.3.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  158:            blk.3.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  159:              blk.3.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  160:         blk.3.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  161:              blk.3.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  162:              blk.3.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  163:           blk.4.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  164:            blk.4.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  165:            blk.4.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  166:              blk.4.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  167:            blk.4.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  168:              blk.4.attn_k.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  169:         blk.4.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  170:              blk.4.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  171:              blk.4.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  172:           blk.5.attn_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  173:            blk.5.ffn_down.weight q4_0     [ 11008,  4096,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  174:            blk.5.ffn_gate.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  175:              blk.5.ffn_up.weight q4_0     [  4096, 11008,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  176:            blk.5.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
[... remaining tensor listing (blk.6 through blk.31 plus output.weight, same q4_0/f32 pattern) elided ...]
genai-stack-llm-gpu-1     | llama_model_loader: - tensor  290:               output_norm.weight f32      [  4096,     1,     1,     1 ]
genai-stack-llm-gpu-1     | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
genai-stack-llm-gpu-1     | llama_model_loader: - kv   0:                       general.architecture str              = llama
genai-stack-llm-gpu-1     | llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
genai-stack-llm-gpu-1     | llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
genai-stack-llm-gpu-1     | llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
genai-stack-llm-gpu-1     | llama_model_loader: - kv   4:                          llama.block_count u32              = 32
genai-stack-llm-gpu-1     | llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
genai-stack-llm-gpu-1     | llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
genai-stack-llm-gpu-1     | llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
genai-stack-llm-gpu-1     | llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
genai-stack-llm-gpu-1     | llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
genai-stack-llm-gpu-1     | llama_model_loader: - kv  10:                          general.file_type u32              = 2
genai-stack-llm-gpu-1     | llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
genai-stack-llm-gpu-1     | llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
genai-stack-llm-gpu-1     | llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
genai-stack-llm-gpu-1     | llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
genai-stack-llm-gpu-1     | llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
genai-stack-llm-gpu-1     | llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
genai-stack-llm-gpu-1     | llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
genai-stack-llm-gpu-1     | llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
genai-stack-llm-gpu-1     | llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = true
genai-stack-llm-gpu-1     | llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
genai-stack-llm-gpu-1     | llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% if messages[0]['role'] == 'system'...
genai-stack-llm-gpu-1     | llama_model_loader: - kv  22:               general.quantization_version u32              = 2
genai-stack-llm-gpu-1     | llama_model_loader: - type  f32:   65 tensors
genai-stack-llm-gpu-1     | llama_model_loader: - type q4_0:  225 tensors
genai-stack-llm-gpu-1     | llama_model_loader: - type q6_K:    1 tensors
genai-stack-llm-gpu-1     | llm_load_vocab: special tokens definition check successful ( 259/32000 ).
genai-stack-llm-gpu-1     | llm_load_print_meta: format           = GGUF V3 (latest)
genai-stack-llm-gpu-1     | llm_load_print_meta: arch             = llama
genai-stack-llm-gpu-1     | llm_load_print_meta: vocab type       = SPM
genai-stack-llm-gpu-1     | llm_load_print_meta: n_vocab          = 32000
genai-stack-llm-gpu-1     | llm_load_print_meta: n_merges         = 0
genai-stack-llm-gpu-1     | llm_load_print_meta: n_ctx_train      = 4096
genai-stack-llm-gpu-1     | llm_load_print_meta: n_embd           = 4096
genai-stack-llm-gpu-1     | llm_load_print_meta: n_head           = 32
genai-stack-llm-gpu-1     | llm_load_print_meta: n_head_kv        = 32
genai-stack-llm-gpu-1     | llm_load_print_meta: n_layer          = 32
genai-stack-llm-gpu-1     | llm_load_print_meta: n_rot            = 128
genai-stack-llm-gpu-1     | llm_load_print_meta: n_gqa            = 1
genai-stack-llm-gpu-1     | llm_load_print_meta: f_norm_eps       = 0.0e+00
genai-stack-llm-gpu-1     | llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
genai-stack-llm-gpu-1     | llm_load_print_meta: f_clamp_kqv      = 0.0e+00
genai-stack-llm-gpu-1     | llm_load_print_meta: f_max_alibi_bias = 0.0e+00
genai-stack-llm-gpu-1     | llm_load_print_meta: n_ff             = 11008
genai-stack-llm-gpu-1     | llm_load_print_meta: n_expert         = 0
genai-stack-llm-gpu-1     | llm_load_print_meta: n_expert_used    = 0
genai-stack-llm-gpu-1     | llm_load_print_meta: rope scaling     = linear
genai-stack-llm-gpu-1     | llm_load_print_meta: freq_base_train  = 10000.0
genai-stack-llm-gpu-1     | llm_load_print_meta: freq_scale_train = 1
genai-stack-llm-gpu-1     | llm_load_print_meta: n_yarn_orig_ctx  = 4096
genai-stack-llm-gpu-1     | llm_load_print_meta: rope_finetuned   = unknown
genai-stack-llm-gpu-1     | llm_load_print_meta: model type       = 7B
genai-stack-llm-gpu-1     | llm_load_print_meta: model ftype      = Q4_0
genai-stack-llm-gpu-1     | llm_load_print_meta: model params     = 6.74 B
genai-stack-llm-gpu-1     | llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW) 
genai-stack-llm-gpu-1     | llm_load_print_meta: general.name     = LLaMA v2
genai-stack-llm-gpu-1     | llm_load_print_meta: BOS token        = 1 '<s>'
genai-stack-llm-gpu-1     | llm_load_print_meta: EOS token        = 2 '</s>'
genai-stack-llm-gpu-1     | llm_load_print_meta: UNK token        = 0 '<unk>'
genai-stack-llm-gpu-1     | llm_load_print_meta: LF token         = 13 '<0x0A>'
genai-stack-llm-gpu-1     | llm_load_tensors: ggml ctx size =    0.11 MiB
genai-stack-llm-gpu-1     | llm_load_tensors: using CUDA for GPU acceleration
genai-stack-llm-gpu-1     | llm_load_tensors: mem required  =  824.54 MiB
genai-stack-llm-gpu-1     | llm_load_tensors: offloading 26 repeating layers to GPU
genai-stack-llm-gpu-1     | llm_load_tensors: offloaded 26/33 layers to GPU
genai-stack-llm-gpu-1     | llm_load_tensors: VRAM used: 2823.44 MiB
genai-stack-llm-gpu-1     | ..................................................................................................
genai-stack-llm-gpu-1     | llama_new_context_with_model: n_ctx      = 3072
genai-stack-llm-gpu-1     | llama_new_context_with_model: freq_base  = 10000.0
genai-stack-llm-gpu-1     | llama_new_context_with_model: freq_scale = 1
genai-stack-llm-gpu-1     | 
genai-stack-llm-gpu-1     | CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:9132: out of memory
genai-stack-llm-gpu-1     | current device: 0
genai-stack-llm-gpu-1     | GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:9132: !"CUDA error"
genai-stack-llm-gpu-1     | Lazy loading /tmp/ollama1468013993/cuda/libext_server.so library
genai-stack-llm-gpu-1     | SIGABRT: abort
genai-stack-llm-gpu-1     | PC=0x7f725cd779fc m=23 sigcode=18446744073709551610
genai-stack-llm-gpu-1     | signal arrived during cgo execution

Unable to pull model (timeout at 240s)

I am trying to run the stack on an Ubuntu remote machine.
I get a timeout error from pull-model, and I failed to find a relevant issue here.
I am launching the stack through start.sh.

genai-stack-pull-model-1  | ... pulling model (10s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (20s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (30s) - will take several minutes
[...]
genai-stack-pull-model-1  | ... pulling model (230s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (240s) - will take several minutes
genai-stack-pull-model-1  | Error: Head "http://192.168.100.166:11434/": dial tcp 192.168.100.166:11434: connect: connection timed out
genai-stack-pull-model-1 exited with code 1

Current .env:

OLLAMA_BASE_URL=192.168.100.166:11434
NEO4J_URI=neo4j://192.168.100.166:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=password
LLM=llama2 #or any Ollama model tag, or gpt-4 or gpt-3.5
EMBEDDING_MODEL=sentence_transformer #or openai or ollama

Current docker-compose:

services:

  llm:
    image: ollama/ollama:latest
    profiles: ["linux"]
    networks:
      - net
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  pull-model:
    image: genai-stack/pull-model:latest
    build:
      dockerfile: pull_model.Dockerfile
    environment:
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL-http://host.docker.internal:11434}
      - LLM=${LLM-llama2}
    networks:
      - net

  database:
    image: neo4j:5.11
    ports:
      - 7687:7687
      - 7474:7474
    volumes:
      - $PWD/data2:/data
    environment:
      - NEO4J_AUTH=${NEO4J_USERNAME-neo4j}/${NEO4J_PASSWORD-password}
      - NEO4J_PLUGINS=["apoc"]
      - NEO4J_db_tx__log_rotation_retention__policy=false
    healthcheck:
        test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider localhost:7474 || exit 1"]
        interval: 5s
        timeout: 3s
        retries: 5
    networks:
      - net

  loader:
    build:
      dockerfile: loader.Dockerfile
    volumes:
      - $PWD/embedding_model:/embedding_model
    environment:
      - NEO4J_URI=${NEO4J_URI-neo4j://database:7687}
      - NEO4J_PASSWORD=${NEO4J_PASSWORD-password}
      - NEO4J_USERNAME=${NEO4J_USERNAME-neo4j}
      - OPENAI_API_KEY=${OPENAI_API_KEY-}
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL-http://host.docker.internal:11434}
      - EMBEDDING_MODEL=${EMBEDDING_MODEL-sentence_transformer}
      - LANGCHAIN_ENDPOINT=${LANGCHAIN_ENDPOINT-"https://api.smith.langchain.com"}
      - LANGCHAIN_TRACING_V2=${LANGCHAIN_TRACING_V2-false}
      - LANGCHAIN_PROJECT=${LANGCHAIN_PROJECT}
      - LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
    networks:
      - net
    depends_on:
      database:
        condition: service_healthy
      pull-model:
        condition: service_completed_successfully
    x-develop:
      watch:
        - action: rebuild
          path: .
          ignore:
            - bot.py
            - pdf_bot.py
            - api.py
            - front-end/
    ports:
      - 8081:8080
      - 8502:8502


  bot:
    build:
      dockerfile: bot.Dockerfile
    volumes:
      - $PWD/embedding_model:/embedding_model
    environment:
      - NEO4J_URI=${NEO4J_URI-neo4j://database:7687}
      - NEO4J_PASSWORD=${NEO4J_PASSWORD-password}
      - NEO4J_USERNAME=${NEO4J_USERNAME-neo4j}
      - OPENAI_API_KEY=${OPENAI_API_KEY-}
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL-http://host.docker.internal:11434}
      - LLM=${LLM-llama2}
      - EMBEDDING_MODEL=${EMBEDDING_MODEL-sentence_transformer}
      - LANGCHAIN_ENDPOINT=${LANGCHAIN_ENDPOINT-"https://api.smith.langchain.com"}
      - LANGCHAIN_TRACING_V2=${LANGCHAIN_TRACING_V2-false}
      - LANGCHAIN_PROJECT=${LANGCHAIN_PROJECT}
      - LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
    networks:
      - net
    depends_on:
      database:
        condition: service_healthy
      pull-model:
        condition: service_completed_successfully
    x-develop:
      watch:
        - action: rebuild
          path: .
          ignore:
            - loader.py
            - pdf_bot.py
            - api.py
            - front-end/
    ports:
      - 8501:8501

  pdf_bot:
    build:
      dockerfile: pdf_bot.Dockerfile
    environment:
      - NEO4J_URI=${NEO4J_URI-neo4j://database:7687}
      - NEO4J_PASSWORD=${NEO4J_PASSWORD-password}
      - NEO4J_USERNAME=${NEO4J_USERNAME-neo4j}
      - OPENAI_API_KEY=${OPENAI_API_KEY-}
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL-http://host.docker.internal:11434}
      - LLM=${LLM-llama2}
      - EMBEDDING_MODEL=${EMBEDDING_MODEL-sentence_transformer}
      - LANGCHAIN_ENDPOINT=${LANGCHAIN_ENDPOINT-"https://api.smith.langchain.com"}
      - LANGCHAIN_TRACING_V2=${LANGCHAIN_TRACING_V2-false}
      - LANGCHAIN_PROJECT=${LANGCHAIN_PROJECT}
      - LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
    networks:
      - net
    depends_on:
      database:
        condition: service_healthy
      pull-model:
        condition: service_completed_successfully
    x-develop:
      watch:
        - action: rebuild
          path: .
          ignore:
            - loader.py
            - bot.py
            - api.py
            - front-end/
    ports:
      - 8503:8503

  api:
    build:
      dockerfile: api.Dockerfile
    volumes:
      - $PWD/embedding_model:/embedding_model
    environment:
      - NEO4J_URI=${NEO4J_URI-neo4j://database:7687}
      - NEO4J_PASSWORD=${NEO4J_PASSWORD-password}
      - NEO4J_USERNAME=${NEO4J_USERNAME-neo4j}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL-http://host.docker.internal:11434}
      - LLM=${LLM-llama2}
      - EMBEDDING_MODEL=${EMBEDDING_MODEL-sentence_transformer}
      - LANGCHAIN_ENDPOINT=${LANGCHAIN_ENDPOINT-"https://api.smith.langchain.com"}
      - LANGCHAIN_TRACING_V2=${LANGCHAIN_TRACING_V2-false}
      - LANGCHAIN_PROJECT=${LANGCHAIN_PROJECT}
      - LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
    networks:
      - net
    depends_on:
      database:
        condition: service_healthy
      pull-model:
        condition: service_completed_successfully
    x-develop:
      watch:
        - action: rebuild
          path: .
          ignore:
            - loader.py
            - bot.py
            - pdf_bot.py
            - front-end/
    ports:
      - 8504:8504

  front-end:
    build:
      dockerfile: front-end.Dockerfile
    x-develop:
      watch:
        - action: sync
          path: ./front-end
          target: /app
          ignore:
            - ./front-end/node_modules/
        - action: rebuild
          path: ./front-end/package.json
    depends_on:
      api:
        condition: service_healthy
    networks:
      - net
    ports:
      - 8505:8505

networks:
  net:

Changing OLLAMA_BASE_URL=http://llm:11434 somehow leads to

genai-stack-pull-model-1  | pulling ollama model llama2 using http://llm:11434
genai-stack-pull-model-1  | Error: Head "http://llm:11434/": dial tcp: lookup llm on 127.0.0.11:53: server misbehaving
genai-stack-pull-model-1 exited with code 1

with no pull at all.

Feature: Adding a Contributors section to the README.md file.

There is no Contributors section in the README file.
As we know, contributions are what make the open-source community such an amazing place to learn, inspire, and create.
The Contributors section in a README.md file is important: it acknowledges and gives credit to those who have contributed to a project, fosters community and collaboration, adds transparency and accountability, and helps document the project's history for current and future maintainers. It also serves as a form of recognition, motivating contributors to continue their efforts.

No container to kill; dependency failed to start: container genai-stack-database-1 exited (1)

[+] Running 2/0
✔ Container genai-stack-database-1 Created 0.0s
✔ Container genai-stack-pull-model-1 Created 0.0s
Attaching to api-1, bot-1, database-1, front-end-1, loader-1, pdf_bot-1, pull-model-1
pull-model-1 | pulling ollama model llama2 using http://host.docker.internal:11434
database-1 | Installing Plugin 'apoc' from /var/lib/neo4j/labs/apoc--core.jar to /var/lib/neo4j/plugins/apoc.jar
database-1 | Applying default values for plugin apoc to neo4j.conf
database-1 | Skipping dbms.security.procedures.unrestricted for plugin apoc because it is already set.
database-1 | You may need to add apoc.* to the dbms.security.procedures.unrestricted setting in your configuration file.
pulling manifest
pull-model-1 | pulling 8934d96d3f08... 100% ▕▏ 3.8 GB
pulling manifest
pull-model-1 | pulling 8934d96d3f08... 100% ▕▏ 3.8 GB
pulling manifest
pulling manifest
pull-model-1 | pulling 8934d96d3f08... 100% ▕▏ 3.8 GB
pull-model-1 | pulling 8c17c2ebb0ea... 100% ▕▏ 7.0 KB
pull-model-1 | pulling 7c23fb36d801... 100% ▕▏ 4.8 KB
pull-model-1 | pulling 2e0493f67d0c... 100% ▕▏ 59 B
pull-model-1 | pulling fa304d675061... 100% ▕▏ 91 B
pull-model-1 | pulling 42ba7f8a01dd... 100% ▕▏ 557 B
pull-model-1 | verifying sha256 digest
pull-model-1 | writing manifest
pull-model-1 | removing any unused layers
pull-model-1 | success
pull-model-1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
no container to kill
dependency failed to start: container genai-stack-database-1 exited (1)

I tried running docker compose up multiple times, but it keeps failing to start genai-stack-database-1.

--profile linux-gpu CANNOT pull-model

I get errors like these:

......
genai-stack-pull-model-1  | Error: Head "http://llm-gpu:11434/": dial tcp  <my ip>:11434: connect: no route to host
genai-stack-pull-model-1 exited with code 1
......
service "pull-model" didn't complete successfully: exit 1

I can run this using --profile linux, but it is just too slow, and I cannot use my GPU!

Unable to see graph database

Hello, first of all thanks for the great work you guys have put in.

I am actually facing an issue when starting the graph database. It says "Go to http://localhost:7474/ to explore the graph", but when I go to http://localhost:7474/ and try to log in with URL neo4j://database:7687, username neo4j, and password password, I am unable to connect. Can you please help me solve this issue?

FYI below is my .env file:

LLM=llama2 #or any Ollama model tag, gpt-4, gpt-3.5, or claudev2
EMBEDDING_MODEL=sentence_transformer #or openai, ollama, or aws
NEO4J_URI=neo4j://database:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=password
OLLAMA_BASE_URL=http://llm:11434

Thanks in advance!
Anshul pagariya

ValueError: Could not connect to Neo4j database.

ValueError: Could not connect to Neo4j database. Please ensure that the url is correct
Traceback:
File "/usr/local/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 541, in _run_script
exec(code, module.__dict__)
File "/app/bot.py", line 38, in <module>
neo4j_graph = Neo4jGraph(url=url, username=username, password=password)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/graphs/neo4j_graph.py", line 54, in __init__
raise ValueError(

OPENAI_API_KEY="sk-*************"

OLLAMA_BASE_URL="http://localhost:11434"
NEO4J_URI="neo4j://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="Fq3uHU@NeNy"
LLM="gpt-3.5"
EMBEDDING_MODEL="openai"

LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_TRACING_V2=false
LANGCHAIN_PROJECT=#your-project-name
LANGCHAIN_API_KEY=#your-api-key ls_...

Please help

Unable to pull or build the pull-model image

I am getting "Service pull-model has neither an image nor a build context specified". Trying to change the compose file context, and even trying to build the image directly with docker build -f, results in errors.

Encountered a ConnectionError: Max retries exceeded

ConnectionError: HTTPConnectionPool(host='llm', port=11434): Max retries exceeded with url: /api/generate/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8f44052160>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))

Is it possible to pass parameters to the vector retrieval query?

Hi,

I was wondering if it is possible to pass some parameters to the vector index retrieval query (similar to the one in chains.py:142). The value of these parameters depends on user input. I want to retrieve different sets of indexes depending on the user input.

In my query, I have the following match clause.

MATCH (question) -[:SITE]-> (s:Site {name: $site_name})

$site_name will depend on user input
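For what it's worth, parameterized Cypher like this works with the plain Neo4j Python driver. Below is a minimal sketch, assuming hypothetical connection details and a Question label; wiring the parameter into the stack's retrieval_query in chains.py is a separate step.

from neo4j import GraphDatabase

# Hypothetical connection details, for illustration only.
driver = GraphDatabase.driver("neo4j://database:7687", auth=("neo4j", "password"))

SITE_QUERY = """
MATCH (question:Question)-[:SITE]->(s:Site {name: $site_name})
RETURN question.title AS title
LIMIT 10
"""

def questions_for_site(site_name: str) -> list[str]:
    # $site_name is bound at query time, so the value can safely come
    # from user input without string concatenation.
    with driver.session() as session:
        result = session.run(SITE_QUERY, site_name=site_name)
        return [record["title"] for record in result]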

PDF bot error: uploaded a PDF file but cannot chat due to the error below

pdf_bot-1 | 2024-01-04 06:36:05.583 Embedding: Using SentenceTransformer
pdf_bot-1 | 2024-01-04 06:36:05.583 LLM: Using Ollama: llama2
api-1 | INFO: 127.0.0.1:44452 - "GET / HTTP/1.1" 200 OK
api-1 | INFO: 127.0.0.1:44454 - "GET / HTTP/1.1" 200 OK
api-1 | INFO: 127.0.0.1:48628 - "GET / HTTP/1.1" 200 OK
pdf_bot-1 | 2024-01-04 06:36:21.249 Uncaught app exception
pdf_bot-1 | Traceback (most recent call last):
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
pdf_bot-1 | exec(code, module.__dict__)
pdf_bot-1 | File "/app/pdf_bot.py", line 95, in <module>
pdf_bot-1 | main()
pdf_bot-1 | File "/app/pdf_bot.py", line 91, in main
pdf_bot-1 | qa.run(query, callbacks=[stream_handler])
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 507, in run
pdf_bot-1 | return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
pdf_bot-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 312, in call
pdf_bot-1 | raise e
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 306, in call
pdf_bot-1 | self._call(inputs, run_manager=run_manager)
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py", line 144, in _call
pdf_bot-1 | answer = self.combine_documents_chain.run(
pdf_bot-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 512, in run
pdf_bot-1 | return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
pdf_bot-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 312, in call
pdf_bot-1 | raise e
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 306, in call
pdf_bot-1 | self._call(inputs, run_manager=run_manager)
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/combine_documents/base.py", line 136, in _call
pdf_bot-1 | output, extra_return_dict = self.combine_docs(
pdf_bot-1 | ^^^^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/combine_documents/stuff.py", line 244, in combine_docs
pdf_bot-1 | return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
pdf_bot-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/llm.py", line 293, in predict
pdf_bot-1 | return self(kwargs, callbacks=callbacks)[self.output_key]
pdf_bot-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 312, in call
pdf_bot-1 | raise e
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 306, in call
pdf_bot-1 | self._call(inputs, run_manager=run_manager)
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
pdf_bot-1 | response = self.generate([inputs], run_manager=run_manager)
pdf_bot-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
pdf_bot-1 | return self.llm.generate_prompt(
pdf_bot-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 496, in generate_prompt
pdf_bot-1 | return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
pdf_bot-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 383, in generate
pdf_bot-1 | raise e
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 373, in generate
pdf_bot-1 | self._generate_with_cache(
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 529, in _generate_with_cache
pdf_bot-1 | return self._generate(
pdf_bot-1 | ^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 209, in _generate
pdf_bot-1 | final_chunk = self._chat_stream_with_aggregation(
pdf_bot-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 168, in _chat_stream_with_aggregation
pdf_bot-1 | for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 155, in _create_chat_stream
pdf_bot-1 | yield from self._create_stream(
pdf_bot-1 | ^^^^^^^^^^^^^^^^^^^^
pdf_bot-1 | File "/usr/local/lib/python3.11/site-packages/langchain_community/llms/ollama.py", line 203, in _create_stream
pdf_bot-1 | raise ValueError(
pdf_bot-1 | ValueError: Ollama call failed with status code 500. Details: llama runner process has terminated
api-1 | INFO: 127.0.0.1:48632 - "GET / HTTP/1.1" 200 OK
api-1 | INFO: 127.0.0.1:52504 - "GET / HTTP/1.1" 200 OK
api-1 | INFO: 127.0.0.1:52510 - "GET / HTTP/1.1" 200 OK
api-1 | INFO: 127.0.0.1:38514 - "GET / HTTP/1.1" 200 OK
api-1 | INFO: 127.0.0.1:38516 - "GET / HTTP/1.1" 200 OK

dependency failed to start: container genai-stack-database-1 is unhealthy

"genai-stack-pull-model-1 exited with code 0
dependency failed to start: container genai-stack-database-1 is unhealthy"

docker ps:
...:::7687->7687/tcp genai-stack-database-1 --- so this is started
11434/tcp genai-stack-llm-1
Browser on port 7474:
"ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason is not available."
It seems that the neo4j password is not set from the .env file.
The default password is also not working.

Error in building docker image

I am trying to run docker compose up after cloning the git repo to build the docker containers; however, I am facing the following error:

=> ERROR [api 3/8] RUN apt-get update && apt-get install -y build-es 0.5s

0.491 Err:3 http://deb.debian.org/debian-security bookworm-security InRelease
0.491 At least one invalid signature was encountered.
0.495 Reading package lists...
0.503 W: GPG error: http://deb.debian.org/debian bookworm InRelease: At least one invalid signature was encountered.
0.503 E: The repository 'http://deb.debian.org/debian bookworm InRelease' is not signed.

failed to solve: process "/bin/sh -c apt-get update && apt-get install -y build-essential curl software-properties-common && rm -rf /var/lib/apt/lists/*" did not complete successfully: exit code: 100

Can someone help with this, please?

Unable to locate package software-properties-common

=> ERROR [genai-stack_loader 3/9] RUN apt-get update && apt-get install -y     build-essential     curl     software-properties-common     && rm -rf /var/lib/apt/lists/*       37.7s
------
 > [genai-stack_loader 3/9] RUN apt-get update && apt-get install -y     build-essential     curl     software-properties-common     && rm -rf /var/lib/apt/lists/*:
#0 30.55 Ign:1 http://deb.debian.org/debian bookworm InRelease
#0 30.55 Ign:2 http://deb.debian.org/debian bookworm-updates InRelease
#0 30.55 Ign:3 http://deb.debian.org/debian-security bookworm-security InRelease
#0 31.55 Ign:1 http://deb.debian.org/debian bookworm InRelease
#0 31.55 Ign:2 http://deb.debian.org/debian bookworm-updates InRelease
#0 31.55 Ign:3 http://deb.debian.org/debian-security bookworm-security InRelease
#0 33.55 Ign:1 http://deb.debian.org/debian bookworm InRelease
#0 33.55 Ign:2 http://deb.debian.org/debian bookworm-updates InRelease
#0 33.55 Ign:3 http://deb.debian.org/debian-security bookworm-security InRelease
#0 37.56 Err:1 http://deb.debian.org/debian bookworm InRelease
#0 37.56   Could not connect to debian.map.fastlydns.net:80 (199.232.18.132), connection timed out Unable to connect to deb.debian.org:http:
#0 37.56 Err:2 http://deb.debian.org/debian bookworm-updates InRelease
#0 37.56   Unable to connect to deb.debian.org:http:
#0 37.56 Err:3 http://deb.debian.org/debian-security bookworm-security InRelease
#0 37.56   Unable to connect to deb.debian.org:http:
#0 37.56 Reading package lists...
#0 37.57 W: Failed to fetch http://deb.debian.org/debian/dists/bookworm/InRelease  Could not connect to debian.map.fastlydns.net:80 (199.232.18.132), connection timed out Unable to connect to deb.debian.org:http:
#0 37.57 W: Failed to fetch http://deb.debian.org/debian/dists/bookworm-updates/InRelease  Unable to connect to deb.debian.org:http:
#0 37.57 W: Failed to fetch http://deb.debian.org/debian-security/dists/bookworm-security/InRelease  Unable to connect to deb.debian.org:http:
#0 37.57 W: Some index files failed to download. They have been ignored, or old ones used instead.
#0 37.58 Reading package lists...
#0 37.59 Building dependency tree...
#0 37.59 Reading state information...
#0 37.59 Package build-essential is not available, but is referred to by another package.
#0 37.59 This may mean that the package is missing, has been obsoleted, or
#0 37.59 is only available from another source
#0 37.59 
#0 37.59 E: Package 'build-essential' has no installation candidate
#0 37.59 E: Unable to locate package software-properties-common
------
failed to solve: executor failed running [/bin/sh -c apt-get update && apt-get install -y     build-essential     curl     software-properties-common     && rm -rf /var/lib/apt/lists/*]: exit code: 100

I am getting this error. What is the problem?

I tried to set proxy ENV variables at the beginning of the Dockerfile, but it didn't help.
I tried running the official ubuntu image and installing the tools from a RUN command, and it worked.

CentOS 7
Docker version 20.10.17, build 100c701
docker-compose version 1.18.0, build 8dd22a9

Windows Docker compose fail

66.69 running build_ext
66.69 creating /tmp/pip-install-4kvjs2nn/pyarrow_b6d841f989b243648cf4bb6db21f3654/build/temp.linux-x86_64-cpython-312
66.69 -- Running cmake for PyArrow
66.69 cmake -DCMAKE_INSTALL_PREFIX=/tmp/pip-install-4kvjs2nn/pyarrow_b6d841f989b243648cf4bb6db21f3654/build/lib.linux-x86_64-cpython-312/pyarrow -DPYTHON_EXECUTABLE=/usr/local/bin/python -DPython3_EXECUTABLE=/usr/local/bin/python -DPYARROW_CXXFLAGS= -DPYARROW_BUILD_CUDA=off -DPYARROW_BUILD_SUBSTRAIT=off -DPYARROW_BUILD_FLIGHT=off -DPYARROW_BUILD_GANDIVA=off -DPYARROW_BUILD_ACERO=off -DPYARROW_BUILD_DATASET=off -DPYARROW_BUILD_ORC=off -DPYARROW_BUILD_PARQUET=off -DPYARROW_BUILD_PARQUET_ENCRYPTION=off -DPYARROW_BUILD_GCS=off -DPYARROW_BUILD_S3=off -DPYARROW_BUILD_HDFS=off -DPYARROW_BUNDLE_ARROW_CPP=off -DPYARROW_BUNDLE_CYTHON_CPP=off -DPYARROW_GENERATE_COVERAGE=off -DCMAKE_BUILD_TYPE=release /tmp/pip-install-4kvjs2nn/pyarrow_b6d841f989b243648cf4bb6db21f3654
66.69 error: command 'cmake' failed: No such file or directory
66.69 [end of output]
66.69
66.69 note: This error originates from a subprocess, and is likely not a problem with pip.
66.69 ERROR: Failed building wheel for pyarrow
66.69 Building wheel for frozenlist (pyproject.toml): started
69.88 Building wheel for frozenlist (pyproject.toml): finished with status 'done'
69.88 Created wheel for frozenlist: filename=frozenlist-1.4.0-cp312-cp312-linux_x86_64.whl size=261458 sha256=351492d50d170ae74566490427c34161ede796c1c31004f7e72a04c5423f156b
69.88 Stored in directory: /root/.cache/pip/wheels/f1/9c/94/9386cb0ea511a93226456388d41d35f1c24ba15a62ffd7b1ef
69.89 Building wheel for multidict (pyproject.toml): started
71.10 Building wheel for multidict (pyproject.toml): finished with status 'done'
71.10 Created wheel for multidict: filename=multidict-6.0.4-cp312-cp312-linux_x86_64.whl size=114931 sha256=cc4e50dc92033fadc358a641cb86afcd0f61a8dce26a2fb664636d7508ba4e97
71.10 Stored in directory: /root/.cache/pip/wheels/f6/d8/ff/3c14a64b8f2ab1aa94ba2888f5a988be6ab446ec5c8d1a82da
71.10 Building wheel for yarl (pyproject.toml): started
73.58 Building wheel for yarl (pyproject.toml): finished with status 'done'
73.58 Created wheel for yarl: filename=yarl-1.9.2-cp312-cp312-linux_x86_64.whl size=285233 sha256=67760dba357b5b7a1f84e4291fbf1e28d87602dfcef9144aa79df2b0c54e42c0
73.58 Stored in directory: /root/.cache/pip/wheels/84/e3/6a/7d0fa1abee8e4aa39922b5bd54689b4b5e4269b2821f482a32
73.59 Successfully built wikipedia neo4j frozenlist multidict yarl
73.59 Failed to build tiktoken aiohttp pyarrow
73.59 ERROR: Could not build wheels for tiktoken, aiohttp, pyarrow, which is required to install pyproject.toml-based projects
73.60
73.60 [notice] A new release of pip is available: 23.2.1 -> 23.3.1
73.60 [notice] To update, run: pip install --upgrade pip

failed to solve: process "/bin/sh -c pip install --upgrade -r requirements.txt" did not complete successfully: exit code: 1
PS E:\Source Control\AI Apps\Docker\genai-stack2>

What now?

Develop: tag change in last revision.

OS Image: AWS amazon/Deep Learning AMI GPU PyTorch 2.0.1 (Ubuntu 20.04) 20231003

Upon running docker compose up, the following error appears on the command line:
Additional property Develop is not allowed

I reverted to the prior commit, which had the develop lines as x-develop, and docker compose up worked as intended on this image.

Failed to verify certificate: x509: certificate signed by unknown authority

I am using VSCode WSL2, Ubuntu 22.04 and Docker Engine v24.0.6
The .env file contains:

LLM=mistral #or any llama2:7b Ollama model tag, gpt-4, gpt-3.5, or claudev2
EMBEDDING_MODEL=sentence_transformer #or openai, ollama, or aws
OLLAMA_BASE_URL=http://llm:11434

Executing this command:
docker compose --profile linux up --build
Giving me these lines:

Attaching to genai-stack-api-1, genai-stack-bot-1, genai-stack-database-1, genai-stack-front-end-1, genai-stack-llm-1, genai-stack-loader-1, genai-stack-pdf_bot-1, genai-stack-pull-model-1
genai-stack-pull-model-1 | pulling ollama model mistral using http://llm:11434
genai-stack-llm-1 | [GIN] 2023/11/02 - 10:47:24 | 200 | 57.058µs | 172.18.0.2 | HEAD "/"
genai-stack-pull-model-1 | pulling manifest
genai-stack-llm-1 | 2023/11/02 10:47:26 images.go:1164: couldn't get manifest: Get "https://registry.ollama.ai/v2/library/mistral/manifests/latest": tls: failed to verify certificate: x509: certificate signed by unknown authority
genai-stack-llm-1 | [GIN] 2023/11/02 - 10:47:26 | 200 | 1.672986504s | 172.18.0.2 | POST "/api/pull"
genai-stack-pull-model-1 | Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/mistral/manifests/latest": tls: failed to verify certificate: x509: certificate signed by unknown authority
service "pull-model" didn't complete successfully: exit 1

Please advise on how to fix the error. Thanks much.

Can't connect to Ollama on Windows

Using the GenAI stack from Docker, and having built my Ollama on Windows, I tried to run the stack and I get this message:

genai-stack-pull-model-1  | pulling ollama model llama2 using http://localhost:11434
genai-stack-pull-model-1  | Error: could not connect to ollama server, run 'ollama serve' to start it

But my Ollama is running: I can use it on the command line and pull llama2 there, so all seems OK on the Ollama side (except that it's Windows, which is not really supported by Ollama yet).
2023/11/01 17:38:54 routes.go:678: Listening on 127.0.0.1:11434 (version 0.0.0)

No returns from requests

Everything seems to start smoothly, but I don't get any responses when I use the different apps. For instance, it says "Model: llama2:13b / RAG: Disabled" but gives no further response when I ask "How can I calculate age from date of birth in Cypher?". What could be the issue here?

Problems with docker compose up on Ubuntu

1. Downloading torch-2.0.1-cp311-cp311-manylinux1_x86_64.whl (619.9 MB) takes hours at a download speed of 1 MB/s.
It works, just takes a long time.
2. docker compose up has permission issues, need sudo docker compose up
After running sudo usermod -aG docker $USER and sudo reboot, it worked.
3. service "pull-model" didn't complete successfully
It worked with #76, but downloading really takes ages and fails a lot.

genai-stack-pull-model-1  | pulling ollama model zephyr using http://llm:11434
genai-stack-pull-model-1  | pulling manifest
genai-stack-pull-model-1  | ... pulling model (0s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (10s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (20s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (30s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (40s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (50s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (60s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (70s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (80s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (90s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (100s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (110s) - will take several minutes
genai-stack-pull-model-1  | ... pulling model (120s) - will take several minutes
pulling 0e655574a746...   1% |         | (74 MB/4.1 GB, 1.4 MB/s) [1m5s:48m55s]Error: max retries exceeded
genai-stack-pull-model-1 exited with code 1
service "pull-model" didn't complete successfully: exit 1

4. front-end/ gives no answer
After the model pull finished, answers were returned.

ConnectionError: HTTPConnectionPool(host='host.docker.internal', port=11434): Max retries exceeded with url: /api/generate/ (Caused by NameResolutionError("<urllib3.connection.HTTPConnection object at 0x7f9e20013450>: Failed to resolve 'host.docker.internal' ([Errno -2] Name or service not known)"))

Ollama has already been installed locally. Not sure if that is the cause.

Correct gpt 3.5 configuration needed

Please advise the correct configuration for gpt-3.5 usage, as I keep getting messages related to Ollama and Python crash output when I deploy the bots. The gpt-3.5 API key is in the .env file.
The only web app that starts correctly is the PDF one, but when I upload a file it crashes too. I can provide evidence a little later.

bot 3/8 error

37.39 Ign:1 http://deb.debian.org/debian bookworm InRelease
74.25 Ign:2 http://deb.debian.org/debian bookworm-updates InRelease
111.1 Ign:3 http://deb.debian.org/debian-security bookworm-security InRelease
148.0 Ign:1 http://deb.debian.org/debian bookworm InRelease
184.8 Ign:2 http://deb.debian.org/debian bookworm-updates InRelease
221.7 Ign:3 http://deb.debian.org/debian-security bookworm-security InRelease
258.6 Ign:1 http://deb.debian.org/debian bookworm InRelease
295.4 Ign:2 http://deb.debian.org/debian bookworm-updates InRelease
332.3 Ign:3 http://deb.debian.org/debian-security bookworm-security InRelease
369.2 Err:1 http://deb.debian.org/debian bookworm InRelease
369.2 Temporary failure resolving 'deb.debian.org'
406.0 Err:2 http://deb.debian.org/debian bookworm-updates InRelease
406.0 Temporary failure resolving 'deb.debian.org'
442.9 Err:3 http://deb.debian.org/debian-security bookworm-security InRelease
442.9 Temporary failure resolving 'deb.debian.org'
442.9 Reading package lists...
442.9 W: Failed to fetch http://deb.debian.org/debian/dists/bookworm/InRelease Temporary failure resolving 'deb.debian.org'
442.9 W: Failed to fetch http://deb.debian.org/debian/dists/bookworm-updates/InRelease Temporary failure resolving 'deb.debian.org'
442.9 W: Failed to fetch http://deb.debian.org/debian-security/dists/bookworm-security/InRelease Temporary failure resolving 'deb.debian.org'
442.9 W: Some index files failed to download. They have been ignored, or old ones used instead.
442.9 Reading package lists...
443.0 Building dependency tree...
443.0 Reading state information...
443.0 Package build-essential is not available, but is referred to by another package.
443.0 This may mean that the package is missing, has been obsoleted, or
443.0 is only available from another source
443.0
443.0 E: Package 'build-essential' has no installation candidate
443.0 E: Unable to locate package software-properties-common

failed to solve: process "/bin/sh -c apt-get update && apt-get install -y build-essential curl software-properties-common && rm -rf /var/lib/apt/lists/*" did not complete successfully: exit code: 100

ConnectionError

I get the following error after starting the application and typing any query

ConnectionError: HTTPConnectionPool(host='host.docker.internal', port=11434): Max retries exceeded with url: /api/generate/ (Caused by NameResolutionError("<urllib3.connection.HTTPConnection object at 0x7f1a50107ed0>: Failed to resolve 'host.docker.internal' ([Errno -2] Name or service not known)"))

Pulling Norwegian models directly from Hugging Face into GenAI stack

The number of models available from Ollama.ai at https://ollama.ai/library is quite limited compared to Hugging Face at https://huggingface.co/models. More importantly, we need Norwegian language models: there are none available from Ollama, while there are a number of Norwegian models available on Hugging Face.

Pulling models directly from Hugging Face into the GenAI stack in addition to Ollama would be a nice feature enhancement.

Support local Ollama model created from Modelfile

As a developer, I can build a local Ollama model for testing, but it cannot be used by the GenAI Stack directly.

Currently pull_model.Dockerfile invokes the ollama pull command to pull a model from a registry.

        (process/shell {:env {"OLLAMA_HOST" url} :out :inherit :err :inherit} (format "./bin/ollama pull %s" llm))

I think the proper way is to pull the model only if it does not exist locally. The change should be simple, e.g.

        (process/shell {:env {"OLLAMA_HOST" url} :out :inherit :err :inherit} (format "bash -c './bin/ollama show %s --modelfile > /dev/null || ./bin/ollama pull %s'" llm llm))
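An equivalent pull-if-missing check, sketched in Python for illustration only; it assumes the ollama binary is on the PATH and that OLLAMA_HOST points at the server, as in the Clojure snippet above.

import os
import subprocess

def ensure_model(llm: str, url: str) -> None:
    # Only pull when `ollama show` cannot find the model locally,
    # mirroring the shell one-liner suggested above.
    env = {**os.environ, "OLLAMA_HOST": url}
    show = subprocess.run(
        ["ollama", "show", llm, "--modelfile"],
        env=env, capture_output=True,
    )
    if show.returncode != 0:
        subprocess.run(["ollama", "pull", llm], env=env, check=True)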
