This is just a learning example using a local AI to generate Cypher queries against a graph database.
- Run `docker compose build`
- Run `docker compose up -d`
- Open the Ollama web UI: http://localhost:3002
- Click "Create an account" and set values
- Click Settings > Models > Pull a model from ollama.com
- Enter `llama2` and press download
- Open the web page: http://localhost:8080
- Enter a question. There's an expandable section showing example queries.
You can also access the Neo4j web UI at http://localhost:7474 if you want to run queries directly.
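If you do open the Neo4j browser, a schema-agnostic query like this one (plain Cypher, not tied to this project's data model) is a quick way to see what the graph actually contains:

```cypher
// Count nodes per label to get an overview of the graph
MATCH (n)
RETURN labels(n) AS label, count(*) AS nodes
ORDER BY nodes DESC
```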
I don't always get correct results. Sometimes the model generates a valid query and sometimes it doesn't; re-running the same test can produce a good query one time and a bad one the next. I've tried adding some extra examples to the prompt template, but that caused it to fail on every query. This probably needs better training, fine-tuning, or maybe some reinforcement learning to get consistent results.
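For illustration, the kind of few-shot prompting experimented with here looks roughly like this. It's a plain-Python sketch rather than the project's actual template, and the schema description and examples are made up:

```python
# Hypothetical sketch of a few-shot prompt for Cypher generation.
# The schema and the example question/query pairs are illustrative,
# not the ones used in this project.

EXAMPLES = [
    ("Which movies did Tom Hanks act in?",
     "MATCH (p:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN m.title"),
    ("How many movies are in the database?",
     "MATCH (m:Movie) RETURN count(m)"),
]

def build_prompt(question: str) -> str:
    """Render a few-shot prompt: schema, worked examples, then the new question."""
    parts = [
        "Generate a Cypher query for a Neo4j movie graph.",
        "Schema: (Person)-[:ACTED_IN]->(Movie)",
        "",
    ]
    for example_question, example_cypher in EXAMPLES:
        parts.append(f"Question: {example_question}")
        parts.append(f"Cypher: {example_cypher}")
        parts.append("")
    parts.append(f"Question: {question}")
    parts.append("Cypher:")
    return "\n".join(parts)

print(build_prompt("Which movies did Meg Ryan act in?"))
```

One thing to check when adding examples to a LangChain `PromptTemplate`: literal braces in Cypher (e.g. `{name: 'Tom Hanks'}`) are treated as input variables by the default f-string template parser and have to be escaped as `{{` / `}}` — that may be why adding examples made every query fail.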
- Try using this LLM model instead: https://huggingface.co/monsterapi/llama2-code-generation
- I think we can replace the LangChain `LLMChain` with a Hugging Face pipeline: https://stackoverflow.com/questions/77152888/huggingfacepipeline-and-langchain
- Find a different way to download the LLM model instead of requiring the Ollama web UI
- Use a more complex database example
- Try switching from Neo4j to Postgres with Apache AGE
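Whichever of these directions gets tried first, one cheap guard against the flaky generations described above is to sanity-check the query text before it's sent to Neo4j. A minimal sketch, assuming the app only ever needs read-only queries; this helper is hypothetical, not part of the current code:

```python
import re

# Clauses we never expect in a read-only question-answering query.
FORBIDDEN = ("CREATE", "MERGE", "DELETE", "DETACH", "SET", "REMOVE", "DROP")

def looks_like_safe_cypher(query: str) -> bool:
    """Heuristic filter for LLM-generated Cypher: read-only and roughly well-formed."""
    text = query.strip().rstrip(";")
    if not text:
        return False
    upper = text.upper()
    # Must start by reading data and must return something.
    if not upper.startswith(("MATCH", "OPTIONAL MATCH", "CALL", "WITH")):
        return False
    if "RETURN" not in upper:
        return False
    # Reject anything containing a write/DDL clause.
    if any(re.search(rf"\b{clause}\b", upper) for clause in FORBIDDEN):
        return False
    return True

assert looks_like_safe_cypher("MATCH (m:Movie) RETURN m.title")
assert not looks_like_safe_cypher("MATCH (n) DETACH DELETE n")
```

Rejected queries could simply be retried, which plays well with the observation that re-running the same question sometimes produces a good query.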