docs's Issues
Issue on integrating chat resume.
Hi All,
I am using SQLite as the data persistence layer together with Microsoft authentication. I want to fix the chat resume feature for my application. Can anyone help me with that?
Here is my code.
@cl.on_chat_resume
async def on_chat_resume(thread: cl_data.ThreadDict):
    memory = []
    # Root steps have no parent; parentId is None, not the string 'None'.
    root_messages = [m for m in thread["steps"] if m["parentId"] is None]
    for message in root_messages:
        if message["type"] == "user_message":
            memory.append({"role": "user", "content": message["output"]})
        else:
            memory.append({"role": "assistant", "content": message["output"]})
    cl.user_session.set("memory", memory)
    await cl.Message(content=str(memory)).send()
Issue on docs
I’m sorry, I made this issue by mistake…
Minor enhancement of the LangChain integration docs
Path: /integrations/langchain
This is a suggestion for a minor improvement of the LangChain integration docs.
Environment
- Python v3.11.6
- chainlit v0.7.603
- MacOS Sonoma v14.0
- VS Code Pylance extension v2023.11.10, powered by the static type checker Pyright
Problem
The code below generates an inconvenient warning from Pylance in VS Code.
Code:
runnable = cl.user_session.get("runnable") # type: Runnable
Warning:
Expression of type "Unknown | None" cannot be assigned to declared type "Runnable[Unknown, Unknown]"
Type "Unknown | None" cannot be assigned to type "Runnable[Unknown, Unknown]"
Type "None" cannot be assigned to type "Runnable[Unknown, Unknown]" Pylance(reportGeneralTypeIssues)
Pylance suggests that the code cannot guarantee that the .get("runnable") call returns a Runnable.
This is an inconvenience when copy-pasting to get started with chainlit.
Solution
I suggest changing the docs to the following, to inform the type checker that we will always receive a value of type Runnable:
from typing import cast
# (..)
runnable = cast(Runnable, cl.user_session.get("runnable"))
# (...)
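For context, here is the pattern in isolation. The dict-backed `get` below is a stand-in for `cl.user_session`, not Chainlit's API; it shows the suggested `cast` next to an explicit `None` guard, which also narrows the type for Pyright and fails fast at runtime:

```python
from typing import Optional, cast

# Stand-in for cl.user_session: get() returns Optional, which is why
# Pyright infers "Unknown | None" and complains about the assignment.
_session: dict = {"runnable": "fake-runnable"}  # hypothetical stored value

def get(key: str) -> Optional[str]:
    return _session.get(key)

# Option A: cast, as suggested above. No runtime check is performed.
runnable_a = cast(str, get("runnable"))

# Option B: an explicit guard. Also narrows the type, and raises early
# if the session was never populated.
runnable_b = get("runnable")
if runnable_b is None:
    raise RuntimeError("runnable was not set in the session")
```

Option B is slightly longer but catches a missing session value at the point of retrieval rather than deep inside the LLM call.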
Would make a PR directly, but the repo seems to require me to create a fork, so I'm taking the lazy route and just raising an issue ;-)
Potential Limitations
- typing.cast was introduced in Python 3.5 together with the typing module, so if you only support Python v3.5+, this change should work for everyone. If it's correct that chainlit currently supports >=3.8.1 and <3.12, the change should be fine.
- I have not tested this with any other type checker.
Issue on docs
Path: /authentication/password
cl.AppUser should be transitioned to the new cl.User
I am unable to run chainlit on my system
I am unable to run chainlit on my system.
When I run chainlit run app.py -w it says: 2023-09-27 19:29:08 - Your app is available at http://localhost:8000
But http://localhost:8000/ then shows: Not Found
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
Issue on docs
Path: /examples/qa
Chainlit version 0.6.2
https://docs.chainlit.io/examples/qa
Error on UI screen: Message.__init__() got an unexpected keyword argument 'disable_human_feedback'
  File "\site-packages\chainlit\utils.py", line 36, in wrapper
    return await user_function(**params_values)
How to display a Markdown math equation?
Path: /api-reference/elements/text
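For what it's worth, recent Chainlit versions appear to expose a LaTeX feature flag in .chainlit/config.toml; if your version has it, enabling it renders math expressions in messages. The flag name below is an assumption on my part, so verify it against the config.toml your Chainlit version generates:

```toml
[features]
# Render LaTeX expressions (e.g. $x^2$) in messages.
# Assumed flag name - check your generated config.toml.
latex = true
```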
Issue on docs
Path: /examples/mrkl
The link at the top of the page to LangChain MRKL is not working.
Example inspired from the [LangChain doc](https://python.langchain.com/en/latest/modules/agents/agents/examples/mrkl_chat.html)
Maybe they moved it to this page?
https://python.langchain.com/docs/modules/agents/how_to/custom_mrkl_agent
I can fix it for you if you would like.
Copilot instructions don't match cookbook
The script snippet at https://docs.chainlit.io/deployment/copilot needs to be updated to include this listener, which appears to be the correct way to do this in the cookbook and the community.
window.addEventListener("chainlit-call-fn", (e) => {
  const { name, args, callback } = e.detail;
  if (name === "test") {
    callback("You sent: " + args.msg);
  }
});
Should solve Chainlit/chainlit#929
not a valid model id
When I type something on the localhost board and send it, three messages are returned:
LLMChain: not a valid model id
Chatbot: not a valid model id
Error: not a valid model id
I have changed the environment variable : export GPT_MODEL="gpt-3.5-turbo"
Is it correct? Please help me.
Chainlit deployment for multiple worker in production
Hi Team,
Thanks a lot for the wonderful work. I know more documentation will be available in the future, but here are some questions, if you can help me:
Is there a way to use multiple workers when deploying in production?
Is there a default number of workers when we run 'chainlit run myapp.py'?
Where is the session stored? (If I want to run with multiple workers, can I assume that a particular session can always find its session variables? In the cloud, the worker assigned to a request can change within a single session unless session affinity is in place.)
Need voice input code for chainlit
The docs only show the config file; there is no example code for voice input in Chainlit, so please provide code for voice input in Chainlit.
local Data Persistence
When I try to run local data persistence, it gives me the following error:
2023-08-19 02:31:01 - Your app is available at http://localhost:8000
2023-08-19 02:31:02 - HTTP Request: GET http://localhost:63750/status "HTTP/1.1 502 Bad Gateway"
ERROR: Traceback (most recent call last)
Why is the port for Prisma so off?
Issue on docs - You need to configure at least an on_chat_start or an on_message callback
Path: /integrations/langchain
I installed langchain + chainlit, set my .env with my OpenAI key, and copied the async LCEL code verbatim into my app.py.
When I run "chainlit run app.py -w", the terminal shows: You need to configure at least an on_chat_start or an on_message callback
but it's clearly defined in the default code.
Any sense of what I could have done wrong here?
Issue on docs
Path: /integrations/langflow
I don't think this example works. I get this error:
2023-09-13 11:41:13 - load_flow_from_json() got an unexpected keyword argument 'flow'
Traceback (most recent call last):
  File "/Users/simonau/Documents/Lending/env/lib/python3.11/site-packages/chainlit/utils.py", line 40, in wrapper
    return await user_function(**params_values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/simonau/Documents/Lending/main_tool_langflow.py", line 15, in start
    flow = await load_flow(schema=schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/simonau/Documents/Lending/env/lib/python3.11/site-packages/chainlit/langflow/__init__.py", line 30, in load_flow
    flow = load_flow_from_json(flow=schema, tweaks=tweaks)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: load_flow_from_json() got an unexpected keyword argument 'flow'
2023-09-13 11:41:13 - load_flow_from_json() got an unexpected keyword argument 'flow'
Issue on docs
Path: /examples/openai-sql
Missing dependencies in docs
Path: /advanced-features/prompt-playground/setup
The code on this page does not work until an LLM provider is configured, which is only mentioned two pages later here. I suggest either mentioning the dependency or changing the order of the pages.
Issue on docs: Human Feedback not working
Path: /data-persistence/feedback
Exception: [{'message': 'Unknown type "FeedbackPayloadInput". Did you mean "ThreadPayloadInput", "GenerationPayloadInput", or "ScorePayloadInput"?', 'locations': [{'line': 14, 'column': 22}]}, {'message': 'Unknown argument "feedback" on field "Mutation.ingestStep".', 'locations': [{'line': 31, 'column': 9}]}]
2024-04-29 12:46:12 - HTTP Request: POST https://cloud.getliteral.ai/api/graphql "HTTP/1.1 200 OK"
2024-04-29 12:46:12 - Failed to create feedback: [{'message': 'Unknown type "FeedbackStrategy".', 'locations': [{'line': 5, 'column': 24}]}, {'message': 'Cannot query field
"createFeedback" on type "Mutation".', 'locations': [{'line': 8, 'column': 13}]}]
Issue on docs
Path: /chat-experience/loader
Hello,
I don't know if it's intentional, but it should be noted that if, in config.toml, we have hide_cot set like this:
# Hide the chain of thought details from the user in the UI.
hide_cot = true
then the loader won't be displayed.
Dangling section in docs
Path: /backend/config/project
This is the first of 3 sections under Config, and I'd expect to see it in the sidebar under Backend/Config right after Overview, but it doesn't show up in the sidebar. It also doesn't contain navigation links to previous/next.
How to collect data from human feedback
Issue on docs
'dict' object has no attribute 'to_openai'
Issue on docs
Path: /integrations/llama-index
The llama_index package has changed. I've been able to make up for it with llama_index.core, but the import of LLMPredictor causes an ImportError.
Can you help?
Thanks a lot
Issue on docs
[UI]
# Name of the app and chatbot.
name = "ensu"
custom_css = '/public/stylesheet.css'
It is not working.
Testing & Debugging page ambiguous about usage
Path: /advanced-features/test-debug
The following code snippet is shared on this documentation page regarding Testing & Debugging:
if __name__ == "__main__":
    from chainlit.cli import run_chainlit
    run_chainlit(__file__)
This snippet appears to simply launch a Chainlit application.
For headless testing in automation, we need to be able to execute tests without a running Chainlit application, but with context, etc.
Issue on docs
Path: /api-reference/input-widgets/select
Error reloading module: No module named 'chainlit.input_widget'
When using the example code.
Loader not working with cl.sleep
I was following the tutorial to display the loader (running) while awaiting the response, but it is not working.
I just copied the code from the documentation.
import chainlit as cl

@cl.on_chat_start
async def main():
    msg = cl.Message(content="Hello!")
    await msg.send()
    await cl.sleep(2)
    msg.content = "Hello again!"
    await msg.update()
"Hello!" appears, and after a few seconds "Hello again!" appears, but the loader is not shown.
setting hide_cot to true still shows chain of thoughts
Path: /observability-iteration/chain-of-thought
Issue on docs
Path: /api-reference/message
This stream does not maintain the format of the original content passed via token_list. It should stream in the same format as the original response.
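Complaints like the above often come down to tokens losing their whitespace before streaming. Here is a hedged sketch (pure Python, no Chainlit required) of splitting a response so that the streamed tokens round-trip to the original formatting:

```python
import re

def tokenize_preserving_format(text: str) -> list[str]:
    # Keep trailing whitespace (including newlines) attached to each token
    # so that "".join(tokens) reproduces the original string exactly.
    return re.findall(r"\S+\s*|\s+", text)

original = "Line one.\n\n- bullet a\n- bullet b\n"
token_list = tokenize_preserving_format(original)
rebuilt = "".join(token_list)  # identical to `original`
```

In Chainlit, each element of such a token_list can then be passed to `await msg.stream_token(token)` on a `cl.Message` before the final `await msg.send()`, and the newlines and list markers survive.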
sample code for react custom frontend with document attachment facility
Hi Team,
Is there any sample code for using Chainlit and React together to handle document attachments from a custom React frontend screen, with Chainlit as the backend? The example works fine with Chainlit as the frontend, but if I want a custom React frontend and Chainlit as the backend, is there any sample React code? Thanks in advance.
ValueError: Module should at least expose one of on_message, on_chat_start function
Issue on docs
Path: /api-reference/on-message
Add Amazon Cognito to Oauth examples
Add Cognito to Oauth examples here https://docs.chainlit.io/authentication/oauth
Issue on Callback Handler: LangChainDeprecationWarning
Path: /api-reference/integrations/langchain
This is my code:
from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader
from langchain.prompts import PromptTemplate
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.llms import CTransformers
from langchain.chains import RetrievalQA
from langchain_community.chat_loaders.whatsapp import WhatsAppChatLoader
from langchain_openai import AzureChatOpenAI
from langchain_community.chat_loaders.base import ChatSession
from langchain_community.chat_loaders.utils import (
    map_ai_messages,
    merge_chat_runs,
)
import chainlit as cl
from db import handle_db_queries
DB_FAISS_PATH = 'C://Users//PRINCE CHOUDHARY//Downloads//Llama2-Medical-Chatbot-main//vectorstore//db_faiss'
custom_prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else.
Helpful answer:
"""
def set_custom_prompt():
    """
    Prompt template for QA retrieval for each vectorstore
    """
    prompt = PromptTemplate(template=custom_prompt_template,
                            input_variables=['context', 'question'])
    return prompt
# Retrieval QA chain
def retrieval_qa_chain(llm, prompt, db):
    qa_chain = RetrievalQA.from_chain_type(llm=llm,
                                           chain_type='stuff',
                                           retriever=db.as_retriever(search_kwargs={'k': 2}),
                                           return_source_documents=True,
                                           chain_type_kwargs={'prompt': prompt}
                                           )
    return qa_chain
# Loading the model
def load_llm():
    # Load the locally downloaded model here
    llm = CTransformers(
        # model="TheBloke/Llama-2-7B-Chat-GGML",
        model="C://Users//PRINCE CHOUDHARY//Downloads//Llama2-Medical-Chatbot-main//llama-2-7b-chat.ggmlv3.q8_0.bin",
        model_type="llama",
        max_new_tokens=512,
        temperature=0.5
    )
    return llm
# QA model function
def qa_bot():
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2",
                                       model_kwargs={'device': 'cpu'})
    db = FAISS.load_local(DB_FAISS_PATH, embeddings, allow_dangerous_deserialization=True)
    llm = load_llm()
    qa_prompt = set_custom_prompt()
    qa = retrieval_qa_chain(llm, qa_prompt, db)
    return qa
# Output function
def final_result(query):
    qa_result = qa_bot()
    response = qa_result({'query': query})
    return response
# Chainlit code
@cl.on_chat_start
async def start():
    chain = qa_bot()
    if chain is None:
        print("Failed to initialize chain object")
    else:
        msg = cl.Message(content="Starting the bot...")
        await msg.send()
        msg.content = "Hi, Welcome to BotVerse. What is your query?"
        await msg.update()
    cl.user_session.set("chain", chain)
@cl.on_message
async def main(message: cl.Message):
    chain = cl.user_session.get("chain")
    cb = cl.AsyncLangchainCallbackHandler(
        stream_final_answer=True, answer_prefix_tokens=["FINAL", "ANSWER"]
    )
    cb.answer_reached = True
    if chain is None:
        print("Error: The chain is not initialized.")
    else:
        # Determine if the message is a database query
        if "Chinook DB" in message.content:  # This is a simple example condition
            response = handle_db_queries(message.content)
        else:
            # Use the existing chain for other types of queries
            response = await chain.acall(message.content, callbacks=[cb])
        answer = response.get("result") if isinstance(response, dict) else response
        sources = response.get("source_documents") if isinstance(response, dict) else None
        if sources:
            answer += "\nSources: " + str(sources)
        else:
            answer += "\nNo sources found"
        await cl.Message(content=answer).send()
On running the application, this issue comes up:
C:\Users\PRINCE CHOUDHARY\Downloads\Llama2-Medical-Chatbot-main\venv\lib\site-packages\langchain_core\_api\deprecation.py:117: LangChainDeprecationWarning: The function acall was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use ainvoke instead.
  warn_deprecated(
Can anyone help with this issue?
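The warning itself names the fix: replace chain.acall(...) with chain.ainvoke(...). A sketch of the changed call shape follows, using a stub chain so it runs without LangChain installed; the config={"callbacks": [...]} placement reflects my understanding of the current LangChain API, so verify it against your version:

```python
import asyncio

class StubChain:
    """Stand-in for a RetrievalQA chain; only the call shapes matter here."""

    async def acall(self, inputs, callbacks=None):
        # Deprecated in LangChain 0.1.0, removed in 0.2.0.
        return {"result": f"answer to {inputs}"}

    async def ainvoke(self, inputs, config=None):
        # Current style: inputs as a dict, callbacks inside the config dict.
        return {"result": f"answer to {inputs['query']}"}

async def main():
    chain = StubChain()
    # Before:
    #   response = await chain.acall(message.content, callbacks=[cb])
    # After:
    response = await chain.ainvoke({"query": "hello"}, config={"callbacks": []})
    return response

response = asyncio.run(main())
```

With a real RetrievalQA chain, the returned dict should still carry "result" and "source_documents", so the rest of the handler above should not need to change.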
Issue on docs
Path: /examples/openai-sql
I tried running this example and it seems that the Prompt Playground doesn't work.
I get the following error when clicking on the "Inspect in prompt playground" icon.
"Prompt Playground error: No LLM provider available"
Issue on docs
Path: /api-reference/lifecycle-hooks/on-message
@cl.on_chat_start works fine, but @cl.on_message is not working, although I have followed the documentation.
module_path = registry[name]
I am trying to run open-interpreter from the cookbook and keep getting this error:
module_path = registry[name]
~~~~~~~~^^^^^^
KeyError: 'on_file_upload'
Do I need to make any change?
Thanks
Issue on docs
Path: /api-reference/integrations/langchain
These are outdated now: https://docs.chainlit.io/api-reference/integrations/langchain#asynchronous-callback-handler
Attempting to run the cookbook:
https://github.com/Chainlit/cookbook/blob/main/chroma-qa-chat/app.py
chainlit run app.py -w --port 3001
2023-10-02 07:37:11 - Your app is available at http://localhost:3001
2023-10-02 07:37:21 - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2023-10-02 07:37:28 - module 'chainlit' has no attribute 'AsyncLangchainCallbackHandler'
Traceback (most recent call last):
File "/home/stgeorge/anaconda3/envs/demo/lib/python3.10/site-packages/chainlit/__init__.py", line 61, in wrapper
return await user_function(**params_values)
File "/data/cookbook/chroma-qa-chat/app.py", line 94, in main
cb = cl.AsyncLangchainCallbackHandler(
AttributeError: module 'chainlit' has no attribute 'AsyncLangchainCallbackHandler'
Please remove these docs as they are outdated and create confusion as to what is actually in the API.
Issue on docs
Path: /get-started/pure-python
Error with the command below:
$ chainlit run app.py -w
2023-12-17 22:41:45 - Your app is available at http://localhost:8000
message async handler error
Traceback (most recent call last):
File "C:\Python311\Lib\site-packages\socketio\async_server.py", line 545, in _handle_connect
success = await self._trigger_event(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\socketio\async_server.py", line 631, in _trigger_event
ret = await handler(*args)
^^^^^^^^^^^^^^
TypeError: connect() missing 1 required positional argument: 'auth'
Error In Importing AppUser For Authentication.
Path: /authentication/password
from chainlit.types import AppUser
ImportError: cannot import name 'AppUser' from 'chainlit.types' (/opt/conda/envs/wiki/lib/python3.8/site-packages/chainlit/types.py)
Render deployment link broken
Path: /get-started/deployment
Instead of linking to a guide or template for setting up a Chainlit deployment with render.com, the link goes to the Discord invite.
Issue on docs
Path: /concepts/feedback
The https://docs.chainlit.io/cloud/persistence/custom link on the documentation page for custom data persistence is broken.
Error reloading module: No module named 'chainlit.input_widget'
Path: /api-reference/chat-settings
test
ok
Issue on docs
Path: /examples/openai-sql
It is using version 1.0.0; however, the chainlit version has been updated to 1.0.502, which does not support to_openai(). Kindly suggest an alternative to this function for streaming responses.
"Prompt Playground error: No LLM provider available"
I get the following error when clicking on the "Inspect in prompt playground" icon.
"Prompt Playground error: No LLM provider available"
If I set the .env file (including OPENAI_API_KEY and OPENAI_API_BASE), this problem does not occur.
But I want to set it via the config. How do I set the env? Is there an example?
Saving response in variable
Path: /overview
When I save the response in a variable (user_reply1), I'm getting an ID, but I need the response provided by the user.
user_reply1 = await cl.Message(content="Sure, I can help you with that. Can I proceed? (yes/no)").send()
print("user_reply1", user_reply1)  # for this I got "ee77a057-5b53-437b-9ecd-54d5a99b01a8", but I need the user response
if user_reply1 == "yes":
    await cl.Message(content="event will be created").send()
if user_reply1 == "no":
    await cl.Message(content="what do you like to know").send()
Can anyone help me?
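The ID appears because Message.send() resolves to the sent message itself, not to the user's reply. If I'm reading the Chainlit API right, cl.AskUserMessage is the call that waits for the user's answer. The intended control flow, stubbed here so it runs without Chainlit (the "output" reply key is an assumption; older versions reportedly used "content"):

```python
import asyncio

class AskUserMessage:
    """Stand-in for cl.AskUserMessage: send() resolves to the user's reply."""

    def __init__(self, content: str, timeout: int = 60):
        self.content = content

    async def send(self):
        # In Chainlit this would block until the user answers (or times out
        # and returns None); here we pretend the user typed "yes".
        return {"output": "yes"}

async def flow():
    res = await AskUserMessage("Sure, I can help with that. Proceed? (yes/no)").send()
    reply = (res or {}).get("output", "").strip().lower()
    if reply == "yes":
        return "event will be created"
    return "what would you like to know"

result = asyncio.run(flow())
```

In the real app, replace the stub with cl.AskUserMessage and branch on the extracted reply string instead of on the object returned by cl.Message.send().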
AttributeError: 'dict' object has no attribute 'to_openai'
Path: /examples/openai-sql
When I ran the app, I got this: AttributeError: 'dict' object has no attribute 'to_openai'.
Question
Path: /api-reference/elements/avatar
about the avatar, is there a way to hide it?
Elements: Not showing as docs describe it
I couldn't make the text (and image) elements display as described in the docs at /api-reference/elements/text.
The only way that seems to work is within a message context with display = "inline". All the other described variants (outside of a message context, and the other display types) didn't show the elements at all. What am I doing wrong? Or is the doc outdated?
Being able to display text or an image permanently, outside of a message context, would be extremely helpful.
Thanks.
Issue on docs: how to use the Environment Variables for Public Apps
how to use the Public Apps & Environment Variables
- langchain == 0.2.0
- openai == 1.30.1
Path: /backend/env-variables
https://docs.chainlit.io/backend/env-variables
Public Apps & Environment Variables
If you want to share your app with a broader audience, you should not put your own OpenAI API keys in the .env file. Instead, you should use user_env in the Chainlit config to ask each user to provide their own keys.
You can then access the user's keys in your code using: user_env = cl.user_session.get("env")
Code, Python, and packages
I don't fully understand the above settings.
Here is my sample code:
"""Python file to serve as the frontend"""
import os
import sys
sys.path.append(os.path.abspath('.'))
from openai import AsyncOpenAI
import chainlit as cl
from dotenv import load_dotenv
load_dotenv()
# user_env = cl.user_session.get("OPENAI_API_KEY")
# os.environ["OPENAI_API_KEY"] = user_env["OPENAI_API_KEY"]
api_key = os.environ.get("OPENAI_API_KEY")
client = AsyncOpenAI(api_key=api_key)
# Instrument the OpenAI client
cl.instrument_openai()
settings = {
"model": "gpt-3.5-turbo",
"temperature": 0,
# ... more settings
}
# @cl.on_chat_start
# async def on_start():
# await cl.Message("Hello world from Caihao Cui!").send()
@cl.on_settings_update
async def on_settings_update(settings: dict):
print("Settings updated:", settings)
@cl.on_message
async def on_message(message: cl.Message):
user_env = cl.user_session.get("user_env")
api_key = user_env["OPENAI_API_KEY"]
open.api_key = api_key
# Instrument the OpenAI client
cl.instrument_openai()
response = await client.chat.completions.create(
messages=[
{
"content": "You are a helpful bot, you always reply in English",
"role": "system"
},
{
"content": message.content,
"role": "user"
}
],
**settings
)
await cl.Message(content=response.choices[0].message.content).send()
if __name__ == "__main__":
from chainlit.cli import run_chainlit
run_chainlit(__file__)
Moreover, in .chainlit/config.toml I made this change:
...
# List of environment variables to be provided by each user to use the app.
user_env = ["OPENAI_API_KEY"]
...
Environment Setting (Python 3.11.9):
chainlit == 1.1.101
langchain == 0.2.0
openai == 1.30.1
streamlit == 1.34.0
streamlit-chat == 0.1.1
Issue / Error message
2024-05-19 16:09:34 - Loaded .env file
2024-05-19 16:09:35 - Your app is available at http://localhost:8000
2024-05-19 16:09:36 - Translation file for en not found. Using default translation en-US.
2024-05-19 16:09:37 - Translated markdown file for en not found. Defaulting to chainlit.md.
2024-05-19 16:09:40 - 'NoneType' object is not subscriptable
Traceback (most recent call last):
File "/Users/PROJECTG_ROOT/.venv/lib/python3.11/site-packages/chainlit/utils.py", line 40, in wrapper
return await user_function(**params_values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/UsersPROJECTG_ROOT/demo_app/main.py", line 41, in on_message
api_key = user_env["OPENAI_API_KEY"]
~~~~~~~~^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not subscriptable
2024-05-19 16:09:48 - Translation file for en not found. Using default translation en-US.
^C%
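The traceback above points at the session key: with user_env = ["OPENAI_API_KEY"] in config.toml, the docs quoted earlier say the user-provided values are read with cl.user_session.get("env"), whereas the code calls get("user_env"), which returns None. A stand-in dict session makes the difference visible:

```python
# Stand-in for cl.user_session after the user has entered their key;
# per the docs quoted above, Chainlit stores the values under "env".
session = {"env": {"OPENAI_API_KEY": "sk-user-provided"}}  # hypothetical value

wrong = session.get("user_env")   # None -> 'NoneType' object is not subscriptable
right = session.get("env")        # the user's keys, per the quoted docs

api_key = right["OPENAI_API_KEY"]
```

So in on_message, changing cl.user_session.get("user_env") to cl.user_session.get("env") should remove the TypeError, assuming the quoted docs match your Chainlit version.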