ai-yash / st-chat
Streamlit Component, for a Chatbot UI
Home Page: https://pypi.org/project/streamlit-chat/
License: MIT License
Is it possible to have the bot respond with an image carousel? Instead of just one image, it would show the first image with a button on the right that you can press to change to the next image, and a button on the left to go to the previous one.
If it's not possible yet, it would be a nice feature to add.
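Until the component supports this natively, here is a minimal sketch of the carousel state, kept separate from Streamlit so it is easy to test. The class and its methods are made up for illustration:

```python
class Carousel:
    """Minimal carousel state: tracks which of several images is shown."""

    def __init__(self, images):
        self.images = list(images)
        self.idx = 0

    def current(self):
        return self.images[self.idx]

    def next(self):
        # Wrap around to the first image after the last one.
        self.idx = (self.idx + 1) % len(self.images)
        return self.current()

    def prev(self):
        # Wrap around to the last image before the first one.
        self.idx = (self.idx - 1) % len(self.images)
        return self.current()
```

In a Streamlit app you would keep one instance in st.session_state (e.g. `st.session_state.setdefault("carousel", Carousel(paths))`), wire `prev()`/`next()` to two `st.button` callbacks placed in `st.columns` on either side, and render `current()` with `st.image`.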
Could you please allow the functionality for adding hyperlinks in st-chat?
Markdown displays image content, but PNG format images are not supported.
Images in PNG format cannot be displayed, while images in JPG format display normally, including in avatars and message components.
The CI/CD needs to be fixed; it's not working as expected.
The most common use cases of Streamlit chat include a transformers implementation and the use of a context history for a user chat implementation. Should I open a PR for this as an enhancement?
I am a noob here; I am building a simple survey chatbot using streamlit. I have a list of questions {Q1, Q2, … Qn} that I want to ask a user and collect the user's responses in a database. For each user's response to each question, I also have one follow-up question (FQ1) that I want to ask further and collect responses for. The follow-up question is generated on the fly by a Hugging Face model.
In essence, here is the following workflow I want to be able to see in the chat history:
Bot: Q1
User Response: [text input]
Bot: FQ1 (follow-up question generated on the fly through a model)
User Response: [text input]
Bot: Q2
User Response: [text input]
Bot: FQ2 (follow-up question generated on the fly)
User Response: [text input]
...
Bot: Q5
User Response: [text input]
Bot: FQ5 (follow-up question generated on the fly)
User Response: [text input]
For now, if we assume the questions and follow-up questions are pre-defined lists, can someone please help provide vanilla code to solve the above? There are multiple examples of streamlit chat, but none of them deal with asking a pre-defined set of questions (something like a survey bot). I'm having a bunch of issues working with session state variables. Any help is appreciated.
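Not a full answer, but a sketch of the core bookkeeping, kept separate from Streamlit so it is easy to test: interleave the main questions with their follow-ups into one script, and derive the next prompt from how many answers have been collected so far. The function names are assumptions made up for illustration:

```python
def build_script(questions, followups):
    """Interleave main questions and follow-ups: Q1, FQ1, Q2, FQ2, ..."""
    script = []
    for q, fq in zip(questions, followups):
        script.extend([q, fq])
    return script


def next_prompt(script, answers):
    """The next bot prompt given the answers collected so far (None when done)."""
    return script[len(answers)] if len(answers) < len(script) else None
```

In the app you would keep the answers list in st.session_state, render the history with message(), collect input via st.text_input with an on_change callback that appends to the answers, and, once the preceding answer is in, swap the pre-defined FQ for one generated by your Hugging Face model.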
Hi, thanks for releasing this great streamlit extension. I found that line breaks (\n) in the message are ignored. Can you please make the message display the line breaks?
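Until this is supported natively, one possible workaround (assuming you pass allow_html=True to message, as the component permits) is to translate newlines into <br> tags before rendering. The helper name is made up:

```python
def with_line_breaks(text: str) -> str:
    """Replace newlines with <br> so they survive HTML rendering."""
    return text.replace("\n", "<br>")
```

For example, `message(with_line_breaks(reply), allow_html=True)`; whether this renders as intended depends on the component version, so treat it as a sketch.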
Hi. I created an app with streamlit_chat, and it works fine both locally and as a Docker container. But it generates the following error when I deploy it on Azure App Service:
Your app is having trouble loading the streamlit_chat.streamlit_chat component.
(The app is attempting to load the component from ****, and hasn't received its "streamlit" message.)
relevant code:
import os
import streamlit as st
from streamlit_chat import message
import langchain
import requests
import json
from datetime import datetime, timedelta
import traceback
import pickle
import faiss
import warnings
from abc import abstractmethod
from pathlib import Path
from typing import Any, Callable, Dict, List, Optional, Tuple, Union
from pydantic import Extra, Field, root_validator
def Page1():
    qa, qa2 = load_chain()
    if "chat_history" not in st.session_state:
        st.session_state["chat_history"] = []
    chat_history = st.session_state["chat_history"]
    if "generated_chat" not in st.session_state:
        st.session_state["generated_chat"] = []
    if "past_chat" not in st.session_state:
        st.session_state["past_chat"] = []
    user_input = st.text_input("Please ask a question:", key="input_chat")
    if user_input:
        word_count = 0
        for conversation in chat_history:
            for text in conversation:
                words = text.split()
                word_count += len(words)
        if word_count * 1.5 > 2000:
            while word_count * 1.5 > 2000:
                st.session_state["chat_history"].pop(0)
                word_count = sum(len(text.split()) for conversation in st.session_state["chat_history"] for text in conversation)
            chat_history = st.session_state["chat_history"]
        utc_time = datetime.utcnow()
        offset = timedelta(hours=3, minutes=30)
        local_time = utc_time + offset
        formatted_time = local_time.strftime("%Y-%m-%d %H:%M:%S")
        try:
            result = qa({"question": user_input, "chat_history": chat_history})
        except Exception as e:
            send_discord_message_errors(f"{formatted_time} An error occurred with page1: {str(e)} / Traceback: {traceback.format_exc()}")
        send_discord_message(f"{formatted_time} Successful Request for page1.")
        output = result["answer"]
        chat_history.append((user_input, output))
        st.session_state.past_chat.append(user_input)
        st.session_state.generated_chat.append(output)
    if st.session_state["generated_chat"]:
        for i in range(len(st.session_state["generated_chat"]) - 1, -1, -1):
            message(st.session_state["generated_chat"][i], key=str(i) + "_chat")  # avatar_style="bottts"
            message(st.session_state["past_chat"][i], is_user=True, key=str(i) + "_user_chat")
def Page2():
    qa, qa2 = load_chain()
    if "chat_history2" not in st.session_state:
        st.session_state["chat_history2"] = []
    chat_history2 = st.session_state["chat_history2"]
    if "generated_history" not in st.session_state:
        st.session_state["generated_history"] = []
    if "past_history" not in st.session_state:
        st.session_state["past_history"] = []
    user_input = st.text_input("Please ask a question:", key="input_history")
    if user_input:
        word_count = 0
        for conversation in chat_history2:
            for text in conversation:
                words = text.split()
                word_count += len(words)
        if word_count * 1.5 > 2000:
            while word_count * 1.5 > 2000:
                st.session_state["chat_history2"].pop(0)
                word_count = sum(len(text.split()) for conversation in st.session_state["chat_history2"] for text in conversation)
            chat_history2 = st.session_state["chat_history2"]
        utc_time = datetime.utcnow()
        offset = timedelta(hours=3, minutes=30)
        local_time = utc_time + offset
        formatted_time = local_time.strftime("%Y-%m-%d %H:%M:%S")
        try:
            result = qa2({"question": user_input, "chat_history": chat_history2})
        except Exception as e:
            send_discord_message_errors(f"{formatted_time} An error occurred with page2: {str(e)} / Traceback: {traceback.format_exc()}")
        send_discord_message(f"{formatted_time} Successful Request for page2.")
        output = result["answer"]
        chat_history2.append((user_input, output))
        st.session_state.past_history.append(user_input)
        st.session_state.generated_history.append(output)
    if st.session_state["generated_history"]:
        for i in range(len(st.session_state["generated_history"]) - 1, -1, -1):
            message(st.session_state["generated_history"][i], key=str(i) + "_history")
            message(st.session_state["past_history"][i], is_user=True, key=str(i) + "_user_history")
# From here down is all the Streamlit UI.
st.set_page_config(page_title="Traders Chatbot", page_icon=":robot:")
app_mode = st.sidebar.selectbox("`Choose your person:`", ["Page1", "Page2"])
if app_mode == "Page1":
    st.sidebar.image('page1.jpg', width=200)
    try:
        Page1()
    except Exception as e:
        st.write("Something went wrong. Please refresh the page, clear the caches, and try again.", e)
else:
    st.sidebar.image('page2.jpg', width=200)
    try:
        Page2()
    except Exception as e:
        st.write("Something went wrong. Please refresh the page, clear the caches, and try again.", e)
I found the same error with another streamlit component:
https://discuss.streamlit.io/t/the-app-is-attempting-to-load-the-component-from-and-hasnt-received-its-streamlit-message/36968/10
and it looks like a problem that could be solved by tuning a proxy, but I have no idea how exactly that could be done.
The avatar argument in st.chat_message doesn't work with custom images.
When running the app on Streamlit Share, this is what is shown instead of the custom image:
I'm passing the custom image like this:
avatar = PIL.Image.open("static/avatar.jpg")
with st.chat_message("assistant", avatar=avatar):
    with st.spinner("Thinking..."):
        response = "Something the bot says"
        st.write(response)
I have also tried to read the images differently:
avatar = open("static/avatar.jpg", "rb").read()
avatar = np.array(Image.open("data/avatar.jpg"))
All these methods work fine when running Streamlit locally but fail on Streamlit Share.
Edit:
I have noticed that the little image of my bot is served this way in the HTML:
<img src="/media/121f19f90f4c90cef78b3269ceb860024967c4a93b3e1ceee23993e3.jpg"
<-- my image was named avatar.jpg
If I remove the "/" at the beginning in the HTML code of the page, it works:
<img src="media/121f19f90f4c90cef78b3269ceb860024967c4a93b3e1ceee23993e3.jpg"
Maybe it could be a clue.
Could you change the example program to a classic chat interface?
Message history at the top, input at the bottom, with the new message empty and added at the bottom on first submit.
I understand that the example is made to magically solve a difficult Streamlit problem: the inability to display not-yet-existing variables. But since you set the tone for many novice programmers, could you remake the example?
Another confirmation that people need it is here - #17
In example.py we can see a dialog box for the user and the bot. When the bot responds, is it possible to create a sub-message? I want to provide sources, but only shown if you choose to see the sub-message; otherwise it should be hidden.
Is it possible with this library?
if st.session_state['generated']:
    for i in range(len(st.session_state['generated'])-1, -1, -1):
        message(st.session_state["generated"][i], key=str(i))
        message(st.session_state['past'][i], is_user=True, key=str(i) + '_user')
Thanks
if st.session_state['generated']:
    for i in range(len(st.session_state['generated'])-1, -1, 1):
        message(st.session_state['past'][i], is_user=True, key=str(i) + '_user', avatar_style='personas')
        message(st.session_state['generated'][i], key=str(i), avatar_style='identicon')
Using a positive step in the messages loop should be possible. However, the display stays one message behind when a positive step (1) is used; the library seems to work only if the for-loop step is set to -1.
What should I do to automatically wrap long continuous English content, such as web links?
Hi! I'm very sorry for polluting the issues with stupid questions, but how do I install the component from source?
The version on pip is a little outdated, and installing it with pip install git+https://github.com/AI-Yash/st-chat doesn't seem to change much. The repo also has a dist directory with some old tars that might need to be recompiled somehow...
I'm very new to Streamlit, so I'd appreciate any help. Sorry again for wasting anyone's time with this... Thank you for this useful little project.
I would much prefer the input box at the bottom; I really dislike it at the top, since my neck gets sore from looking up so high.
When a user enters a prompt in st.chat_input, it is not possible to disable the input while an LLM is generating the response to that input.
This leads to the possibility of the user 'interrupting' the model output and messing up the conversation structure.
I've tried binding the disabled argument to a variable in st.session_state, but it does not work.
Thanks for your great project!
I want to change the profile image on chat. How can I?
Sometimes we may ask the LLM the same question multiple times to get a better answer or to check the consistency of the answers. If the user asks the same question twice, the following error occurs.
Here is my test program.
import streamlit as st
from streamlit_chat import message
message("My message")
message("Hello bot!", is_user=True)  # aligns the message to the right
message("Hello bot!", is_user=True)  # aligns the message to the right ### this line causes the problem
DuplicateWidgetID: There are multiple identical st.streamlit_chat.streamlit_chat widgets with the same generated key.
When a widget is created, it's assigned an internal key based on its structure. Multiple widgets with an identical structure will result in the same internal key, which causes this error.
To fix this error, please pass a unique key argument to st.streamlit_chat.streamlit_chat.
Could you please fix it or guide me on how to fix it? Thank you for this excellent work!
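The error goes away if every message call receives a distinct key, as the error text suggests. One simple sketch is a per-run counter (the helper name is made up for illustration):

```python
import itertools

# A fresh counter is created each script run, so keys are stable across reruns
# as long as messages render in the same order.
_message_keys = itertools.count()

def unique_key(prefix: str = "msg") -> str:
    """Return a widget key that is unique within one script run."""
    return f"{prefix}_{next(_message_keys)}"
```

For example, `message("Hello bot!", is_user=True, key=unique_key())`; repeating the identical call then no longer collides, because each call gets its own key.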
Such a great component! I'm trying to use it to make a QA system; are you planning to add data tables and pictures in chat?
The default widths in the component are too wide to print, even on the official demo app. So you cannot print the right side of the conversation, and you cannot fix it with CSS (because the content is in an iframe). Any thoughts on a workaround for this?
Is there any way to display images in chat format? I am trying to display an image with
message(st.session_state["generated"][i], key=str(i), avatar_style="thumbs")
but it displays an error.
Really love the interface and was wondering if there is a way to clear the chat and refresh the page without the last message being submitted again in the demo file you created?
I would like a button that can completely clear the chat and start again.
Thanks!
It would be nice to easily change the default chat icon picture.
Hi,
Could anyone help with how to add a copy-to-clipboard button to the message pane?
I noticed that when a user inputs a message using the streamlit_chat library, the text is right-aligned instead of left-aligned, which is not the usual convention and looks very strange. Here is an example code snippet:
from streamlit_chat import message
message("""
write a poem with these words
Air
Sunlight
There are green leaves and wind, distant voices, an old Taoist priest standing in front of the temple
there is a mysterious power in his eyes, as if he can see through my heart.
""", is_user=True)
As you can see in the attached image, the user input is aligned to the right, which is not desirable. Can this be fixed to left-align the text instead? Thank you.
Thank you for adding the HTML functionality to the message function. While using it, I came across a few use-cases where the current front-end output needs improvement.
Using collapsed sections with markdown:
import streamlit as st
from streamlit_chat import message

def on_input_change():
    user_input = st.session_state.user_input
    st.session_state.past.append(user_input)
    st.session_state.generated.append("The messages from Bot\nWith new line")

def on_btn_click():
    del st.session_state.past[:]
    del st.session_state.generated[:]

markdown = "Below is an example of collapsible markdown taken from Wikipedia."
paragraph = "Euclidean geometry is a mathematical system attributed to ancient Greek mathematician Euclid, which he described in his textbook on geometry; Elements. Euclid's approach consists in assuming a small set of intuitively appealing axioms (postulates) and deducing many other propositions (theorems) from these. Although many of Euclid's results had been stated earlier,[1] Euclid was the first to organize these propositions into a logical system in which each result is proved from axioms and previously proved theorems."
markdown += """<details><summary>Toggle me!</summary>{paragraph}</details>""".format(paragraph=paragraph)

st.session_state.setdefault('past', ['Show example of collapsible markdown'])
st.session_state.setdefault('generated', [{'type': 'normal', 'data': f'{markdown}'}])

st.title("Example")

with st.container():
    for i in range(len(st.session_state['generated'])):
        message(st.session_state['past'][i], is_user=True, key=f"{i}_user")
        message(
            st.session_state['generated'][i]['data'],
            key=f"{i}",
            allow_html=True,
            is_table=True if st.session_state['generated'][i]['type'] == 'table' else False
        )
    st.button("Clear message", on_click=on_btn_click)

with st.container():
    st.text_input("User Input:", on_change=on_input_change, key="user_input")
Can you please add functionality to support scrolling with the message function?
https://discuss.streamlit.io/t/scrolling-text-containers/26485/3
It would also be useful to have a fixed-height container with scrolling for the whole chat functionality. Below is a Gradio example:
I created a chatbot for my website, and it is supposed to return working hyperlinks, but when the component shows a link, I cannot click it. In other components, the same information appears as clickable links, like in st.write.
Right now the whole response is shown as a code message; the code needs to be displayed in a proper format.
Hi,
I'm trying to show an audio component, but nothing works.
I tried:
st.session_state.history.append({"message": st.audio(audio_bytes, format='audio/mp3'), "is_user": True, "key":str(uuid.uuid4())})
And:
st.session_state.history.append({"message": '<audio controls src="audio.mp3"></audio>', "is_user": True, "key":str(uuid.uuid4())})
The HTML shows as text.
How do I make it work?
Currently, choosing stop while the AI is generating a response throws an error similar to:
AttributeError: This app has encountered an error. The original error message is redacted to prevent data leaks. Full error details have been recorded in the logs (if you're on Streamlit Cloud, click on 'Manage app' in the lower right of your app).
Traceback:
File "/home/appuser/venv/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/app/st-chat/examples/chatbot.py", line 42, in <module>
st.session_state.past.append(user_input)
File "/home/appuser/venv/lib/python3.8/site-packages/streamlit/runtime/state/session_state_proxy.py", line 121, in __getattr__
raise AttributeError(_missing_attr_error_message(key))
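The traceback shows st.session_state.past does not exist when the interrupted run resumes. The usual guard is to initialize every history key before any .append, sketched here with a plain dict standing in for st.session_state (the helper name is an assumption):

```python
def ensure_history_keys(state, keys=("past", "generated")):
    """Create missing history lists so .append can never hit an absent key."""
    for key in keys:
        # setdefault leaves existing values untouched and fills in missing ones.
        state.setdefault(key, [])
    return state
```

Calling `ensure_history_keys(st.session_state)` at the top of the script should work the same way, since st.session_state supports setdefault.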
Hi,
I am trying to add custom avatars to this. I go into frontend/src/stChat.tsx and change avatarUrl, but my changes are not being picked up. Any idea how to do this?
I would like to have examples/chatbot.py running locally on my computer.
Running streamlit run chatbot.py in the terminal leads to:
KeyError: 'generated_text'
Traceback:
File "/opt/anaconda3/envs/st-chat/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 562, in _run_script
exec(code, module.__dict__)
File "/Users/sivanchu/Documents/GitHub/st-chat/examples/chatbot.py", line 43, in <module>
st.session_state.generated.append(output["generated_text"])
Full sequence of commands:
conda create -y -n st-chat python=3.8
conda activate st-chat
pip install streamlit-chat
streamlit run examples/chatbot.py
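A KeyError on 'generated_text' usually means the response was not the expected [{'generated_text': ...}] payload, commonly an error dict instead (for instance while a hosted model is still loading). A defensive extraction sketch, with a made-up function name, under the assumption that the example's output follows that list-of-dicts shape:

```python
def extract_generated_text(response):
    """Pull 'generated_text' out of a list-of-dicts style response, surfacing bad payloads."""
    # Some endpoints return a list with one dict; unwrap it first.
    if isinstance(response, list) and response:
        response = response[0]
    if isinstance(response, dict) and "generated_text" in response:
        return response["generated_text"]
    # Anything else (e.g. {"error": ...}) is reported instead of raising KeyError.
    raise ValueError(f"Unexpected API response: {response!r}")
```

Replacing `output["generated_text"]` with `extract_generated_text(output)` would at least turn the opaque KeyError into a message showing what the API actually returned.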
Hi,
I have noticed that the avatars are not displayed any more.
The reason is that https://www.dicebear.com/ has released a new API to get the avatars, like:
https://api.dicebear.com/5.x/bottts/svg?seed=Loki
instead of
https://avatars.dicebear.com/api/bottts/Loki.svg
Could you please make changes accordingly?
Many thanks!
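The new URL scheme above can be built mechanically from a style and a seed; a small helper matching the example URLs (the function name and the version default are assumptions for illustration):

```python
def dicebear_url(style: str, seed: str, version: str = "5.x") -> str:
    """Build an avatar URL in the new DiceBear API format."""
    return f"https://api.dicebear.com/{version}/{style}/svg?seed={seed}"
```

For example, `dicebear_url("bottts", "Loki")` reproduces the new-style URL quoted in this issue.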
Hi.
I built a chatbot app to deploy on Streamlit using streamlit_chat, but when I try to run the app I get the following error:
from streamlit_chat import message
ModuleNotFoundError: No module named 'streamlit_chat'
What could be the problem?
Thank you!
Hi,
Does anyone know if I can display a ChatGPT-like streaming response in Streamlit using streamlit_chat's message?
I need something like message(streaming=True) or any other alternative. My code segment is as below:
from streamlit_chat import message
import streamlit as st
for i in range(len(st.session_state['generated']) - 1, -1, -1):
    message(st.session_state['past'][i], is_user=True, key=str(i) + '_user')
    message(st.session_state["generated"][i], key=str(i))
See yolopandas (https://github.com/ccurme/yolopandas); the responses from the chatbot are either dataframes or plots. How can streamlit chat be used to show chatbot responses that are dataframes and plots?
Hello!
I really enjoyed using your package, it made the display of text so much fun!
Just wondering, do you have plans to add a scroll bar to long conversations?
How can I set UTF-8 encoding on the message answer?
Printing the text to the console looks like this:
Itausa irá pagar, em 30.08.2022, juros sobre o capital próprio no valor de R$ 0,12367 por ação, com retenção de 15% de imposto de renda na fonte, resultando em juros líquidos de R$ 0,1051195 por ação, excetuados dessa retenção os acionistas pessoas jurídicas comprovadamente imunes ou isentos.
Using allow_html=True doesn't work.
Looking at the frontend, I guess the problem is with remarkMath, but I need to recompile the component first to test.
Are you planning to make the message background color configurable, so that a different color can be used for the user and the bot? This would be a nice alternative to using avatars.
Love the component. Would love the ability to pass in own image-url for the avatars.
Is it possible somehow to also show media like photos in messages?