
streamlit-feedback's Issues

Error with styles

When sending the feedback, I am getting this error:

FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.10/site-packages/streamlit_feedback/frontend/build/bootstrap.min.css.map'

The user needs to click again to persist the action.

Thanks.

Saving user response and prompt into different tables based on the feedback

I am trying to build a chatbot where I would like to save the response to one of two different tables, good_responses or bad_responses, depending on which thumb icon is pressed. I see that I can use the on_submit parameter of streamlit_feedback, but I am only able to write to one of the tables.

def good_response(conn, input, response):
    cur = conn.cursor()
    cur.execute("INSERT INTO good_responses (question, response) VALUES (%s, %s)", (input, response))
    conn.commit()
    cur.close()

def bad_response(conn, input, response):
    cur = conn.cursor()
    cur.execute("INSERT INTO bad_responses (question, response) VALUES (%s, %s)", (input, response))
    conn.commit()
    cur.close()

with st.expander("Conversation", expanded=True):               
    for i in range(len(st.session_state['generated'])-1, -1, -1):
        st.info(st.session_state["past"][i], icon="🧐")
        st.code(st.session_state["generated"][i])
        feedback = streamlit_feedback(feedback_type="thumbs", on_submit=good_response(conn, user_input, answer), key=f"feedback_{i}", optional_text_label="Please type the answer here if the given answer is not correct")
        feedback
        #st.button("👍", on_click=good_response, type='primary', key=f"thumbs_up_{i}", args=(conn, user_input, answer))
        #st.button("👎", on_click=bad_response, type='primary', key=f"thumbs_down_{i}", args=(conn, user_input, answer))
        st.header('Follow up questions:')
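
For reference, a minimal sketch of one way both tables could be written to (assuming the good_response, bad_response, conn, user_input, answer and i names from the snippet above): pass a single callable to on_submit and let it inspect the submitted score, instead of calling good_response(...) at widget creation time, which runs immediately and always writes to the same table.

def save_feedback(feedback, conn, question, response):
    # streamlit_feedback passes the feedback dict first, then any extra args
    if feedback.get("score") == "👍":
        good_response(conn, question, response)
    else:
        bad_response(conn, question, response)

feedback = streamlit_feedback(
    feedback_type="thumbs",
    on_submit=save_feedback,          # a callable, not a call
    args=(conn, user_input, answer),  # forwarded to save_feedback after the feedback dict
    key=f"feedback_{i}",
    optional_text_label="Please type the answer here if the given answer is not correct",
)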

I have issues saving my feedback

Hi everyone. Sorry to bother you, but I'm trying to understand why the feedback the user leaves on the model's responses is not being saved.
Below is the code.

Can you give me a hand?

def clear_chat_history():
    st.session_state.messages = [{"role": "assistant", "content": welcome_message}]

# Retrieval Augmentation Chat Chain
qa_chain = build_chat_rag_chain(template_condense_question_prompt=prompt_template_2)


collector = FeedbackCollector()

chat_history = []


welcome_message = "Hello I am a friendly chatbot 🤖 helping you to answer questions related to accidents 📝"


st.title("💬 Your Insurtech document assistant powered by AWS ☁️")

if "messages" not in st.session_state.keys():
    st.session_state.messages = [{"role": "assistant", "content": welcome_message}]

messages = st.session_state.messages

for msg in messages:
    st.chat_message(msg['role']).write(msg["content"])

prompt = st.chat_input()
if prompt:
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)

    response = qa_chain({"question": prompt, "chat_history": chat_history})

    response_message = response.get("answer", "Sorry I can't help you")
    response_sources = response.get("source_documents")

    if response_sources:

        source_docs = '\n'.join(set([f"- source:\t{doc.metadata['source']}" for doc in response_sources]))
        # response_message += f'\n\nSOURCES\n{source_docs}'
        st.sidebar.subheader(f"Sources for Answer {prompt}")
        st.sidebar.write(source_docs)

    chat_history.append([prompt, response_message])

    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):

            st.markdown(response_message)
            st.write("Was it helpful to you?")

        if response_sources:

            feedback = streamlit_feedback(feedback_type="thumbs",
                                          on_submit={"question": prompt, "answer": response_message})

            feedback

    st.session_state.messages.append({"role": "assistant", "content": response_message})





st.button('Clear Chat History', on_click=clear_chat_history)
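
For what it's worth, on_submit expects a callable rather than a dict. A minimal sketch of how the question/answer pair could be forwarded to such a callback (assuming prompt, response_message and response_sources from the code above, and using the kwargs pattern that appears in other issues here):

def save_feedback(feedback, question=None, answer=None):
    # feedback is the dict produced by the widget; question/answer come from kwargs
    st.toast("Feedback recorded")
    print({"feedback": feedback, "question": question, "answer": answer})

if response_sources:
    streamlit_feedback(
        feedback_type="thumbs",
        on_submit=save_feedback,
        kwargs={"question": prompt, "answer": response_message},
        key=f"feedback_{len(st.session_state.messages)}",
    )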

Why do I have to log in to Trubrics?

Why is it necessary to log in to Trubrics? I don't want to share my sensitive data with Trubrics.
How can I use all the features [Admin + Analytics UI] of Trubrics without logging in to Trubrics?

Change the text of "submit"

I love this feedback, great thinking and usage.

Is there a way to change the text of "submit"? My users are mostly not English speakers. It would be great to have an option to change the text, just like optional_text_label.

Feedback is not getting cleared. Only the 1st feedback gets saved

After the user gives feedback on a response, the feedback doesn't get cleared for another feedback on another response. Any help?

import streamlit as st
from streamlit import session_state as ss
from streamlit_feedback import streamlit_feedback

if 'qr' not in ss:
    ss.qr = None


def ai_response(query):
    ss.qr = "Hey, what's up?" 


with st.form("query_form_k", clear_on_submit=False):
    query = st.text_area("Enter your query")
    st.form_submit_button('Get response', on_click=ai_response, args=(query,))

with st.container():
    st.write('Ai Response:')

    if ss.qr is not None:
        st.write(ss.qr)

with st.container():
    st.write('Your feedback:')

    feedback = streamlit_feedback(
        feedback_type="thumbs",
        optional_text_label="[Optional] Please provide an explanation",
        align="flex-start",
        args=(query),
        key='c1'
    )

    if ss.qr is not None and feedback:
        st.write(feedback)
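
One pattern that might help here, as a minimal self-contained sketch of the snippet above: give each new AI response its own widget key, e.g. a counter kept in session state, so a fresh feedback component is created per response instead of reusing the single key 'c1'.

import streamlit as st
from streamlit import session_state as ss
from streamlit_feedback import streamlit_feedback

if 'qr' not in ss:
    ss.qr = None
if 'response_count' not in ss:
    ss.response_count = 0  # hypothetical counter, bumped once per new response

def ai_response(query):
    ss.qr = "Hey, what's up?"
    ss.response_count += 1  # new response -> new feedback widget key

with st.form("query_form_k", clear_on_submit=False):
    query = st.text_area("Enter your query")
    st.form_submit_button('Get response', on_click=ai_response, args=(query,))

if ss.qr is not None:
    st.write('Ai Response:')
    st.write(ss.qr)

    st.write('Your feedback:')
    feedback = streamlit_feedback(
        feedback_type="thumbs",
        optional_text_label="[Optional] Please provide an explanation",
        align="flex-start",
        key=f"feedback_{ss.response_count}",  # unique per response, so old state is not reused
    )
    if feedback:
        st.write(feedback)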



On submit is not working (it is not calling the handle_feedback method)

I am trying to add a feedback button and capture the feedback in the logs.

Packages:
streamlit==1.32.2
requests==2.31.0
streamlit-feedback==0.1.3

Code:

def handle_feedback(user_response, emoji=None):
    st.toast(f"Feedback submitted: {user_response}", icon=emoji)
    print(f"test : {user_response}")
    return user_response.update({"some metadata": 123})


feedback_kwargs = {
        "feedback_type": "thumbs",
        "on_submit":handle_feedback
        }
# Generate a new response if last message is not from assistant
if st.session_state.messages[-1]["role"] != "assistant" and st.session_state.messages[-1]["role"] != "system":
    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):
            # log question
            # get response
            chat_history, output = generate_response(prompt)
            placeholder = st.empty()
            full_model_response = ''
            for item in output:
                full_model_response += item
                placeholder.markdown(full_model_response)
            placeholder.markdown(full_model_response)
            streamlit_feedback(
                **feedback_kwargs, key=f"feedback_{int(len(st.session_state.messages)/2)}")

    message = {"role": "assistant", "content": full_model_response}
    st.session_state.messages.append(message)
    streamlit_root_logger.info('=====================================================\n')

ๅ้ฆˆๆŒ‰้’ฎๅœจๅคš็”จๆˆท้—ฎ็ญ”ๆ—ถ๏ผŒๅŽ้—ฎ็ญ”ๆ— ๆณ•ๆ˜พ็คบๅ้ฆˆๆŒ‰้’ฎๅนถๆŠฅ้”™ Bad 'setIn' index

ๅคš็”จๆˆทๅŒๆ—ถ้—ฎ็ญ”ๆ—ถ๏ผŒๆŠฅ้”™
Bad message format
Bad 'setIn' index 3 (should be between [0, 2])

ๅ…ˆ้—ฎ็ญ”็š„ๆŒ‰้’ฎๅฏไปฅๆ˜พ็คบ๏ผŒๅŽ้—ฎ็ญ”็š„ๆ— ๆณ•ๆ˜พ็คบๅนถๆŠฅ้”™
liunux4odoo/streamlit-chatbox#6

Show Feedback only when hovering over main container

Would this be possible? I am trying to do it with a style sheet:

.MuiBox-root css-mz9qt7 .MuiStack-root css-16ogmd7 {
    display: none;
}

.MuiBox-root css-mz9qt7:hover .MuiStack-root css-16ogmd7 {
    display: none;
}

But it's not working. I am not sure whether this can even be done from within my own script.

Background Color Formatting Issue

Hi, the widget has been awesome, but I noticed in the chatbot that the background color of the thumbs up/down tool does not match. I was trying to find a way to customize the styling properties, but couldn't.

See the attached screenshot.

I can get rid of the formatting through the browser inspector, but I'm not sure how this is accessed through code.

Changing BG Color of the Streamlit Feedback Component?

Hi, I've tried inspecting the element and changing the background color to see which component I needed to modify, but it looks like the background color is inherited from Streamlit's primary background color. Is there a property I can modify to change this separately? I see an iframe, but I am not familiar with web development.

As you can see, the feedback inherits the grey bg color. I'd like it to be white.


Trubrics saves only the first feedback and nothing more in Streamlit LLM chatbot

I am trying to implement an LLM chatbot with Trubrics feedback. It seems to work for the first feedback when I run it locally, but it does not save feedback from the second submission onwards.

What could be the problem here?

uploaded_files = st.sidebar.file_uploader(
    label="Upload PDF files", type=["pdf"], accept_multiple_files=True
)


retriever = configure_retriever(uploaded_files)

# Setup memory for contextual conversation
msgs = StreamlitChatMessageHistory()
memory = ConversationBufferMemory(memory_key="chat_history", chat_memory=msgs, return_messages=True)

# Setup LLM and QA chain
llm = AzureChatOpenAI(callbacks=[trubrics_callback], deployment_name="********",
        model_name="gpt-35-turbo", temperature=0, max_tokens=1024)
qa_chain = ConversationalRetrievalChain.from_llm(
    llm, retriever=retriever, memory=memory, verbose=True
)

if len(msgs.messages) == 0 or st.sidebar.button("Clear message history"):
    msgs.clear()
    msgs.add_ai_message("Hello! I am a Financial Analyst, ready to respond in the language your question is asked in.")

avatars = {"human": "user", "ai": "assistant"}
for msg in msgs.messages:
    st.chat_message(avatars[msg.type]).write(msg.content)
source = st.sidebar.checkbox('I want only the final answer!')
if user_query := st.chat_input(placeholder="Ask me anything about financial analysis and related questions!"):
    st.chat_message("user").write(user_query)

    with st.chat_message("assistant"):
        retrieval_handler = PrintRetrievalHandler(st.container())
        stream_handler = StreamHandler(st.empty())
        response = qa_chain.run(user_query, callbacks=[retrieval_handler, stream_handler])
        if source:
            st.experimental_rerun()

        st.rerun()

for msg in msgs.messages:
    sender = avatars[msg.type]
    content = msg.content
    metadata = {
        "sender": sender,
        "content": content
    }

from trubrics.integrations.streamlit import FeedbackCollector

collector = FeedbackCollector(
    project="********",
    email="*******@************",
    password="**********"
)

feedback = collector.st_feedback(
    component="default",
    feedback_type="thumbs",
    model="gpt-3.5-turbo",
    prompt_id=None,  # see prompts to log prompts and model generations
    open_feedback_label='[Optional] Provide additional feedback',
    metadata=metadata
)
if feedback:
    with st.sidebar:
        st.write(":orange[Here's the raw feedback you sent to [Trubrics](https://trubrics.streamlit.app/):]")
        st.write(feedback)

This comes up in the terminal after the first feedback is submitted, but never again:

2024-01-17 12:14:52.499 | INFO | trubrics.platform:log_feedback:160 - User feedback saved to Trubrics.

Best way to add streamlit-feedback into chatbot

Thank you for the awesome package. I was able to add streamlit-feedback into my chatbot app via st.form:

def handle_feedback():  
    st.write(st.session_state.fb_k)
    st.toast("✔️ Feedback received!")

....

        with st.form('form'):
            streamlit_feedback(feedback_type="thumbs",
                                optional_text_label="[Optional] Please provide an explanation", 
                                align="flex-start", 
                                key='fb_k')
            st.form_submit_button('Save feedback', on_click=handle_feedback)

It works, but there are two problems:

  1. To get it to work, the user first needs to click SUBMIT and only then "Save feedback".

If the user clicks "Save feedback" first, then st.session_state.fb_k will be None.

  2. Feedback inside st.form does not look very good, and I am looking for ways to get rid of st.form while keeping the same functionality.

I looked at examples.py in the repo and similar issues but did not find anything that would help resolve the problem.

Full app code:

from langchain.chat_models import AzureChatOpenAI
from langchain.memory import ConversationBufferWindowMemory # ConversationBufferMemory
from langchain.agents import ConversationalChatAgent, AgentExecutor, AgentType
from langchain.callbacks import StreamlitCallbackHandler
from langchain.memory.chat_message_histories import StreamlitChatMessageHistory
from langchain.agents import Tool
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import pprint
import streamlit as st
import os
import pandas as pd
from streamlit_feedback import streamlit_feedback

def handle_feedback():  
    st.write(st.session_state.fb_k)
    st.toast("✔️ Feedback received!")

  
os.environ["OPENAI_API_KEY"] = ...
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = ...
os.environ["OPENAI_API_VERSION"] = "2023-08-01-preview"


@st.cache_data(ttl=72000)
def load_data_(path):
    return pd.read_csv(path) 

uploaded_file = st.sidebar.file_uploader("Choose a CSV file", type="csv")
if uploaded_file is not None:
    # If a file is uploaded, load the uploaded file
    st.session_state["df"] = load_data_(uploaded_file)


if "df" in st.session_state:

    msgs = StreamlitChatMessageHistory()
    memory = ConversationBufferWindowMemory(chat_memory=msgs, 
                                            return_messages=True, 
                                            k=5, 
                                            memory_key="chat_history", 
                                            output_key="output")
    if len(msgs.messages) == 0 or st.sidebar.button("Reset chat history"):
        msgs.clear()
        msgs.add_ai_message("How can I help you?")
        st.session_state.steps = {}
    avatars = {"human": "user", "ai": "assistant"}
    for idx, msg in enumerate(msgs.messages):
        with st.chat_message(avatars[msg.type]):
            # Render intermediate steps if any were saved
            for step in st.session_state.steps.get(str(idx), []):
                if step[0].tool == "_Exception":
                    continue
                # Insert a status container to display output from long-running tasks.
                with st.status(f"**{step[0].tool}**: {step[0].tool_input}", state="complete"):
                    st.write(step[0].log)
                    st.write(step[1])
            st.write(msg.content)


    if prompt := st.chat_input(placeholder=""):
        st.chat_message("user").write(prompt)

        llm = AzureChatOpenAI(
                        deployment_name = "gpt-4",
                        model_name = "gpt-4",
                        openai_api_key = os.environ["OPENAI_API_KEY"],
                        openai_api_version = os.environ["OPENAI_API_VERSION"],
                        openai_api_base = os.environ["OPENAI_API_BASE"],
                        temperature = 0, 
                        streaming=True
                        )

        prompt_ = PromptTemplate(
            input_variables=["query"],
            template="{query}"
        )
        chain_llm = LLMChain(llm=llm, prompt=prompt_)
        tool_llm_node = Tool(
            name='Large Language Model Node',
            func=chain_llm.run,
            description='This tool is useful when you need to answer general purpose queries with a large language model.'
        )

        tools = [tool_llm_node] 
        chat_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools)

        executor = AgentExecutor.from_agent_and_tools(
                                                        agent=chat_agent,
                                                        tools=tools,
                                                        memory=memory,
                                                        return_intermediate_steps=True,
                                                        handle_parsing_errors=True,
                                                        verbose=True,
                                                    )
        

        with st.chat_message("assistant"):            
            
            st_cb = StreamlitCallbackHandler(st.container(), expand_new_thoughts=False)
            response = executor(prompt, callbacks=[st_cb, st.session_state['handler']])
            st.write(response["output"])
            st.session_state.steps[str(len(msgs.messages) - 1)] = response["intermediate_steps"]
            response_str = f'{response}'
            pp = pprint.PrettyPrinter(indent=4)
            pretty_response = pp.pformat(response_str)
              

        with st.form('form'):
            streamlit_feedback(feedback_type="thumbs",
                                optional_text_label="[Optional] Please provide an explanation", 
                                align="flex-start", 
                                key='fb_k')
            st.form_submit_button('Save feedback', on_click=handle_feedback)
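
For reference, a minimal sketch of a form-free variant (assuming the msgs history and a handle_feedback callback as in the code above): let the component's own Submit button drive the callback via on_submit, with one key per assistant message, instead of wrapping the widget in st.form.

def handle_feedback(user_response):
    # on_submit passes the feedback dict directly, so there is no need to read fb_k
    st.write(user_response)
    st.toast("✔️ Feedback received!")

streamlit_feedback(
    feedback_type="thumbs",
    optional_text_label="[Optional] Please provide an explanation",
    align="flex-start",
    key=f"fb_{len(msgs.messages)}",  # one widget per assistant message
    on_submit=handle_feedback,
)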


Force the feedback component to disappear after feedback is submitted or if there is a new message from the assistant

Hey everyone! First off, thanks for this widget. It seems to be a good step towards collecting feedback in quickly prototyped ML apps.

I was using the component in the same way as described in the example for a chat application. However, I noticed that the feedback component remains after the next message, either showing the final state of the feedback (thumbs up or thumbs down) or just showing the component with nothing submitted.

Is it possible to force the component to disappear if nothing was entered?

For more information, here is a reproducible example. I have removed the calls to CosmosDB on Azure and replaced the feedback submission with a text file. I also use loguru for logging and streamlit_authenticator to set a password for the application. I am using Python 3.8.10; my streamlit version is 1.30.0, streamlit-authenticator is 0.2.3, streamlit-feedback is 0.1.3, and loguru is 0.7.2.

import os
import time
from yaml.loader import SafeLoader

import streamlit as st
import yaml
from loguru import logger
from openai import OpenAI

import streamlit_authenticator as stauth
from utils import store_feedback_in_text_file
from streamlit_feedback import streamlit_feedback



logger.add("output_2.txt", format="{time} | {level} | {message}", filter="my_module", level="INFO")


OPEN_AI_API_KEY = os.environ["OPENAI_API_KEY"]
ASSISTANT_ID = os.environ["OPENAI_ASSISTANT_ID"] # existing assistant


with open('stauth_config.yaml') as file:
    config = yaml.load(file, Loader=SafeLoader)


authenticator = stauth.Authenticate(
    config['credentials'],
    config['cookie']['name'],
    config['cookie']['key'],
    config['cookie']['expiry_days'],
    config['preauthorized']
)

client = OpenAI(api_key=OPEN_AI_API_KEY)

st.session_state["name"], st.session_state["authentication_status"], st.session_state["username"] = authenticator.login('Login', 'main')

if st.session_state["authentication_status"]:
    authenticator.logout('Logout', 'main')
    st.title("Koppert Digital Assistant")

    ## initialize a new conversation
    if "thread" not in st.session_state:
        st.session_state["thread"] = client.beta.threads.create()

    if "messages" not in st.session_state:
        st.session_state.messages = []
        
    if "message_index" not in st.session_state:
        st.session_state.message_index = 0
        logger.info(f"message_index: {st.session_state.message_index} [initialization]")

    logger.info(f"message_index: {st.session_state.message_index}")

    for i, message in enumerate(st.session_state.messages):
        with st.chat_message(message["role"]):
            st.markdown(message["content"])
        
        if message["role"] == "assistant" and i > 0:
            feedback_key = f"feedback_{int(i / 2)}"

            if feedback_key not in st.session_state:
                st.session_state[feedback_key] = None

            feedback = streamlit_feedback(
                feedback_type="thumbs",
                optional_text_label="[Optional] Please provide an explanation",
                on_submit=store_feedback_in_text_file,
                align="flex-end",
                key=feedback_key
            )
            logger.info(f"feedback: {feedback}")    
            
    if prompt := st.chat_input("Ask me a question :)"):
        st.session_state.messages.append({"role": "user", "content": prompt})
        st.session_state.message_index += 1
        logger.info(f"message_index: {st.session_state.message_index}")
        
        ## Insert item into CosmosDB
        if st.session_state.message_index == 1:
            logger.info("Inserted chat details into chats container in CosmosDB")
        
        ## Insert item into CosmosDB
        logger.info("Inserted question into messages container in CosmosDB")
        
        
        with st.chat_message("user"):
            st.markdown(prompt)
            
        message = client.beta.threads.messages.create(
            thread_id=st.session_state.thread.id,
            role="user",
            content=prompt)
        logger.info("Sent new prompt to OpenAI")
        
        with st.chat_message("assistant"):
            message_placeholder = st.empty()
            full_response = ""
            
            run = client.beta.threads.runs.create(
                thread_id=st.session_state.thread.id,
                assistant_id=ASSISTANT_ID)
            
            # check status of run, if completed, continue, if not, check again in 1 second
            logger.info("Polling OpenAI for response")
            while run.status != "completed":
                run = client.beta.threads.runs.retrieve(
                    thread_id=st.session_state.thread.id,
                    run_id=run.id
                )
                time.sleep(2)
                logger.info("Waiting for OpenAI to respond")
                message_placeholder.markdown(full_response + "▌")
            
            messages = client.beta.threads.messages.list(thread_id=st.session_state.thread.id)
            
            logger.info("Received response from OpenAI")
            logger.info(f"messages.data: {messages.data}")
            
            full_response = messages.data[0].content[0].text.value
            st.session_state.message_index += 1
            logger.info(f"message_index: {st.session_state.message_index}")
            
            logger.info("Inserting OpenAI response into messages container in CosmosDB")

            message_placeholder.markdown(full_response)
        

        st.session_state.messages.append({"role": "assistant", "content": full_response})
        st.rerun()

elif st.session_state["authentication_status"] == False:
    st.error('Username/password is incorrect')
elif st.session_state["authentication_status"] == None:
    st.warning('Please enter your username and password')
def store_feedback_in_text_file(feedback, file_path="feedback.txt"):
    if 'score' in feedback:
        thumbs = feedback['score']
        score = {"👍": "Good", "👎": "Bad"}[thumbs]
        feedback['final_score'] = score
        
    with open(file_path, 'a') as file:
        file.write(f"Feedback Score: {feedback.get('final_score', 'None')}\n")
        file.write(f"Feedback Text: {feedback.get('text', 'None')}\n")

Let me know if there is anything that can be done to have the feedback component disappear if there are new messages.
Cheers!
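
For reference, a minimal sketch of one way to make the widget disappear (assuming the st.session_state.messages list and store_feedback_in_text_file from the example above): only render the component for the most recent assistant message, and skip it once feedback for that message is already in session state.

last_idx = len(st.session_state.messages) - 1

for i, message in enumerate(st.session_state.messages):
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

    # only the latest assistant message gets a feedback widget, and only while
    # no feedback has been stored under its key yet
    if message["role"] == "assistant" and i == last_idx:
        feedback_key = f"feedback_{int(i / 2)}"
        if st.session_state.get(feedback_key) is None:
            streamlit_feedback(
                feedback_type="thumbs",
                optional_text_label="[Optional] Please provide an explanation",
                on_submit=store_feedback_in_text_file,
                align="flex-end",
                key=feedback_key,
            )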

Unable to record the user input and store it in a variable

I have made changes to the code, but it seems there is an issue when using it with st.chat_input. The example code works correctly without it, but when integrated with st.chat_input, it returns None as the value. I have reviewed the modified code to identify any mistakes, but the problem persists. I need to store the user's input in a variable and act on it accordingly.
Below is the sample code:
import streamlit as st
from streamlit_feedback import streamlit_feedback

def _submit_feedback(user_response, emoji=None):
    st.toast(f"Feedback submitted: {user_response}", icon=emoji)
    return user_response.update({"some metadata": 123})

def chatbot_thumbs_app(streamlit_feedback, debug=False):
    st.title("💬 Chatbot")
    openai_api_key = "sadd"
    if "messages" not in st.session_state:
        st.session_state["messages"] = [
            {"role": "assistant", "content": "How can I help you?"}
        ]

    messages = st.session_state.messages

    for n, msg in enumerate(messages):
        st.chat_message(msg["role"]).write(msg["content"])

        if msg["role"] == "assistant" and n > 1:
            feedback_key = f"feedback_{int(n/2)}"

            if feedback_key not in st.session_state:
                st.session_state[feedback_key] = None

            test = streamlit_feedback(
                feedback_type="thumbs",
                optional_text_label="Please provide extra information",
                on_submit=_submit_feedback,
                key=feedback_key,
            )
            print(test)

    if prompt := st.chat_input():
        messages.append({"role": "user", "content": prompt})
        st.chat_message("user").write(prompt)

        if debug:
            st.session_state["response"] = "dummy response"
        else:
            if not openai_api_key:
                st.info("Please add your OpenAI API key to continue.")
                st.stop()
            else:
                pass
            response = "test"
            st.session_state["response"] = "test"
        with st.chat_message("assistant"):
            messages.append(
                {"role": "assistant", "content": st.session_state["response"]}
            )
            st.write(st.session_state["response"])
            st.rerun()

chatbot_thumbs_app(streamlit_feedback)


Streamlit doesn't persist the feedback submission as highlighted thumbs up/thumbs down

I'm seeing a bug where, when I submit feedback for the first time, Streamlit doesn't persist the feedback submission as a highlighted thumbs up/thumbs down; on the UI it gets reset. On subsequent retries, it persists the feedback submission and disables further submission. Any idea why that may be happening? Thanks!

First-time submit: the thumbs up/down gets reset on the UI, but the feedback is successfully uploaded to the db.
Second-time submit: the UI persists and disables further submission, and the feedback is not uploaded to the db (as expected, because we only want to upload the first-time feedback).

streamlit_feedback(
    feedback_type="thumbs",
    optional_text_label="[Optional] Please provide an explanation/suggestion",
    max_text_length=512,
    align="flex-start",
    key=f"feedback_{component}_{job_id}",
    on_submit=submit_feedback,
    args=(table, job_id, component),
)
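
For reference, a minimal sketch using the disable_with_score parameter (assuming submit_feedback, table, job_id and component from the snippet above): read back the previously submitted value stored under the widget's key and pass its score on the next rerun, so the chosen thumb stays highlighted after the first submission.

feedback_key = f"feedback_{component}_{job_id}"

if feedback_key not in st.session_state:
    st.session_state[feedback_key] = None

previous = st.session_state[feedback_key]
streamlit_feedback(
    feedback_type="thumbs",
    optional_text_label="[Optional] Please provide an explanation/suggestion",
    max_text_length=512,
    align="flex-start",
    key=feedback_key,
    disable_with_score=previous.get("score") if previous else None,
    on_submit=submit_feedback,
    args=(table, job_id, component),
)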

Ask textual feedback only if bad review

Description

Is it possible to ask for textual feedback only if the review is negative, or not the best emoji review? Good feedback might be less interesting than bad feedback.
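
As a possible workaround (not a built-in option), a minimal sketch: render the widget without optional_text_label and only show a text box when the returned score is the thumbs-down emoji.

import streamlit as st
from streamlit_feedback import streamlit_feedback

feedback = streamlit_feedback(feedback_type="thumbs", key="fb")

if feedback and feedback.get("score") == "👎":
    comment = st.text_area("Sorry to hear that. What went wrong?", key="fb_comment")
    if comment:
        st.write({"score": feedback["score"], "text": comment})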

streamlit_feedback disappears within expander

Streamlit version: 1.32.2

Issue: When we generate a query and the streamlit_feedback component is inside a collapsed expander, the thumbs/smileys are not rendered in the UI. We have to reload the page with the expander open.

Sample code:

            st.write_stream(stream_data(query_result))
            with st.expander("Feedback"):
                feedback_result = streamlit_feedback(
                    feedback_type="thumbs",
                    optional_text_label="[Optional] Please provide an explanation",
                    align="flex-start",
                    on_submit=handle_feedback,
                    args=(query_text, query_result,),
                    key=f"feedback_{query_text}"
                )

                if feedback_result:
                    st.session_state.feedback.append(feedback_result)

Thumb buttons aren't visible with streamlit==1.32.2

Hi! I'm building an app and hosting it on the Community Cloud with streamlit==1.32.2. I ran into an issue where the feedback buttons aren't visible while deployed on the Cloud. The difference between my local machine and the Cloud was the Streamlit version, so I updated the local version to match the Cloud (streamlit==1.32.2), and the buttons aren't visible there either. I tried a workaround specifying streamlit==1.32.1 in the requirements, but the Cloud ignores that and doesn't seem to allow downgrading Streamlit to 1.32.1, so I'm hoping you can help fix button visibility for streamlit==1.32.2. Thanks.

streamlit_feedback not stored to S3. I would like to store the user_input, the model-generated content, and the feedback entered by the user in the UI to S3. Please help me modify my code.

def store_data_in_s3(data, filename):
    try:
        s3 = boto3.client('s3', region_name="us-east-1")
        s3.put_object(Body=json.dumps(data), Bucket='genai', Key=filename)
    except Exception as e:
        print(f"An error occurred while storing data in S3: {e}")

def store_feedback(feedback,user_input):
    face = feedback['score']
    score = {"😀": 5, "🙂": 4, "😐": 3, "🙁": 2, "😞": 1}[face]
    comment = feedback['text'] or "none"
    feedback_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    feedback_data = {
        "user_input": user_input,
        "feedback_score": score,
        "feedback_text": comment,
        "timestamp": feedback_time
    }
    store_data_in_s3(feedback_data, f"feedback_{user_input}_{feedback_time}.json")
    st.success("Thank you for your feedback!")
     
    
# define the main function
def main():

    tab1 = st.tabs(["**Short Titles**"])[0]

    with st.sidebar:
        params = app_sidebar()

    with tab1:
        input_asin = st.text_input('Please input asin :', '', key='Titles input')

        if st.button('Submit Asin', key='Titles button') and len(input_asin) > 5:
            st.write('---')
            asin = str(input_asin)
            if is_asin_present(asin):
                long_title = df[df['bundle_asin'] == asin]['item_name'].values[0]
                st.write(f"**Current Asin Title:** {long_title}")
                st.write('---')

                models = [{'name': 'Anthropic Claude-v2', 'function': short_titles.claude_short_title, 'prompt': gpt_st_prompt}]

                for model_info in models:
                    generate_content.generate_short_title_content(model_info['name'], model_info['function'], df, input_asin, model_info['prompt'], params)
                       
                    feedback = streamlit_feedback(
                        feedback_type="faces",
                        optional_text_label="[Optional] Please provide an explanation",
                        on_submit=lambda fb: store_feedback(fb, input_asin),
                    )
            else:
                st.error(f"ASIN '{asin}' not found in the Gen AI database. Please enter a valid ASIN.")

file_uploader returns 400 after the user submits feedback

Hi, we are seeing AxiosError: Request failed with status code 400 on the file_uploader whenever we use streamlit-feedback. We saw this issue before, and it was because we hadn't enabled sticky sessions. Now it is happening again when we use streamlit-feedback; I suspect streamlit-feedback routes the traffic to a different instance.

Feedback not collected

def submit_feedback(user_feedback, prompt, response):
    st.toast(f"Feedback submitted โœ…")
    user_feedback["datetime"] = datetime.now().isoformat()
    return user_feedback

if "feedback" not in st.session_state:
    st.session_state.feedback = []
    st.session_state.feedback_key = 0

if prompt := st.chat_input("Write your prompt here"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    full_response = generate_response(prompt=prompt)
    st.markdown(full_response)
    
    st.session_state.feedback_key += 1
    st.session_state.feedback.append(streamlit_feedback(
        feedback_type="faces",
        optional_text_label="[Optional] Please provide an explanation",
        on_submit=submit_feedback,
        kwargs={"prompt": prompt, "response": full_response},
        align="flex-end",
        key=st.session_state.feedback_key,
    ))
    st.session_state.messages.append({"role": "assistant", "content": full_response, "datetime" : datetime.now().isoformat()})

I expect this to collect feedback, but when I click one of the faces, the st.toast does not appear and the feedback is never collected. The front-end components are there to click, and a textbox appears for optional feedback, but on click it seems to suddenly "refresh" and the feedback faces, along with the textbox, just disappear, while the prompt and the response are still there. I can prompt again and the same issue happens all over again.

I already tried the simple way without putting the feedback into st.session_state; same issue, and on click the on_submit function never actually ran.
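
For reference, a minimal sketch of one way to keep the widget alive across reruns (assuming st.session_state.messages, generate_response and submit_feedback from the snippet above): render the feedback component on every rerun for the latest assistant message, keyed by the message index, instead of creating it only inside the chat_input block with a key that changes on every run.

if prompt := st.chat_input("Write your prompt here"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    full_response = generate_response(prompt=prompt)
    st.markdown(full_response)
    st.session_state.messages.append({"role": "assistant", "content": full_response})

# rendered on every rerun, so the rerun triggered by clicking a face still finds the widget
if st.session_state.messages and st.session_state.messages[-1]["role"] == "assistant":
    streamlit_feedback(
        feedback_type="faces",
        optional_text_label="[Optional] Please provide an explanation",
        on_submit=submit_feedback,
        kwargs={
            "prompt": st.session_state.messages[-2]["content"],
            "response": st.session_state.messages[-1]["content"],
        },
        align="flex-end",
        key=f"feedback_{len(st.session_state.messages)}",
    )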

When using the submit button, the Streamlit chat bot is not showing a success message

I am using streamlit_feedback in my chat app, but I'm not able to print any success message when feedback is submitted. I also want to store the feedback in MongoDB; it would be great if someone could help me with that.

As soon as I click the submit button, it reruns the app without any toast alert!

This is my code snippet

def handle_feedback():
    st.write(st.session_state.fb_k)
    st.toast("✔️ Feedback received!")

if prompt := st.chat_input(" Ask your document?"):
    st.chat_message("user").markdown(prompt)
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st_lottie_spinner(loading_animation, quality='high', height='70px', width='70px'):
        ans, unique_links = get_response_from_django(prompt)
        with st.chat_message("assistant"):
            st.markdown(ans)

    streamlit_feedback(
        feedback_type="thumbs",
        optional_text_label="[Optional] Please provide an explanation",
        align="flex-start",
        key="fb_k",
        on_submit=handle_feedback,
    )

component_value in __init__.py is always None

I've tried several different ways to follow the chatbot examples, but each time I can't get the feedback to make its way back to LangSmith, because component_value is always None, which bypasses the part of the code that sends the feedback to LangSmith. I'm stumped, as is our CTO. Any help is much appreciated. Here's the current iteration of the chatbot:

from langchain_community.chat_message_histories import StreamlitChatMessageHistory
import streamlit as st
from streamlit_feedback import streamlit_feedback
from streaming import StreamHandler
import os
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder, PromptTemplate
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI
from langchain.callbacks.manager import collect_runs
from prompts import core_content, history_input, prompt_assembler
from utils import gather_feedback
from langsmith import Client
import utils
import pdb

utils.configure_openai_api_key()

st.set_page_config(page_title="Market Training Bot", page_icon="⭐")
st.header('Framework Selector')

if 'name' not in st.session_state:
    st.session_state.name = None
if 'role' not in st.session_state:
    st.session_state.role = None
if 'framework' not in st.session_state:
    st.session_state.framework = None
if 'tone' not in st.session_state:
    st.session_state.tone = None
if 'feedback_key' not in st.session_state:
    st.session_state.feedback_key = 0
if 'count' not in st.session_state:
    st.session_state.count = 0

def increment_counter():
    st.session_state.count += 1

def _submit_feedback(user_response, emoji=None):
    st.toast(f"Feedback submitted: {user_response}", icon=emoji)
    return user_response.update({"some metadata": 123})

feedback_kwargs = {
    "feedback_type": "thumbs",
    "optional_text_label": "Please provide extra information",
    "on_submit": _submit_feedback,
}

def clear_cache():
    keys = list(st.session_state.keys())
    for key in keys:
        st.session_state.pop(key)

def update_params():
    st.session_state.name = st.session_state.name_val
    st.session_state.role = st.session_state.role_val
    st.session_state.framework = st.session_state.framework_val
    st.session_state.tone = st.session_state.tone_val

if st.session_state['role'] is None or st.session_state['framework'] is None or st.session_state['tone'] is None:
    with st.form(key='virtual_sales_call_form'):
        st.selectbox(
            'Choose your name:',
            ('Amy', 'Brad', 'Chase', 'Mike', 'Scott', 'Other'),
            index=None,
            key='name_val',
            placeholder='Choose an option',
        )
        st.selectbox(
            'Choose role:',
            ('VP Periop', 'Hospital Surgeon Leader', 'Hospital CFO', 'Hospital CEO', 'ASC Surgeon Equity Partner', 'ASC Administrator'),
            index=None,
            key='role_val',
            placeholder='Choose an option',
        )

        st.selectbox(
            'Choose sales framework:',
            ('SPIN', 'Sandler', 'Challenger', 'Miller Heiman'),
            index=None,
            placeholder='Choose an option',
            key='framework_val',
        )
        st.selectbox(
            'Choose tone:',
            ('nice', 'medium', 'tough'),
            index=None,
            placeholder='Choose an option',
            key='tone_val',
        )
        submit_button = st.form_submit_button(label='Submit', on_click=update_params)

def main():
    history = []

    if st.session_state.get('role') is None or st.session_state.get('framework') is None or st.session_state.get('tone') is None or st.session_state.get('name') is None:
        return

    msgs = StreamlitChatMessageHistory()

    if len(msgs.messages) == 0:
        msgs.add_ai_message(f"Hello! I'm your {st.session_state.tone} conversation virtual {st.session_state.role}. Let's get started.")

    prompt_text = prompt_assembler(st.session_state.role, st.session_state.framework, st.session_state.tone, core_content)
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", prompt_text),
            MessagesPlaceholder(variable_name="history"),
            ("human", "{question}"),
        ]
    )
    model = ChatOpenAI(
        model="gpt-4-1106-preview",
        temperature=0,
        tags=[tag for tag in [st.session_state.name, "virtual_sales_encounter"] if tag is not None]
    )
    chain = prompt | model

    chain_with_history = RunnableWithMessageHistory(
        chain,
        lambda session_id: msgs,  # Always return the instance created earlier
        input_messages_key="question",
        history_messages_key="history",
    )

    for msg in msgs.messages:

        st.chat_message(msg.type).write(msg.content)

        n = st.session_state.count

        if msg.dict()['type'] == 'ai' and n > 1:
        # if msg["role"] == "assistant" and n > 1:
            feedback_key = f"feedback_{st.session_state.count}"

            if feedback_key not in st.session_state:
                st.session_state[feedback_key] = None

            disable_with_score = (
                st.session_state[feedback_key].get("score")
                if st.session_state[feedback_key]
                else None
            )
            streamlit_feedback(
                **feedback_kwargs,
                key=feedback_key,
                disable_with_score=disable_with_score,
            )

    if prompt := st.chat_input():
        with collect_runs() as cb:
            if not cb.traced_runs or not st.session_state.get('run_id') == 'first_run':
                run_id = 'first_run'
            else:
                run_id = cb.traced_runs[0].id
            st.chat_message("human").write(prompt)
            config = {"configurable": {"session_id": "any"}}
            placeholder = st.empty()  # Create a placeholder for the text
            response_text = ""  # Initialize an empty string
            for s in chain_with_history.stream(
                {"history": history, "question": prompt},
                config={
                    "metadata": {
                        "run_id": run_id if st.session_state.get('run_id') is None else st.session_state.run_id
                    },
                    "configurable": {
                        "session_id": st.session_state.get('session_id', 'default_session_id')
                    }
                }
            ):

                s = s.to_json()
                s = s['kwargs']['content']
                formatted_s = '\n'.join(s[i:i+80] for i in range(0, len(s), 80))  # Insert a newline character every 80 characters
                response_text += formatted_s
                placeholder.markdown(response_text)

            if 'run_id' not in st.session_state:
                st.session_state.run_id = run_id
            if st.session_state.get('run_id') == 'first_run' and cb.traced_runs:
                st.session_state.run_id = cb.traced_runs[0].id
            # st.session_state.feedback_id = cb.traced_runs[0].id

            # gather_feedback(st)
            streamlit_feedback(
                **feedback_kwargs, key=f"feedback_{st.session_state.count}"
            )
            increment_counter()

    st.button('End Conversation', on_click=clear_cache)

if __name__ == "__main__":
    main()

The feedback component intermittently gives "Your app is having trouble loading the streamlit_feedback.streamlit_feedback component."

I've noticed that the feedback widget has been giving intermittent errors. Sometimes it works great, but other times it gives errors with no apparent changes.

Sometimes some instances load while others flash a loading placeholder: the top message loads the feedback component correctly while the other two show the placeholder. Then after a few seconds (about 10), those placeholders turn into the warning quoted above.

Other times the feedback doesn't show up at all.
This is the code that takes the messages and loads them into the UI. This function is called from the main.py script, every time that script is reloaded:

def show_chat_messages() -> None:
    """
    Displays the session_state.messages containing the dialog between the user and assistant.
    Assumes that the session has already loaded in messages
    TODO add more metadata to the messages
    """
    for idx, message in enumerate(st.session_state[SessionKeys.MESSAGES.value]):
        with st.chat_message(message.role):
            if message.is_redacted:
                st.write("_content not saved_")
            else:
                st.write(message.content)

        # show feedback for every assistant message
        if message.role == "assistant":
            # send feedback
            if st.session_state[SessionKeys.MESSAGES.value]:
                feedback = streamlit_feedback(
                    feedback_type="faces",
                    optional_text_label="(Optional) Please provide any feedback you have",
                    key=f"feedback-msg-{idx}",
                )
                submit_feedback(
                    feedback,
                    st.session_state.get(SessionKeys.USER_ID.value),
                    st.session_state.get(SessionKeys.CONVERSATION.value),
                    message.message_id,
                )

Am I just overloading the app with too many calls to feedback?

Permanent None feedback

Hey, I tried to implement the streamlit feedback component, but unfortunately, whenever I try to get the feedback (by clicking on a face emoji), I get None as the response. I know this could be related to state in Streamlit, but I couldn't debug it on my own.

My code

def chat(self):
    if not st.session_state["openai_api_key"]:
        st.warning("Please add your OpenAI API key to continue.", icon="⚠️")
        st.stop()
    else:
        # gets the query from the user and answers using OpenAI API
        self.listen_and_answer_query()
        self.get_feedback()

@staticmethod
def get_feedback(feedback_type='faces'):
    feedback = streamlit_feedback(
        feedback_type=feedback_type,
        align="flex-start",
        key=Assistant.generate_unique_key("tmp")
    )
    print(feedback)
    # this never runs because feedback is always None
    if feedback:
        Assistant.log_feedback(feedback, feedback_type)

@staticmethod
def generate_unique_key(user_name):
    # Combine user name, message, and current timestamp to create a unique key
    timestamp = datetime.now().strftime("%Y%m%d%H%M%S%f")  # Using microseconds for uniqueness
    key = f"{user_name}_{timestamp}"
    return key

I specifically use the feedback as the last step in my app, so I don't see anything that could change the state before the feedback is logged. Also, if I comment out the listen_and_answer_query function, I still get a None value every time I click an emoji.
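
For reference, a minimal sketch of a revised get_feedback (assuming the Assistant class above): keep the widget key stable across reruns. Because generate_unique_key() embeds a timestamp, a brand-new component is created on every rerun, so the value from the click on the previous widget instance is never returned.

def get_feedback(feedback_type='faces'):
    feedback = streamlit_feedback(
        feedback_type=feedback_type,
        align="flex-start",
        key="assistant_feedback",  # stable key, not regenerated on each rerun
    )
    if feedback:
        Assistant.log_feedback(feedback, feedback_type)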
