
Comments (15)

agokrani commented on August 23, 2024

No problem at all. You have been doing a lot of good work. Shout-out to this amazing toolkit. Btw, is there any way to reach you other than Twitter? I don't have Twitter, and I might need some help with using this websocket properly. I'm still new to FastAPI and don't know much about it.

from lanarky.

agokrani commented on August 23, 2024

Thanks for the example link. I will give it a deeper look soon. Would you mind having a short call on Zoom or Google Meet? Since I've already started using this framework for my product, it would be good to get some initial help setting it up. If you don't have time, that's okay. Feel free to ping me on LinkedIn if you get the chance. Thanks again.

ajndkr commented on August 23, 2024

I have a conversational agent designed with a custom prompt. If I don't define the agent type, the agent somehow works: it's not able to give correct answers, but I do get a streaming response. Whereas if I set the agent type to Conversational Agent or Chat Agent, streaming stops working.

@agokrani is it possible to send some error logs? or is it a silent failure?

agokrani commented on August 23, 2024

Hi @ajndkr,

I think I fixed it. Basically, the callback that has been written assumes your agent's final answer will always contain the string Final Answer:

Looking at the prompt of the Conversational Agent, there was never a case where the bot would say Final Answer:. Once I changed the format instructions in the prompt to say Final Answer:, it works. Surprisingly, if I remove ai_prefix from the format instructions, the agent just enters an infinite loop. Not sure why.
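To make the fix concrete, here is a hypothetical sketch of what the patched format instructions could look like. The exact wording depends on your agent's prompt (none of this text is lanarky's or LangChain's verbatim template); the key point is that the final-response rule must emit the literal "Final Answer:" prefix the streaming callback watches for, and that the {ai_prefix} placeholder stays in place.

```python
# Hypothetical patched format instructions for a conversational agent.
# The streaming callback only starts forwarding tokens once it sees the
# literal "Final Answer:" prefix, so the prompt must force the model to
# produce it. Removing {ai_prefix} reportedly makes the agent loop forever.
FORMAT_INSTRUCTIONS = """To use a tool, reply with:
Thought: Do I need to use a tool? Yes
Action: the tool name
Action Input: the tool input

When you have a response for the human, reply with:
Thought: Do I need to use a tool? No
{ai_prefix}: Final Answer: [your response here]"""

# The callback's trigger string must appear verbatim:
assert "Final Answer:" in FORMAT_INSTRUCTIONS
```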

ajndkr commented on August 23, 2024

I think I fixed it. Basically, the callback that has been written assumes your agent's final answer will always contain the string Final Answer:

Oh yes! awesome. Apologies for the lack of documentation on this. I will keep this issue open and update the docs later.

ajndkr commented on August 23, 2024

Maybe I can set up a Discord server for this repo, but for now let's use the discussions page. If you are facing problems, there is likely someone else with the same questions. A public discussion will also make it easier for me to redirect other users to our conversation.

agokrani commented on August 23, 2024

Ok! Thanks a lot. A quick question I can add here:

langchain_router = LangchainRouter(
    langchain_url="/chat",
    langchain_object=agent_executor,
    streaming_mode=1,
)

I have this code for my langchain router, and I am adding a websocket connection on top of this router. How can I make this router depend on the current user for authentication, similar to what we have in a normal POST request?

ajndkr commented on August 23, 2024

this is more of a FastAPI-specific question. authentication is not currently supported by this library, but you can check out https://indominusbyte.github.io/fastapi-jwt-auth/advanced-usage/websocket/ for examples.
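For the flavor of what a websocket auth check does, here is a stdlib-only sketch of the verification logic you would call from inside a FastAPI websocket handler or dependency, assuming a simple shared-secret HMAC token scheme. All names here (SECRET, sign_user, verify_token) are hypothetical; a real setup would use a proper JWT library as in the link above.

```python
import hashlib
import hmac
from typing import Optional

SECRET = b"change-me"  # hypothetical shared secret; load from config in practice

def sign_user(username: str) -> str:
    """Issue a token of the form '<username>.<hex HMAC signature>'."""
    sig = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}.{sig}"

def verify_token(token: str) -> Optional[str]:
    """Return the username if the signature checks out, else None.

    In a FastAPI websocket handler you would read this token from a
    query parameter or cookie before calling websocket.accept(), and
    close the connection if verification fails.
    """
    username, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    if username and hmac.compare_digest(sig, expected):
        return username
    return None
```

Inside the websocket endpoint this would look roughly like `user = verify_token(websocket.query_params["token"])`, closing with a policy-violation code when it returns None.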

simon-ne commented on August 23, 2024

Hello, could you please help me?

I'm trying to get streaming to work in FastAPI with the ConversationalChatAgent. This is how I initialize it:

agent = initialize_agent(
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    tools=tools,
    llm=agent_llm,
    memory=memory,
    allowed_tools=[tool.name for tool in tools],
    max_iterations=4,
    agent_kwargs={
        'system_message': prefix,
        'human_message': suffix,
        'output_parser': CustomOutputParser(),
    },
    verbose=True,
    handle_parsing_errors=True,
)

I've tried registering callbacks inside my main.py:

@register_streaming_callback("StuffDocumentsChain")
class AsyncStuffDocumentsChainStreamingCallback(AsyncBaseRetrievalQAStreamingCallback):
    """AsyncStreamingResponseCallback handler for StuffDocumentsChain."""
    pass

@register_streaming_callback("AgentExecutor")
class AsyncAgentsStreamingCallback(AsyncAgentsLanarkyCallback, AsyncLLMChainStreamingCallback):
    """AsyncStreamingResponseCallback handler for AgentExecutor."""
    pass

@app.get("/chat")
async def chat(
    sessionID: str, question: str, model: str, aimodel: str, engine: str, namespace: str
) -> StreamingResponse:

    input = UserInput(
        ai_model=aimodel,
        sessionID=sessionID,
        question=question.strip(),
        index=model,
        namespace=namespace
    )

    if engine == "engine_v3":
        chain, inputs = await chat_service.answer_question_vector(input)
        return LanarkyStreamingResponse.from_chain(chain, inputs, media_type="text/event-stream")

    elif engine == "engine_v4":
        agent = await chat_service.answer_question_agent(input)
        return LanarkyStreamingResponse.from_chain(agent, input, media_type="text/event-stream")

When I do this, I get an error saying

KeyError: "<class 'app.main.AsyncAgentsStreamingCallback'> already registered as AgentExecutor"

but I did not register it anywhere. Streaming on engine v3 works, but engine v3 is based on a single StuffDocumentsChain. I need to get streaming working on engine v4.

I'm really not sure how to make this work, since I'm not very advanced in Python and I can't make much sense of the documentation. Any suggestions would be greatly appreciated.

agokrani commented on August 23, 2024

Hi @simon-ne,

I think this is coming from trying to register another callback under the name AgentExecutor, which is already taken. I had a similar problem when adding custom callbacks. Can you try changing this line

@register_streaming_callback("AgentExecutor") to @register_streaming_callback("CustomAgentExecutor")

and see if it works?

simon-ne commented on August 23, 2024

Thanks for your response @agokrani.

Unfortunately, when I changed AgentExecutor to CustomAgentExecutor, it did not work. The error saying 'already registered as AgentExecutor' is gone and the backend request finishes successfully, but it does not output anything. I'm not sure what to pass to the from_chain function as input. In v3 the input looks like this:

input = {
        "input_documents": docs,
        "human_input": question,
    }

But I'm sure the v4 format will look different. From what I could gather from the lanarky source files, the input is sent to the acall function of the agent executor, I suppose. I tried to print what it takes as input when I call it without lanarky, and the result was this:

{
    'input': 'test',
    'chat_history': [
        AIMessage(content='Here would be what the AI responded as a final answer', additional_kwargs={}, example=False),
        HumanMessage(content='Here is my question from the past', additional_kwargs={}, example=False),
    ],
    'agent_scratchpad': [],
    'stop': ['\nObservation:', '\n\tObservation:']
}

I've tried multiple combinations of this configuration, and all of them resulted in a blank page with no response; the AI agent finished responding successfully, there was just no output from the backend. One outlier was the configuration in which I passed all of the data you can see in the example above as input. In that case, the response was an error saying:

One input key expected got ['agent_scratchpad', 'input']

I'm not sure why that is.

ajndkr commented on August 23, 2024

hi @simon-ne, the issue is that only the zero-shot agent is supported by the library. If you read this comment: #96 (comment), the workaround is to change the token sequence which triggers streaming for agents. In the case of zero-shot agents, the sequence is "Final Answer: ". The callback is built around it:

answer_prefix_tokens: list[str] = ["Final", " Answer", ":"]

I will add support for more agents but my schedule is quite busy these days. If you would like to contribute, feel free to open a pull request and we can take it forward from there.
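To make the trigger mechanism concrete, here is a minimal, self-contained sketch (not lanarky's actual callback code) of how an answer-prefix trigger behaves: the callback buffers incoming tokens and only starts forwarding them to the client once the prefix sequence has been seen. An agent whose prompt never produces that exact sequence, like the ConversationalChatAgent above, therefore streams nothing.

```python
# Sketch of a prefix-token streaming trigger. Class and attribute names
# are illustrative, not lanarky's API.
class AnswerPrefixStreamer:
    def __init__(self, answer_prefix_tokens):
        self.prefix = answer_prefix_tokens
        self.buffer = []       # sliding window of the last len(prefix) tokens
        self.streaming = False
        self.streamed = []     # tokens forwarded to the client

    def on_llm_new_token(self, token: str) -> None:
        if self.streaming:
            self.streamed.append(token)
            return
        self.buffer.append(token)
        if len(self.buffer) > len(self.prefix):
            self.buffer.pop(0)
        if self.buffer == self.prefix:
            self.streaming = True  # everything after this is the final answer

streamer = AnswerPrefixStreamer(["Final", " Answer", ":"])
for tok in ["Thought", ":", " done", "Final", " Answer", ":", " 42", "!"]:
    streamer.on_llm_new_token(tok)
# streamer.streamed is now [" 42", "!"]
```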

agokrani commented on August 23, 2024

You don't need to pass agent_scratchpad manually, because the agent passes that as input to itself to continue the loop. If your agent request finished successfully, you could try adding verbose=True to your LLM or AgentExecutor chain to see what output you are getting.
There is a chance that you don't see output because of parsing problems, or because your agent is stuck in a loop.

agokrani commented on August 23, 2024

Hi @ajndkr,

Instead of supporting all the agents, why not make answer_prefix_tokens an argument you can pass with callback_kwargs, and define certain prefix-token sequences as an Enum in the schema, similar to MessageType? This way people would be able to customize their agents more easily.
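A rough sketch of that suggestion, assuming nothing about lanarky's internals (the enum, function, and parameter names below are all hypothetical, not part of the real API): ship a small enum of known prefix-token sequences, but let an explicit answer_prefix_tokens override win.

```python
# Hypothetical design sketch for configurable answer-prefix tokens.
from enum import Enum

class AnswerPrefix(Enum):
    ZERO_SHOT = ("Final", " Answer", ":")
    CUSTOM = None  # caller must supply answer_prefix_tokens explicitly

def resolve_prefix_tokens(prefix: AnswerPrefix = AnswerPrefix.ZERO_SHOT,
                          answer_prefix_tokens=None):
    """Pick the token sequence that triggers streaming."""
    if answer_prefix_tokens is not None:  # explicit override always wins
        return list(answer_prefix_tokens)
    if prefix.value is None:
        raise ValueError("CUSTOM prefix requires answer_prefix_tokens")
    return list(prefix.value)
```

A callback could then accept these via its kwargs, so each agent type just selects (or overrides) a sequence instead of needing a dedicated callback class.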

ajndkr commented on August 23, 2024

@agokrani please check the new documentation on how you can use the callback handlers: https://lanarky.ajndkr.com/learn/adapters/langchain/fastapi/

I will close this issue for now. please reopen if you'd like to discuss more.
