

dosubot commented on May 28, 2024

To enable function calling for "anthropic.claude-v2" and "anthropic.claude-3-sonnet-20240229-v1:0", or to find alternative models that support it for a multi-document agent architecture, you must first ensure the model is compatible with the function calling API. The FunctionCallingAgent class from LlamaIndex provides a framework for this, but the models you specified must either be compatible with that framework or adapted to fit it.

  1. Model Compatibility: First, verify if your chosen models can integrate with the FunctionCallingAgent. This step is crucial for enabling function calling capabilities.

  2. Adapting Models: If the models do not support function calling out of the box, you may need to adapt them to work with the FunctionCallingAgent framework. This could involve wrapping your models to fit the expected interface for function calling.

  3. Configuring FunctionCallingAgent: Utilize the FunctionCallingAgent by setting up its parameters to customize how your LLM interacts with tools and manages tool calls based on the chat context and user messages.

  4. Integration with ReActAgentWorker: The principles behind the FunctionCallingAgent can be adapted to a structured planner agent setup, though this may require customization to fit the planning and execution flow of ReActAgentWorker.

  5. Error Handling and Parallel Calls Management: Implement logic for error handling and managing parallel tool calls effectively, based on the capabilities of your chosen LLMs and the requirements of your application.

For detailed implementation guidance, refer to the LlamaIndex documentation and source code, specifically the sections on the FunctionCallingAgent class and its integration with agent workers. This will help ensure your multi-document agent architecture operates within the constraints and capabilities of the Bedrock LLMs you wish to use.
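As a concrete starting point for the compatibility check in step 1, LlamaIndex LLMs expose an `is_function_calling_model` flag on `llm.metadata`. The helper below is an illustrative sketch (not part of LlamaIndex) showing how that flag could gate the choice of agent type:

```python
# Illustrative helper (not part of LlamaIndex): decide whether a given LLM
# can back a FunctionCallingAgent. LlamaIndex LLM classes expose an
# `is_function_calling_model` flag on `llm.metadata`; models that report
# False need a ReAct-style agent instead.

def supports_function_calling(llm) -> bool:
    """Return True if the LLM's metadata reports function-calling support."""
    return bool(getattr(llm.metadata, "is_function_calling_model", False))
```

Any object with a `metadata.is_function_calling_model` attribute works here, so the check applies uniformly across LLM backends.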


from llama_index.

logan-markewich commented on May 28, 2024

You need to use the ReActAgentWorker, since the Bedrock LLM class doesn't implement the FunctionCallingLLM interface.


logan-markewich commented on May 28, 2024
from llama_index.core.agent import (
    StructuredPlannerAgent,
    ReActAgentWorker,
)

# lyft_tool and uber_tool are QueryEngineTool instances defined elsewhere.
# Create the ReAct agent worker, which handles step-by-step tool reasoning.
worker = ReActAgentWorker.from_tools(
    [lyft_tool, uber_tool], verbose=True
)

# Wrap the worker in the top-level structured planner agent.
agent = StructuredPlannerAgent(
    worker, tools=[lyft_tool, uber_tool], verbose=True
)
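For completeness, `lyft_tool` and `uber_tool` in the snippet above are typically QueryEngineTool objects built over per-document indexes. The sketch below follows the standard LlamaIndex pattern; the paths, names, and descriptions are illustrative assumptions, and running it requires configured LLM and embedding backends:

```python
# Hedged sketch of how lyft_tool / uber_tool might be constructed.
# File paths and tool names here are placeholders, not from the thread.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.tools import QueryEngineTool

def make_doc_tool(path: str, name: str, description: str) -> QueryEngineTool:
    """Build a query-engine tool over a single document."""
    docs = SimpleDirectoryReader(input_files=[path]).load_data()
    index = VectorStoreIndex.from_documents(docs)
    return QueryEngineTool.from_defaults(
        query_engine=index.as_query_engine(),
        name=name,
        description=description,
    )

# lyft_tool = make_doc_tool("data/lyft_2021.pdf", "lyft_10k",
#                           "Answers questions about Lyft's 2021 10-K filing.")
# uber_tool = make_doc_tool("data/uber_2021.pdf", "uber_10k",
#                           "Answers questions about Uber's 2021 10-K filing.")
```

The tool `description` matters: the ReAct agent uses it to decide which tool to call for a given question.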


dosubot commented on May 28, 2024

Hey @noviljohnson! 😄 Long time no type, huh? I hope you've been doing great since our last chat. 🚀 Let me dive into your query about multi document agents and bedrock llms, and I'll get back to you with some insights in a jiffy!


noviljohnson commented on May 28, 2024

Hi @logan-markewich,
thank you!

It worked with ReActAgentWorker.

But I am sometimes getting the response in English and sometimes in a different language (maybe French/Latin/Spanish 😂).
Could you tell me why this is happening, or should I change anything?

This is the output I got (originally in Spanish; translated here):

The document proposes Quantized Low Rank Adapters (QLORA) as an efficient method for fine-tuning quantized large language models (LLMs) for specific tasks. QLORA adds a small number of task-specific parameters to a quantized LLM in the form of low-rank adapters. This allows efficient fine-tuning without retraining or modifying the base model. Quantizing the adapters makes inference very fast and memory-efficient compared with full-precision adapters. Experiments show that QLOR

But in the thought/action plan print statements I see English:

Answer: Based on the summary from the reinforcement_learning tool, some key points of the paper on a standardized reinforcement learning framework for AAM contingency management are:

  • It proposes using reinforcement learning and an MDP formulation for real-time decision making and contingency management in AAM.

  • The MDP is integrated into the AAM-Gym simulation framework to enable rapid prototyping and benchmarking of algorithms.

  • This provides a way to evaluate existing systems and enables future research on RL algorithms for AAM contingency management.

  • The paper includes baseline results and metrics using the framework to demonstrate its capabilities.

thanks😊
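One low-effort mitigation for this language drift is to pin the output language in the query itself, before resorting to editing the agent's prompts. The helper below is illustrative and not part of LlamaIndex; the same instruction could alternatively be placed in the agent's system prompt:

```python
# Illustrative helper (not part of LlamaIndex): append an explicit
# output-language instruction so the model doesn't drift into Spanish.

def pin_language(query: str, language: str = "English") -> str:
    """Append an explicit output-language instruction to a query."""
    return f"{query.strip()} Respond only in {language}."
```

Hypothetical usage with the agent from earlier in the thread: `agent.query(pin_language("Summarize the QLORA paper."))`.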


noviljohnson commented on May 28, 2024

Hi,
after experimenting for some time, I am now getting all responses in English.
But it is taking 1.5 to 2 minutes to return a response. Is this a normal response time?
Is there a way to reduce it?

Thank you.


logan-markewich commented on May 28, 2024

The response time depends on the LLM being used, how many LLM calls are being made, and how much text the LLM is writing.

Typically there is:

  1. one LLM call to either call a tool or respond,
  2. one tool call (which may itself involve one or more LLM calls),
  3. one LLM call to either respond using the tool output or call another tool,

and so on.
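To see where those calls spend their time, you can wrap each agent query in a simple timer (sketch below, pure standard library), or use LlamaIndex's built-in LlamaDebugHandler callback, which records per-event timings including individual LLM calls:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label: str, log: list):
    """Record how long the enclosed block takes, appending (label, seconds)."""
    start = time.perf_counter()
    try:
        yield
    finally:
        log.append((label, time.perf_counter() - start))

# Hypothetical usage with the agent from earlier in the thread:
# log = []
# with timed("agent.query", log):
#     response = agent.query("Compare the two documents.")
# print(log)
```

If most of the wall-clock time sits in a single query, a faster Bedrock model or fewer tool round-trips are the usual levers.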


noviljohnson commented on May 28, 2024

Thank you @logan-markewich

