
autogen-ui's Introduction

AutoGen UI

AutoGen UI Screenshot

Experimental UI for working with AutoGen agents, based on the AutoGen library. The frontend is built with Next.js and the web API with FastAPI.

Why AutoGen UI?

AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve complex tasks. A UI can help in the development of such applications by enabling rapid prototyping, testing, and debugging of agents and agent flows (defining, composing, etc.), and by making it easier to inspect agent behaviors and outcomes.
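For context, below is a minimal sketch of the kind of two-agent conversation the UI wraps, using the pyautogen API (the model name, key, and task message are placeholders, not values from this project):

from autogen import AssistantAgent, UserProxyAgent

# Placeholder LLM config; in practice this comes from your environment or config list.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "<your_key>"}]}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",      # do not prompt a human for input
    code_execution_config=False,   # no local code execution in this sketch
)

# The user proxy sends the task; the two agents then converse until termination.
user_proxy.initiate_chat(assistant, message="Tell me a fun fact about the Eiffel Tower.")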

Note: This is early work in progress.

Note that you will have to set up your OPENAI_API_KEY or a general LLM config as an environment variable. Also see this article for how AutoGen supports multiple LLM providers.

export OPENAI_API_KEY=<your key>
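If you use a non-OpenAI or locally hosted provider, the key and endpoint are typically supplied through a config list rather than a single environment variable. A hedged sketch (field names vary across pyautogen/openai versions, with newer releases using base_url where older ones used api_base; the URL and model below are placeholders):

# Sketch of an LLM config list for pyautogen-based code (placeholder values).
config_list = [
    {"model": "gpt-4", "api_key": "<your_openai_key>"},
    {
        # A locally hosted, OpenAI-compatible endpoint.
        "model": "<local-model-name>",
        "base_url": "http://localhost:5001/v1",  # "api_base" on older versions
        "api_key": "not-needed",
    },
]
llm_config = {"config_list": config_list}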

Getting Started

Install dependencies. Python 3.9+ is required. You can install from PyPI using pip.

pip install autogenui

or to install from source

git clone git@github.com:victordibia/autogen-ui.git
cd autogen-ui
pip install -e .

Run the UI server.

Set env vars OPENAI_API_KEY and NEXT_PUBLIC_API_SERVER.

export OPENAI_API_KEY=<your_key>
autogenui # or with --port 8081

Open http://localhost:8081 in your browser.

To modify the UI, make changes in the frontend source files and run the frontend build command (see the Development section below) to rebuild the frontend.

Development

Backend - with hot-reload

autogenui --reload

Note: the UI served by this CLI is a pre-compiled version produced by running the frontend build command shown below. That means if you change the frontend code, or change the hostname or port the backend is running on, the frontend needs to be rebuilt for the updated code to load through this command.

Frontend

cd frontend

Install dependencies

yarn install

Run in dev mode - with hot-reload

export NEXT_PUBLIC_API_SERVER=http://<your_backend_hostname>/api

your_backend_hostname is the host and port that autogenui is running on, e.g. localhost:8081

yarn dev

(Re)build

Remember to install dependencies and set NEXT_PUBLIC_API_SERVER before building.

yarn build

Roadmap

  • FastAPI endpoint for AutoGen. This involves setting up a FastAPI endpoint that can respond to end-user prompt requests using a basic two-agent format (a rough sketch is shown after this list).
  • Basic Chat UI. Frontend UI with a chat box for sending requests and showing responses from the endpoint for a basic two-agent format.
    • Debug Tools: enable support for useful debugging capabilities, like:
      • viewing the number of agent turns per request
      • defining the agent config (e.g. assistant agent + code agent)
      • appending conversation history per request
      • displaying the cost of each interaction (number of tokens and $ cost)
  • Streaming UI. Enable streaming of agent responses to the UI, so the UI can show agent responses as they are generated instead of waiting for the entire response.
  • Flow-based Playground UI.
    Explore the use of a tool like React Flow to add agent nodes and compose agent flows. For example, set up an assistant agent + a code agent, click run, and view the output in a chat window.
    • Create agent nodes
    • Compose agent nodes into flows
    • Run agent flows
  • Explore external integrations, e.g. with Flowise
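As an illustration of the first roadmap item, here is a rough sketch of what such a prompt endpoint could look like. This is not the project's actual server code; the route path mirrors the /api/generate route mentioned in the issues below, and everything else here is an assumption:

import os

from fastapi import FastAPI
from pydantic import BaseModel
from autogen import AssistantAgent, UserProxyAgent

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str

@app.post("/api/generate")
def generate(req: GenerateRequest):
    # Build a basic two-agent pair for each request (placeholder model/config).
    llm_config = {"config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]}
    assistant = AssistantAgent("assistant", llm_config=llm_config)
    user_proxy = UserProxyAgent(
        "user_proxy", human_input_mode="NEVER", code_execution_config=False
    )
    user_proxy.initiate_chat(assistant, message=req.prompt)
    # Return the full conversation history for the UI to render.
    return {"messages": user_proxy.chat_messages[assistant]}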

References

@inproceedings{wu2023autogen,
      title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework},
      author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Shaokun Zhang and Erkang Zhu and Beibin Li and Li Jiang and Xiaoyun Zhang and Chi Wang},
      year={2023},
      eprint={2308.08155},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}

autogen-ui's People

Contributors

dependabot[bot], janaka, victordibia


autogen-ui's Issues

Installation Issue

Can you please make a video of how you install and set this up?
I followed the instructions exactly as documented, and I get an error trying to start it.
Either a video or some exact commands to get it working would be helpful.
The UI looks incredible, but until we can use it, it's not much help.

Question: setting this up?

I have successfully cloned this repo, but I am not sure how to run it next.
I could not gather enough information from the README.md file, so could you please give me a hint on how I can make this web UI run with AutoGen?

thanks

Can't Load API Key

Even after creating a .env file with my API key, it is not recognized. Is there a way to hard code the API key into the script?
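A possible workaround, assuming you are willing to edit the launch code yourself (this is not a documented autogen-ui feature): set the key in Python before any agents are created, or pass it directly in the llm_config.

import os

# Set the key programmatically before the agents/server start
# (or load it yourself, e.g. with python-dotenv).
os.environ["OPENAI_API_KEY"] = "sk-..."

# Alternatively, pass the key directly in the llm_config given to an agent:
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "sk-..."}]}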

Ability to run locally

I'm able to run autogen using a config list like this:
config_list = [
    {
        "model": "mistral-7b-instruct-v0.1.Q5_0.gguf",  # or "mistral-instruct-7b": the name of your running model
        "api_base": "http://0.0.0.0:5001/v1",  # the local address of the api
        "api_type": "open_ai",
        "api_key": "sk-111111111111111111111111111111111111111111111111",  # just a placeholder
    }
]

This talks to text-generation-webui, which has the OpenAI API emulation turned on.
It would be nice to have a similar way of using a local LLM with autogen-ui.

Streaming Assistant/user_proxy interaction in real time

Is it possible to achieve real-time streaming of Assistant/user_proxy interactions, like ChatGPT/Claude? If so, it would greatly enhance user interactivity! Do you have any ideas? I know we can do it by patching some functions in the conversable agent, but I'm looking for a better and more scalable solution. @victordibia @janaka

Tx.
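One possible shape for the "patching" approach mentioned above: intercept each message with pyautogen's register_reply hook and push it onto a queue the server can stream from. The queue plumbing and hook placement here are assumptions for illustration, not a vetted solution:

import queue

from autogen import Agent, AssistantAgent, UserProxyAgent

message_queue: "queue.Queue[dict]" = queue.Queue()

def forward_message(recipient, messages=None, sender=None, config=None):
    # Called whenever an agent starts composing a reply; push the latest
    # incoming message for the UI, then let the normal reply logic continue.
    if messages:
        message_queue.put({"sender": sender.name, "content": messages[-1].get("content")})
    return False, None  # False = this hook does not produce the final reply

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "<your_key>"}]}
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER",
                            code_execution_config=False)

# Register the hook on both agents so every turn is captured; the FastAPI
# server could then drain message_queue over a WebSocket or SSE connection.
for agent in (assistant, user_proxy):
    agent.register_reply([Agent, None], forward_message, position=0)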

Always copies ChatGPT's response

How can I solve this problem?

hello


assistant (to user_proxy):

Hello! How can I assist you today?


user_proxy (to assistant):

Sure, I'd be happy to help. What do you need assistance with?


assistant (to user_proxy):

Hello! How can I assist you today?


user_proxy (to assistant):

It seems like you're trying to mimic my responses. How can I assist you further?


assistant (to user_proxy):

Hello! How can I assist you today?


user_proxy (to assistant):

Hello! As an AI, I don't require assistance. But thank you for asking! How can I assist you instead?

what is the height of the eiffel tower


assistant (to user_proxy):

The height of the Eiffel Tower is approximately 330 meters (1,083 feet) including antennas. However, the height varies slightly depending on the ambient temperature as the metal expands or contracts.


user_proxy (to assistant):

The height of the Eiffel Tower is approximately 330 meters (1,083 feet) including antennas. However, the height varies slightly depending on the ambient temperature as the metal expands or contracts.


assistant (to user_proxy):

That's correct! The Eiffel Tower's height can indeed vary by a few inches depending on the temperature. The iron structure expands and contracts with the heat and cold. This is a common characteristic of metal structures. Is there anything else you would like to know?


user_proxy (to assistant):

I'm glad you found the information accurate! Yes, the Eiffel Tower is a fascinating structure. If you have any other questions or need information on a different topic, feel free to ask!


assistant (to user_proxy):

I'm glad you found the information accurate! Yes, the Eiffel Tower is a fascinating structure. If you have any other questions or need information on a different topic, feel free to ask!


user_proxy (to assistant):

I'm glad you found the information helpful! If you have any other questions, feel free to ask. I'm here to help!


Following your instructions for setup, it's throwing an error.

git clone git@github.com:victordibia/autogen-ui.git
Cloning into 'autogen-ui'...
The authenticity of host 'github.com (140.82.121.4)' can't be established.
ECDSA key fingerprint is
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'github.com,140.82.121.4' (ECDSA) to the list of known hosts.
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

AssertionError: (Deprecated) The autogen.Completion class requires openai<1 and diskcache.

I am currently using autogen-ui in a project and encountered an issue related to the openai library version. My environment uses openai version 1.3.6, but autogen-ui seems to require a version lower than 1.0. Due to specific needs in my project, downgrading the openai library is not a feasible option for me.

Original error: AssertionError: (Deprecated) The autogen.Completion class requires openai<1 and diskcache.

I am seeking guidance or a workaround to make autogen-ui compatible with openai version 1.3.6. Has anyone faced a similar issue or can provide insights into modifying the codebase for compatibility? Any help or suggestions would be greatly appreciated.

Support UserProxyAgent with activated human_input_mode

If human_input_mode is activated, it currently asks for input in the terminal running the FastAPI server. It would be great if these requests could be made in the UI, so users can consent to executing queries or steer the interaction in another direction before ending it.
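For reference, the behavior described comes from how the user proxy agent is constructed; a small sketch of the relevant pyautogen parameter (the values here are illustrative, not this project's configuration):

from autogen import UserProxyAgent

# With human_input_mode="ALWAYS" (or "TERMINATE"), pyautogen currently prompts for
# input in the terminal running the server; the request is to surface this prompt
# in the UI instead.
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="ALWAYS",  # ask a human before every reply
    code_execution_config={"work_dir": "coding", "use_docker": False},
)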

A few issues (missing dependencies, connection error, python version)

Hello,

thanks for this template! I followed the instructions carefully, but unfortunately I'm having a few issues:

  1. It seems the dependencies require Python >=3.9. I had 3.8 installed but I received this error message:
    ERROR: Package 'autogenui' requires a different Python: 3.8.0 not in '>=3.9'

  2. After switching to 3.9 and trying to launch with autogenui, I see that typer seems to be missing from the dependencies:
    ModuleNotFoundError: No module named 'typer'

  3. After installing typer, I notice that autogen itself is missing too

  4. After installing autogen using pip install autogen, I can launch with autogenui, but when trying to send a message I see

  • in the browser/app: Connection error. Ensure server is up and running.
  • console: INFO: 127.0.0.1:51780 - "OPTIONS /api/generate HTTP/1.1" 400 Bad Request

Edit:

OS: Win 11
Python: 3.9.0
Browser: MS Edge Dev

Edit2:

✅ Solution:

Do not accidentally install autogen instead of pyautogen! Also, make sure the practices outlined in the first two answers are followed.
