AutoGen Visualized - Build Multi-Agent Apps with Drag-and-Drop Simplicity.
Agentok Studio is a tool built upon AutoGen, a powerful agent framework from Microsoft and a vibrant community of contributors.
We consider AutoGen to be at the forefront of next-generation Multi-Agent Applications technology. Agentok Studio takes this concept to the next level by offering intuitive visual tools that streamline the creation and management of complex agent-based workflows. This simplifies the entire process for creators and developers.
We strive to create a user-friendly tool that generates native Python code with minimal dependencies. Simply put, Agentok Studio is a diagram-based code generator for AutoGen. The generated code is self-contained and can be executed anywhere as a normal Python program, relying solely on the official `pyautogen` library.
Contributions (Issues, Pull Requests, Documentation, even Typo-corrections) to this project are welcome! All contributors will be added to the Contribution Wall.
To quickly explore the features of Agentok Studio, visit https://studio.agentok.ai. While we offer an online deployment of this project, please note that it is not intended for production use. The service level agreement is not guaranteed, and stored data may be wiped due to breaking changes.
After logging in as a Guest or with your OAuth2 account, you can click the Create New Project button to create a new project. The new project comes with a sample workflow. Click the flashing robot icon at the bottom right to start a conversation.
Due to the limitations of GPT-4 and AutoGen, this simple workflow may not work as expected, but it's a good starting point for understanding the basic concepts of Agentic Apps and Agentok Studio.
For a more in-depth look at the project, please refer to Getting Started.
We made tutorials based on the official notebooks from the AutoGen repository. You can refer to the original notebooks here.
🎲 Planned/Working ✅ Completed 🐞 With Issues ⭐ Out of Scope
Warning
Due to data format incompatibility, the current results have been wiped and need to be re-migrated.
Example | Status | Comments |
---|---|---|
simple_chat | ✅ | Simple Chat |
auto_feedback_from_code_execution | ✅ | Feedback from Code Execution |
 | ⭐ | This is a feature to consider adding to flow generation. #40 |
chess | 🎲 | Depends on the feature of importing custom Agents #38 |
compression | ✅ | |
dalle_and_gpt4v | ✅ | Supported with app.extensions |
function_call_async | ✅ | |
function_call | ✅ | |
graph_modelling_language | ⭐ | Out of project scope. Open an issue if necessary |
group_chat_RAG | 🐞 | This notebook does not work |
groupchat_research | ✅ | |
groupchat_vis | ✅ | |
groupchat | ✅ | |
hierarchy_flow_using_select_speaker | 🎲 | |
human_feedback | ✅ | Human in the Loop |
inception_function | 🎲 | |
 | ⭐ | No plan to support |
lmm_gpt-4v | ✅ | |
lmm_llava | ✅ | Depends on Replicate |
MathChat | ✅ | Math Chat |
oai_assistant_function_call | ✅ | |
oai_assistant_groupchat | 🐞 | Very slow and does not work well; sometimes does not return. |
oai_assistant_retrieval | ✅ | Retrieval (OAI) |
oai_assistant_twoagents_basic | ✅ | |
oai_code_interpreter | ✅ | |
planning | ✅ | This sample works fine, but does not exit gracefully. |
qdrant_RetrieveChat | 🎲 | |
RetrieveChat | 🎲 | |
stream | 🎲 | |
teachability | 🎲 | |
teaching | 🎲 | |
two_users | ✅ | The response can be very long; set a large `max_tokens`. |
video_transcript_translate_with_whisper | ✅ | Depends on the ffmpeg library: `brew install ffmpeg` and export `IMAGEIO_FFMPEG_EXE`. Since ffmpeg takes up too much space, the online version has removed support for it. |
web_info | ✅ | |
cq_math | ⭐ | This example is largely irrelevant to AutoGen; the plain OpenAI API would suffice. |
Async_human_input | ⭐ | Needs a scenario. |
oai_chatgpt_gpt4 | ⭐ | Fine-tuning; out of project scope |
oai_completion | ⭐ | Fine-tuning; out of project scope |
The project contains a frontend (built with Next.js) and a backend service (built with FastAPI in Python), both fully dockerized.
The easiest way to run it locally is with docker-compose:
```shell
cp ui/.env.example ui/.env
rm ui/.env.production
docker-compose up -d
```
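For orientation, here is a minimal sketch of the kind of `docker-compose.yml` this setup implies. The service names, build paths, and port mappings are assumptions based on the three services and ports described in this README; use the compose file shipped with the repository.

```yaml
# Sketch only — refer to the docker-compose.yml in the repository.
services:
  ui:
    build: ./ui
    ports:
      - "2855:2855"
  api:
    build: ./api
    ports:
      - "5004:5004"
  pocketbase:
    build: ./pocketbase
    ports:
      - "7676:7676"
```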
You can also build and run the ui and api services separately with Docker:

```shell
docker build -t agentok-api ./api
docker run -d -p 5004:5004 agentok-api

docker build -t agentok-ui ./ui
docker run -d -p 2855:2855 agentok-ui

docker build -t agentok-db ./pocketbase
docker run -d -p 7676:7676 agentok-db
```
(The default port number 2855 is the address of our first office.)
Railway.app supports deploying Dockerized applications. Clicking the "Deploy on Railway" button streamlines the setup and deployment of the application on the Railway platform:
- Click the "Deploy on Railway" button to start the process on Railway.app.
- Log in to Railway and set the `PORT` environment variable for each service: 2855, 5004, and 7676 respectively.
- Confirm the settings and deploy.
- After deployment, visit the provided URL to access your deployed application.
If you're interested in contributing to the development of this project or wish to run it from the source code, you have the option to run the ui and service independently. Here's how you can do that:
- UI (Frontend)
  - Navigate to the ui directory: `cd ui`.
  - Rename `.env.sample` to `.env.local` and set the values of the variables correctly.
  - Install the necessary dependencies using the appropriate package manager command (e.g., `pnpm install` or `yarn`).
  - Run the ui service using the start-up script provided (e.g., `pnpm dev` or `yarn dev`).
- API (Backend Services)
  - Switch to the api service directory: `cd api`.
  - Rename `.env.sample` to `.env` and `OAI_CONFIG_LIST.sample` to `OAI_CONFIG_LIST`, then set the values of the variables correctly.
  - Create a virtual environment: `python3 -m venv venv`.
  - Activate the virtual environment: `source venv/bin/activate`.
  - Install all required dependencies: `pip install -r requirements.txt`.
  - Launch the api service: `uvicorn app.main:app --reload --port 5004`.
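The `OAI_CONFIG_LIST` file renamed above follows AutoGen's model-configuration convention: a JSON array of per-model entries. A minimal sketch of its shape, built and printed in Python — the exact keys your setup needs may differ (for example, a `base_url` field is only needed for non-OpenAI endpoints), and the API keys here are placeholders:

```python
import json

# Assumed shape of an OAI_CONFIG_LIST file: a JSON array of model
# configurations. Replace the placeholder keys with real API keys.
oai_config_list = [
    {"model": "gpt-4", "api_key": "sk-placeholder"},
    {"model": "gpt-3.5-turbo", "api_key": "sk-placeholder"},
]

# This is roughly how the file on disk would look:
print(json.dumps(oai_config_list, indent=2))
```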
`REPLICATE_API_TOKEN` is needed for the LLaVA agent. If you use this agent, make sure to include this token in the environment variables, such as the Environment Variables on Railway.app.
IMPORTANT: For security reasons, the latest version of AutoGen requires Docker for code execution. You therefore need to install Docker on your local machine beforehand, or add `AUTOGEN_USE_DOCKER=False` to the file `/api/.env`.
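Concretely, the opt-out is a single line in the env file (a sketch; the variable name comes from the note above):

```
# /api/.env — disable Docker-based code execution
AUTOGEN_USE_DOCKER=False
```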
- PocketBase
  - Switch to the PocketBase directory: `cd pocketbase`.
  - Build the container: `docker build -t agentok-db .`
  - Run the container: `docker run -it --rm -p 7676:7676 agentok-db`
Each new commit to the main branch triggers an automatic deployment on Railway.app, ensuring you experience the latest version of the service.
Warning
Changes to the PocketBase project will trigger a rebuild and redeployment of all instances, which will wipe all data.
Please do not use it for production purposes, and make sure you export your flows in time.
Once you've started both the ui and api services by following the steps previously outlined, you can access the application by opening your web browser and navigating to:
- ui: http://localhost:2855
- api: http://localhost:5004 (OpenAPI docs served at http://localhost:5004/redoc)
- pocketbase: http://localhost:7676
If your services started successfully and are running on the expected ports, you should see the user interface or receive responses from the api service at these URLs.
Contributions are welcome! It's not limited to code, but also includes documentation and other aspects of the project. You can open a GitHub Issue or leave comments on our Discord Server.
This project welcomes contributions and suggestions. Please read our Contributing Guide first.
If you are new to GitHub, here is a detailed help source on getting involved with development on GitHub.
Please consider contributing to AutoGen, as Agentok Studio relies on a robust foundation to deliver its capabilities. Your contributions can help enhance the platform's core functionalities, ensuring a more seamless and efficient development experience for Multi-Agent Applications.
This project uses 📦🚀 semantic-release to manage versioning and releases. To avoid overly frequent auto-releases, the release is triggered manually via a GitHub Action.
To follow the Semantic Release process, we enforce the commit-lint convention on commit messages. Please refer to Commitlint for more details.
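For illustration, here are two commit messages that would pass typical commitlint rules under the Conventional Commits preset (the scopes and descriptions are made up):

```
feat(ui): add keyboard shortcuts to the flow canvas

fix(api): handle a missing OAI_CONFIG_LIST without crashing
```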
The project is licensed under Apache 2.0 with additional terms and conditions.