An open-source agent orchestration framework built on top of the latest OpenAI Assistants API.
Home Page: https://vrsen.github.io/agency-swarm/
License: MIT License
Hello,
I am experiencing the following error in both my production environment and my dev environment:
openai.BadRequestError: Error code: 400 - {'error': {'message': 'Runs in status "expired" do not accept tool outputs.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
Here are the relevant entries from higher up on the stack trace.
File "/home/webapp/nice_chat/backend/chatenv/lib/python3.10/site-packages/agency_swarm/threads/thread.py", line 90, in get_completion
self.run = self.client.beta.threads.runs.submit_tool_outputs(
File "/home/webapp/nice_chat/backend/chatenv/lib/python3.10/site-packages/openai/resources/beta/threads/runs/runs.py", line 349, in submit_tool_outputs
return self._post(
This is happening fairly regularly but not always. Please let me know if you need additional information or have any suggestions.
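The error means tool outputs were submitted after the run had already moved to a terminal state. A minimal sketch of a guard (the status names come from the Assistants API; the usage comment is a hypothetical patch point, not the library's actual code):

```python
# Runs only accept tool outputs while they are waiting on tool calls.
# Terminal statuses (per the Assistants API docs) can never accept them.
TERMINAL_STATUSES = {"expired", "cancelled", "failed", "completed"}

def can_submit_tool_outputs(run_status: str) -> bool:
    """Only a run in 'requires_action' accepts tool outputs."""
    return run_status == "requires_action"

# Hypothetical usage inside a get_completion-style loop:
#
#   run = client.beta.threads.runs.retrieve(thread_id=..., run_id=...)
#   if can_submit_tool_outputs(run.status):
#       client.beta.threads.runs.submit_tool_outputs(...)
#   elif run.status in TERMINAL_STATUSES:
#       ...  # recreate the run instead of submitting outputs
```

Re-checking the run status right before submitting would at least turn the 400 into a recoverable branch.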
I'm getting the following PermissionError when trying to upload 2 files to my gpt-assistant via my files_folder path during Agent() initialization:
writer = Agent(
    name="WRITER",
    description="Responsible for generating job proposals based on the given prompt.",
    instructions=writer_instructions,
    files_folder="examples\\JobProposalGents\\user_files",
    tools=[GenerateProposal],
)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'examples\\JobProposalGents\\user_files\\user_overview.md' -> 'examples\\JobProposalGents\\user_files\\user_overview_file-3E9Wyd3c57M8ziItuZep5sH0.md'
I'm running this in the Cursor IDE on Windows. I don't have the files open elsewhere, and the path length is below Windows' MAX_PATH limit. When debugging, the error occurs at line 251, when renaming by adding the OpenAI file id:
def _add_id_to_file(self, f_path, id):
    """Add file id to file name"""
    if os.path.isfile(f_path):
        file_name, file_ext = os.path.splitext(f_path)
        f_path_new = file_name + "_" + id + file_ext
        os.rename(f_path, f_path_new)
        return f_path_new
    else:
        raise Exception("Items in files folder must be files.")
Any thoughts on whether this is an issue with my setup/environment or the code?
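On Windows, transient locks from file watchers (IDE indexers, antivirus) often cause exactly this WinError 32 and clear within a moment. A minimal retry wrapper that could stand in for the `os.rename` call above (a sketch, not the library's fix):

```python
import os
import time

def rename_with_retry(src, dst, attempts=5, delay=0.2):
    """Retry os.rename a few times; on Windows, transient locks from
    indexers or IDE file watchers usually release within a second."""
    for i in range(attempts):
        try:
            os.rename(src, dst)
            return dst
        except PermissionError:
            if i == attempts - 1:
                raise  # still locked after all attempts: surface the error
            time.sleep(delay)
```

If the error persists across all retries, something is holding the file open for longer than a watcher would.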
Trying to integrate Groq into Agency Swarm.
I am working on adding the tutorial notebook and ran into an issue. When running the notebook in Colab, you can see that when you launch Gradio it gives an error saying the value for share must equal True to launch a public interface, but Colab has some underlying settings that override this and launch the public URL anyway.
However, if you try running the notebook with Jupyter, it doesn't launch the public URL. And if you are running the notebook from within a container, it will only launch on 127.0.0.1 inside the container, which is not accessible. You could enable passing through the ability to set the IP address and launch on 0.0.0.0, but for my use case it would suffice to simply enable passing through the sharing setting.
I think I will also need this once I get past the notebooks and am running it as a Python project, since my typical dev environment is a container running on a remote k8s cluster; if I use Gradio it will be nice to have this option.
I have already modified and tested updating the demo_gradio function to leave the default value of share=False but allow share=True to be passed in when calling agency.demo_gradio(height=900, share=True).
I will submit a PR to resolve this issue shortly. I'm not sure whether you normally allow PRs without an associated issue ticket, but I wanted to submit this ticket in advance of the PR to provide a clear explanation.
I'm sure you will see the diff in the PR I submit, but if you want to see the reference from my fork its here: afewell@588d671
Thanks!
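The shape of the change described in this issue can be sketched as follows. This is illustrative only (the real method builds a gr.Blocks UI; `_FakeDemo` stands in so the sketch runs without Gradio installed):

```python
class _FakeDemo:
    """Stand-in for the Gradio interface, so the sketch is runnable without gradio."""
    def __init__(self):
        self.launched_with = None

    def launch(self, **kwargs):
        self.launched_with = kwargs

class AgencyDemo:
    def demo_gradio(self, height=600, share=False):
        demo = _FakeDemo()  # the real method constructs the chat UI here
        # Forward the caller's value instead of hardcoding share=False:
        demo.launch(height=height, share=share)
        return demo
```

The default stays `share=False`, so existing callers are unaffected; container and remote users can opt in with `agency.demo_gradio(height=900, share=True)`.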
First of all, thank you for creating and open-sourcing this awesome library. It would be awesome if we could define and use local models, maybe through the API feature of "LM Studio" or similar? The API created there is almost identical to the one of OpenAI.
I need Azure OpenAI support because it's faster than the official OpenAI API.
I saw the module on threads. Is there an example/function that allows an Agent to converse with multiple participants across multiple threads? This is available in the Assistants API by creating a new thread_id.
How can we extend this to the Agent object?
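One possible shape for this: keep a registry mapping each conversation partner to its own Assistants-API thread. A minimal sketch — `create_thread` is injected so the sketch runs without the OpenAI client; with the real SDK it would be something like `lambda: client.beta.threads.create().id`:

```python
class ThreadRegistry:
    """Maps each conversation participant to its own thread_id,
    creating a new thread on first contact (a sketch, not library code)."""

    def __init__(self, create_thread):
        self._create = create_thread  # callable returning a new thread id
        self._threads = {}

    def thread_for(self, participant: str) -> str:
        """Reuse the participant's thread, or create one on first contact."""
        if participant not in self._threads:
            self._threads[participant] = self._create()
        return self._threads[participant]
```

An Agent object could hold one of these instead of a single thread, so each participant pair gets an isolated conversation history.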
The BrowsingAgent seems to be stuck after loading the initial webpage, and I received the following.
Here is a screenshot from my OpenAI BrowsingAgent.
At first I thought it might have to do with the login process, but then I logged in myself and got the Agent to start at step 2, and it encountered the same error.
May I suggest initially making duplicate files AsyncAgent, AsyncBaseTool, etc., and renaming the functions to be async.
We don't yet have to actually make them asynchronously efficient; we can just use the async mechanism around the synchronous code.
Later we can go into the methods and make them more efficient, once the interface is defined and proven to work.
What do others think of this approach?
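The "async wrapper around synchronous code" step can be sketched with `asyncio.to_thread` (the class and function names here are illustrative, not proposed library names):

```python
import asyncio

def slow_tool_run(x):
    """Stand-in for an existing synchronous BaseTool.run implementation."""
    return x * 2

class AsyncToolSketch:
    async def run(self, x):
        # Wrap the existing synchronous code unchanged;
        # asyncio.to_thread keeps the event loop responsive meanwhile.
        return await asyncio.to_thread(slow_tool_run, x)
```

This gives callers the async interface immediately; the bodies can be made natively asynchronous later without changing the interface again.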
Enhance Agency Swarm with LiteLLM, LLAVA, and potential Ollama integration for private, cost-effective local AI processing.
Significant functionality boost for local AI processing, enhancing Agency Swarm's versatility and cost-effectiveness.
I created an agency with coding/developer capabilities. I asked it to write code for a web portal to upload files. The Agency CEO finally confirms that the task is successful and the web portal development is complete, but I don't see any files nor a localhost URL to test. Can the agents in the agency create/save code files (HTML, CSS, and JS) locally to my machine?
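Agents can only write to your machine through a tool that does so. A minimal sketch of the saving logic — note this is a plain-Python stand-in, not a real agency_swarm tool; the real one would subclass BaseTool and declare its arguments as pydantic fields, but the `run()` body would be the same idea:

```python
import os

class WriteFileTool:
    """Stand-in for a BaseTool subclass that saves generated code to disk."""

    def __init__(self, path: str, content: str):
        self.path = path
        self.content = content

    def run(self) -> str:
        # Create parent directories as needed, then write the file.
        os.makedirs(os.path.dirname(self.path) or ".", exist_ok=True)
        with open(self.path, "w") as f:
            f.write(self.content)
        return f"Saved {self.path}"
```

Without a tool like this in the agent's tools list, the "completed" code exists only in the conversation, which would explain the missing files.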
🐦 BrowsingAgent 🗣️ @qa Manager
I'm sorry, but I can't perform actions such as logging into websites or handling personal accounts. Is there something else I can assist you with?
🐮 QA Manager 🗣️ @user
It appears that I cannot directly execute the requested operations such as logging into websites or managing personal accounts due to limitations in my current capabilities. Therefore, I'm unable to proceed with the quality assurance testing as initially outlined.
Given this situation, I recommend manually performing the steps provided in your task or exploring alternative methods for achieving the desired testing objectives. If you have any other questions or need further assistance, please feel free to let me know!
👤 User: is this being run through the agency we created?
🐹 GenesisCEO 🗣️ @user
I apologize for any confusion. As previously mentioned, we created an agency folder, "Canine Training and Habit Management," to address the needs of your dogs, Kane and Frank. While the intention was to have specialized agents within this agency to handle various parts of the task, due to a technical issue, we couldn't proceed with the creation of these specialized agents.
However, I am committed to assisting you with this task and will continue to provide you with the necessary systems, advice, documentation, and support to ensure your dogs are trained correctly. Although we cannot leverage the agency in the way initially intended, I am here to walk you through each step, just as an agency with individual agents would.
Let's move on to developing the rest of the training plans. We'll address "Aggressive Play" next for Kane and Frank. I will provide detailed strategies to manage this behavior, just as we would if the agency were fully operational with its specialized agents.
Shall we proceed with addressing "Aggressive Play," or is there another aspect of the training you'd like to focus on first?
👤 User: can we try running the agency again?
🐹 GenesisCEO 🗣️ @user
Certainly! Let's re-initiate the process with the agency we created earlier, "Canine Training and Habit Management." This time, I will address each aspect step by step, ensuring that the agency's structure and purpose are utilized effectively to help manage and train Kane and Frank.
First, I will focus on creating a detailed plan for managing aggressive play behavior in both dogs. Once this plan is developed, I will provide a habit tracking and management system tailored to your needs, followed by documented plans and guides for the training systems.
Let's proceed with setting up the tasks for the agency one by one to avoid any errors. Here's the first task: Addressing 'Aggressive Play' for Kane and Frank by creating a specialized training plan.
Would you like me to start with that?
👤 User: yes
🐹 GenesisCEO 🛠️ Executing Function
Function(arguments='{"instructions":"Create a specialized training plan to address the 'Aggressive Play' behavior displayed by Kane and Frank. The plan should include step-by-step strategies, exercises, and positive reinforcement techniques to encourage appropriate play habits.","message":"Could you develop a comprehensive training plan to address and manage the aggressive play behavior in Kane and Frank? This plan should outline strategies that incorporate positive reinforcement and exercises suitable for each dog's temperament and size.","recipient":"train_system"}', name='SendMessage')
🐹 SendMessage ⚙️Function Output
Error: 1 validation error for SendMessage
recipient
Input should be 'AgentCreator' [type=enum, input_value='train_system', input_type=str]
🐹 GenesisCEO 🛠️ Executing Function
Function(arguments='{"agency_name":"Canine Behavioral Training","agency_chart":"[ceo, [ceo, behavioral_expert], [ceo, tracking_expert], [ceo, documentation_expert], [behavioral_expert, tracking_expert]]","manifesto":"The Canine Behavioral Training agency is tasked with developing a comprehensive system for training and behavior modification for Kane and Frank. The agency will provide in-depth knowledge on dog psychology, devise personalized training plans for various behavioral issues, and create a tracking system to monitor the dogs' progress. Additionally, it will produce detailed documentation to assist the owner in implementing these strategies effectively and ensuring long-term success in managing their dogs' habits."}', name='CreateAgencyFolder')
🐹 CreateAgencyFolder ⚙️Function Output
Agency folder has been created in ./Canine Behavioral Training/.
🐹 GenesisCEO 🗣️ @user
It appears that we are experiencing an issue with running the agents through the multi-tool parallel function, which prevents us from fulfilling the request as planned.
At the moment agency-swarm fails when it tries to use Google search. The way it uses the function is correct, though, as is the generated URL.
E.g.
Function(arguments='{"url":"https://google.com/search?q=prompt+optimization+API+for+natural+language+processing"}', name='ReadURL')
This is most likely because of GDPR. If you open google in a private browser tab from Europe you will see this:
To make this work it would need to accept the terms.
Hi, @VRSEN, I noticed that it goes through the CoT by default when SendMessage is sent, and then the split subtasks are sent to the receiver one by one. This is probably practical for most task-splitting workflows, but there are still times when I'd like to skip the CoT and customize the message content. I understand that it might be possible to emphasize skipping the CoT in the instructions when the Agent is instantiated, but I'd still like a more deterministic way of controlling the message content. Would it be possible to provide a parameter to turn off CoT?
Also, I'm developing a MySQL automated ops solution using Agency-Swarm, as I think it's the development framework that best fits my needs. My project is here: DBMA. In the future I'll feed the project development experience and requirements back into the Agency-Swarm framework itself.
Hello
I have run several tests for this and I am unable to get a simple agency created. I'm not sure if it's due to the folder structure not being picked up properly, but today as I tried again, I noticed something:
When I looked inside my OpenAI assistants page, I saw the created assistants don't have proper instructions sent. I am not sure if it's supposed to look like this? I'm guessing this should have the actual instructions and not just a reference to the file path:
This is after running a clean install of agency swarm.
I asked for a relatively small task just to test it, like so:
I'm not sure this is the correct flow, because only the "GenesisCEO" agent is acting here. Is this the intended behavior? It looked different in your video demo @VRSEN
Also, check out where the agency has been added:
It's been added in the path /agency-swarm/notebooks/(agencyname)/(agencyname)/
Is this the intended behavior? - my guess would have been that the agency would be added to the /agency-swarm/agency/(agencyname) path and the agents (which, none were created) would be added to the /agency-swarm/agency/(agentname) path.
Am I missing something here?
The CEO agent seems to think the agency was successfully created, but it doesn't give me instructions to run the agency.py file at the end when the task has been completed, like it's supposed to.
Any help to get this working correctly is highly appreciated.
Whatever I try, I cannot get the notebook running.
pip install -r requirements.txt
in the agency_swarm directory. Any ideas? I also tried changing sys.path, but to no avail.
I've tried to run a swarm with a DeveloperAgent to access a JSON file. I initialized the swarm and ran the Gradio demo from a Colab notebook. Before I started the Gradio demo with the agency, I uploaded the files to the DeveloperAgent in the OpenAI Assistants API playground and turned on retrieval. I realize that's not exactly how agency-swarm is designed to operate; however, how can the Assistants API be utilized fully in a swarm?
Has anyone got their agents to read files?
I have experimented with many different swarms, which created a lot of assistants in my OpenAI account. I decided to start utilizing the 'id' parameter when creating the agents to avoid creating duplicates. To do so, I pulled up a commonly used swarm, added the 'id=' parameter to each Agent in the swarm. I also manually deleted all the agents from my previous swarms within the OpenAI Assistants interface.
I next tried to run my swarm Colab notebook and whether the Agents were created with an 'id' tag or not, I always encounter a 404 error as follows:
NotFoundError Traceback (most recent call last)
in <cell line: 3>()
1 from agency_swarm import Agency
2
----> 3 agency = Agency([
4 tool_coordinator,
5 [tool_coordinator, tool_coder],
6 frames
/usr/local/lib/python3.10/dist-packages/agency_swarm/agency/agency.py in __init__(self, agency_chart, shared_instructions)
41 self._parse_agency_chart(agency_chart)
42 self._create_send_message_tools()
---> 43 self._init_agents()
44 self._init_threads()
45
/usr/local/lib/python3.10/dist-packages/agency_swarm/agency/agency.py in _init_agents(self)
379 agent.id = None
380 agent.add_shared_instructions(self.shared_instructions)
--> 381 agent.init_oai()
382
383 def _init_threads(self):
/usr/local/lib/python3.10/dist-packages/agency_swarm/agents/agent.py in init_oai(self)
102 for assistant_settings in settings:
103 if assistant_settings['name'] == self.name:
--> 104 self.assistant = self.client.beta.assistants.retrieve(assistant_settings['id'])
105 self.id = assistant_settings['id']
106 # update assistant if parameters are different
/usr/local/lib/python3.10/dist-packages/openai/resources/beta/assistants/assistants.py in retrieve(self, assistant_id, extra_headers, extra_query, extra_body, timeout)
137 """
138 extra_headers = {"OpenAI-Beta": "assistants=v1", **(extra_headers or {})}
--> 139 return self._get(
140 f"/assistants/{assistant_id}",
141 options=make_request_options(
/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in get(self, path, cast_to, options, stream, stream_cls)
1029 # cast is required because mypy complains about returning Any even though
1030 # it understands the type variables
-> 1031 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
1032
1033 @overload
/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in request(self, cast_to, options, remaining_retries, stream, stream_cls)
851 stream_cls: type[_StreamT] | None = None,
852 ) -> ResponseT | _StreamT:
--> 853 return self._request(
854 cast_to=cast_to,
855 options=options,
/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in _request(self, cast_to, options, remaining_retries, stream, stream_cls)
928 err.response.read()
929
--> 930 raise self._make_status_error_from_response(err.response) from None
931
932 return self._process_response(
NotFoundError: Error code: 404 - {'error': {'message': "No assistant found with id 'asst_SCFT4gCy2JTX9VysOnxgGD8G'.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
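The 404 suggests a cached assistant id (from the library's settings.json) pointing at an assistant that was deleted manually in the OpenAI dashboard. One possible cleanup, sketched here as an assumption about the settings file layout (a JSON list of `{"name": ..., "id": ...}` entries, matching the `assistant_settings['name']`/`['id']` access in the traceback):

```python
import json
import os

def drop_stale_assistant(settings_path: str, stale_id: str):
    """Remove a cached assistant entry whose id no longer exists on the
    OpenAI side, so the next init can recreate it instead of retrieving
    a nonexistent id and raising a 404."""
    if not os.path.isfile(settings_path):
        return
    with open(settings_path) as f:
        settings = json.load(f)
    settings = [s for s in settings if s.get("id") != stale_id]
    with open(settings_path, "w") as f:
        json.dump(settings, f)
```

Deleting settings.json entirely would have the same effect for all agents at once, at the cost of recreating every assistant.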
Hi,
I get the following error message after several iterations.
UserWarning: name used for saved screenshot does not match file type. It should end with a .png extension
wd.get_screenshot_as_file(tmpfile.name)
please help.
thanks.
I am on a Macbook Pro M2, OS version Sonoma 14.0
While trying to use the BrowsingAgent, I am consistently getting this error:
Error initializing WebDriver: Message: session not created: Chrome failed to start: exited normally.
(session not created: DevToolsActivePort file doesn't exist)
(The process started from chrome location /Applications/Google Chrome.app/Contents/MacOS/Google Chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Stacktrace:
0 chromedriver 0x00000001026c04dc chromedriver + 4162780
1 chromedriver 0x00000001026b8664 chromedriver + 4130404
2 chromedriver 0x000000010230fbc0 chromedriver + 293824
3 chromedriver 0x000000010234175c chromedriver + 497500
4 chromedriver 0x000000010233e128 chromedriver + 483624
5 chromedriver 0x0000000102384c40 chromedriver + 773184
6 chromedriver 0x00000001023496bc chromedriver + 530108
7 chromedriver 0x000000010234a930 chromedriver + 534832
8 chromedriver 0x0000000102685e08 chromedriver + 3923464
9 chromedriver 0x000000010268a3dc chromedriver + 3941340
10 chromedriver 0x000000010266e038 chromedriver + 3825720
11 chromedriver 0x000000010268af3c chromedriver + 3944252
12 chromedriver 0x00000001026606f4 chromedriver + 3770100
13 chromedriver 0x00000001026a7980 chromedriver + 4061568
14 chromedriver 0x00000001026a7af8 chromedriver + 4061944
15 chromedriver 0x00000001026b82e4 chromedriver + 4129508
16 libsystem_pthread.dylib 0x000000018a00f034 _pthread_start + 136
17 libsystem_pthread.dylib 0x000000018a009e3c thread_start + 8
Most of the solutions that I saw on SO point towards certain arguments in chrome_options, which I see have already been added in the selenium.py file. Not sure how to solve this.
Here is my code snippet:
tools.append(GoogleSearchTool)
tools.append(ShellTool)
tools.append(GitHubTool)
agent_po = Agent(name="agent-po", tools=tools)
agent_arch = Agent(name="agent-architect", tools=tools)
agent_dev = Agent(name="agent-developer", tools=tools)
But when I tried to get them to communicate I got:
👤 User: say hello to the developer
SendMessage({"chain_of_thought":"User has requested to send a greeting to the developer. The task at hand simply involves conveying a friendly message from the user to the developer. The appropriate response would be to acknowledge the user's greeting and deliver it to the developer.","recipient":"agent-developer","message":"Hello from the user!"})
Error: 1 validation error for SendMessage recipient Input should be 'agent-architect' [type=enum, input_value='agent-developer', input_type=str]
After much debugging I identified that this was because they shared the same tools list: the Agent was mutating it to add the SendMessage function, and this resulted in the above error.
I will submit a pull request for Agent.__init__():
self.tools = tools[:] if tools is not None else []
Hopefully this will save others some time...
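The aliasing bug described above can be reproduced in miniature (ToyAgent is an invented stand-in, not the real Agent class):

```python
class ToyAgent:
    """Minimal reproduction: both agents alias the same tools list."""

    def __init__(self, name, tools):
        self.name = name
        self.tools = tools  # buggy: stores a shared reference
        # proposed fix: self.tools = tools[:] if tools is not None else []
        self.tools.append(f"SendMessage->{name}")  # mutation leaks to every agent

shared = ["GoogleSearchTool"]
a = ToyAgent("agent-architect", shared)
b = ToyAgent("agent-developer", shared)
# Both agents now see each other's SendMessage entries, and the
# caller's original `shared` list has grown too.
```

With the `tools[:]` copy in place, each agent would mutate only its own list, which is exactly what the proposed one-line fix does.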
I tried testing out agency-swarm via the Colab notebook but I ran into an error. This might be a result of not having access to the gpt-4-1106-preview model.
BadRequestError Traceback (most recent call last)
in <cell line: 3>()
1 from agency_swarm import Agency
2
----> 3 agency = Agency([
4 ceo,
5 [ceo, dev],
6 frames
/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in _request(self, cast_to, options, remaining_retries, stream, stream_cls)
875 # to completion before attempting to access the response text.
876 err.response.read()
--> 877 raise self._make_status_error_from_response(err.response) from None
878 except httpx.TimeoutException as err:
879 if retries > 0:
BadRequestError: Error code: 400 - {'error': {'message': "The requested model 'gpt-4-1106-preview' does not exist.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'model_not_found'}}
It's not ideal that specific model names are being hardcoded into tools. Even if model abstraction is not the first feature on the roadmap, not abstracting the models is a blocker for a number of potential features. Even when using only OpenAI, I may want to try a different model for a function, maybe for cost savings or performance improvements. I may want to use Azure, which uses deployment names rather than model names, and of course I may want to introduce support for other private or OSS model providers at some point. Step 1 for any of these would be providing a model abstraction.
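One minimal shape for such an abstraction: a single lookup point that tools reference by role instead of by provider model name. The role names and defaults below are invented for the sketch, not part of the library:

```python
# Illustrative only: tools ask for a role ("smart", "fast"), and a single
# registry decides which provider model or Azure deployment that maps to.
MODEL_REGISTRY = {
    "smart": "gpt-4-1106-preview",
    "fast": "gpt-3.5-turbo-1106",
}

def resolve_model(role: str, overrides: dict = None) -> str:
    """Overrides let one deployment (e.g. Azure deployment names) swap
    every model in a single place instead of editing each tool."""
    merged = {**MODEL_REGISTRY, **(overrides or {})}
    return merged[role]
```

Hardcoded strings in tools would then become `resolve_model("smart")`, and Azure or OSS backends become a matter of passing different overrides.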
Maybe settings.json should store data keyed by the API key's MD5 hash?
File c:\Python312\Lib\site-packages\agency_swarm\agency\agency.py:342, in Agency._init_agents(self)
    339 elif isinstance(agent.files_folder, list):
    340     agent.files_folder += self.shared_files
--> 342 agent.init_oai()
    344 if self.settings_callbacks:
    345     with open(self.agents[0].get_settings_path(), 'r') as f:
File c:\Python312\Lib\site-packages\agency_swarm\agents\agent.py:137, in Agent.init_oai(self)
...
(...)
    967 stream_cls=stream_cls,
    968 )
NotFoundError: Error code: 404 - {'error': {'message': "No assistant found with id 'asst_4f2xgztVgYZdWP54Hq0h2u2K'.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
Tried running the web_browser_agent.ipynb and ran into the following error:
_openai.BadRequestError: Error code: 400 - {'error': {'message': '1 validation error for Request\nbody -> file_ids\n none is not an allowed value (type=type_error.none.not_allowed)', 'type': 'invalid_request_error', 'param': None, 'code': None}}_
Both get_completion() methods defined in agency.py (line 49) and thread.py (line 21) have a default of message_files=None; however, changing that default to an empty list (message_files=[]) makes it compatible with the OpenAI spec below, and I'm able to successfully converse with the browsing agent:
file_ids (array, optional, defaults to [])
A list of File IDs that the message should use. There can be a maximum of 10 files attached to a message. Useful for tools like retrieval and code_interpreter that can access and use files.
Not sure though if this will break anything elsewhere as I didn't test this with actually passing files around.
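An alternative to changing the signature default (which would introduce a shared mutable default) is coercing None right before the API call. A sketch, assuming the fix lands wherever the message payload is built:

```python
def normalize_message_files(message_files=None):
    """Coerce None to [] just before the API call, matching the spec's
    'file_ids defaults to []' without putting a mutable [] default in the
    function signature (the classic Python shared-default pitfall)."""
    return [] if message_files is None else message_files
```

This keeps the public signatures unchanged while making the payload spec-compliant.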
⚡ Researcher invoked function SendMessage
[13:36:50] ERROR An error occurred: Error code: 400 - {'error': {'message': "Can't add messages to thread_mCP3xPaOBOdrCUIULuxV9BBH while a run run_AZmY4LpOuxQmjU6jmhShBRLB is active.", 'type': 'invalid_request_error', 'param': None, 'code': None}} (agency.py:234)
Happens often for me on an agency like this:
agency = Agency([
ceo,
[ceo, researcher],
[researcher, writer],
[ceo, writer],
[writer, ceo],
[writer, researcher]
], shared_instructions=agency_manifesto)
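The API rejects new messages while the thread's current run is active, so heavily interconnected charts like this one hit the race often. One possible mitigation is polling the run status before adding a message; here as a sketch with the API call abstracted into a callable (with the real SDK, `get_status` might be `lambda: client.beta.threads.runs.retrieve(thread_id=..., run_id=...).status`):

```python
import time

def wait_until_no_active_run(get_status, timeout=60.0, poll=0.5):
    """Poll the thread's latest run until it leaves an active state, so a
    subsequent messages.create() won't 400. Active statuses per the
    Assistants API docs."""
    active = {"queued", "in_progress", "requires_action", "cancelling"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status not in active:
            return status
        time.sleep(poll)
    raise TimeoutError("run still active after timeout")
```

Calling this before each SendMessage-triggered message add would serialize access to the thread instead of surfacing the 400.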
Hello,
Consider creating a Discussions tab on the project, so we don't pollute Issues.
I'll be playing with this as I don't really like AutoGen for building on top of (despite it being very cool).
Thank you for your work.
Instead of creating the app, I just get the below response from the CEO. What is going wrong?
GenesisCEO 🗣️ @user
As your GenesisCEO agent, I'm here to streamline the conceptual planning and coordination of this project. However, actual software development, including coding, deploying, and testing the application, extends beyond the capabilities of real-time dialogue like ours or the tools at my disposal. My role encompasses outlining the project plan, defining requirements, and ensuring clarity between different development aspects for a smooth transition into actual development.
To bring this project to life, follow these steps:
Project Setup:
Assemble a development team that includes backend developers familiar with Python, Flask, SQLAlchemy, and Frontend Developers experienced in React or Vue.js, and experts in UI/UX design.
Set up a project repository using Git for version control.
Backend Development:
Begin with setting up the Flask application and database models according to the outlined schema.
Implement the API endpoints for managing drivers, vehicles, pickups, and routes.
Integrate with the Google Maps API for real-time traffic data and routing functionalities.
Frontend Development:
Use React or Vue.js to create the application's frontend based on the components and interactions outlined.
Integrate the frontend with the backend via API calls to fetch and display data, manage itineraries, and visualize routes on the map.
Testing and Refinement:
Conduct thorough testing for both the backend and frontend. This includes unit testing, integration testing, and user acceptance testing (UAT) to ensure the application meets all requirements and functions smoothly.
Refine the application based on feedback and test results, making necessary adjustments to improve performance and user experience.
Deployment:
Prepare the application for deployment. This includes setting up a production server, configuring the environment, and deploying the application.
Ensure there are strategies in place for maintaining and updating the application, including monitoring and error logging for ongoing support.
Documentation and Training:
Create comprehensive documentation for the application, covering both technical aspects for future development and user guidance for effectively using the application.
Provide training and support for users to familiarize them with the application's functionalities and best practices.
While my assistance with the conceptual and planning phases of this project concludes here, the outlined steps serve as a roadmap for actual development and implementation. Remember, success in software development relies on clear communication, meticulous planning, and iterative refinement based on feedback and testing. Good luck with your project!
Could you please add Google Gemini API support.
I've seen quite a few people (including myself) struggling to get the code working despite following what you're doing on a video or the guides. This could be due to any number of environmental factors.
Implementation of this in a container should remove this issue.
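A minimal sketch of such a container (the base image, versions, and entrypoint here are assumptions for illustration, not the project's official setup):

```dockerfile
# Hypothetical dev container for agency-swarm; versions are illustrative.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# The API key is supplied at runtime, e.g.:
#   docker run -e OPENAI_API_KEY=sk-... agency-swarm
CMD ["python", "agency.py"]
```

Pinning the Python version and dependencies inside the image would eliminate most of the environment drift people are hitting.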
Task exception was never retrieved
future: <Task finished name='mwhqygqb27f_1' coro=<Queue.process_events() done, defined at /Users/xx/anaconda3/lib/python3.11/site-packages/gradio/queueing.py:343> exception=1 validation error for PredictBody
event_id
Field required [type=missing, input_value={'data': [[]], 'event_dat...on_hash': 'mwhqygqb27f'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.3/v/missing>
Traceback (most recent call last):
File "/Users/xx/anaconda3/lib/python3.11/site-packages/gradio/queueing.py", line 347, in process_events
client_awake = await self.gather_event_data(event)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xx/anaconda3/lib/python3.11/site-packages/gradio/queueing.py", line 220, in gather_event_data
data, client_awake = await self.get_message(event, timeout=receive_timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xx/anaconda3/lib/python3.11/site-packages/gradio/queueing.py", line 456, in get_message
return PredictBody(**data), True
^^^^^^^^^^^^^^^^^^^
File "/Users/xx/anaconda3/lib/python3.11/site-packages/pydantic/main.py", line 165, in __init__
    __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 1 validation error for PredictBody
event_id
Field required [type=missing, input_value={'data': [[]], 'event_dat...on_hash': 'mwhqygqb27f'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.3/v/missing
I am really excited to use this project, I love the design decisions and if all goes well I would like to be a regular user and contributor.
I know the tutorial notebook says Zapier is optional, but I would really like to learn as much as I can, so I thought I would try, and I have never had such a frustrating experience trying to get an API key. I think they have changed their UI, because all the bots are wrong: Perplexity is wrong, your chatbot is wrong, and all the instructions I can find are wrong. There is no option to create an app; there is a section that says apps, but when you create new you can create several things, none of which is an app. And yes, I am using the developer platform. The default option to create a Zap from the developer platform is an AI workflow that tries to use AI to build your Zap, so using a value like "Agency Swarm Integration" as suggested by the Swarm Agent chatbot doesn't work. There is an option to switch to manually creating a Zap, but it's not even saveable until I configure at least one trigger and one action. The NLA instructions say to connect your account to a custom action, but it is not usable until you select an available existing action from their list; I cannot find any way around it.
I have used a LOT of APIs from a lot of sites and gotten a lot of API keys, and I never had much trouble until Zapier. I know it says the option can be skipped, but putting something like this in the introductory tutorial risks frustrating people before they can even try Agency Swarm, and the problem has nothing to do with Agency Swarm.
I'm not insisting that something be changed for my benefit, just offering a suggestion: not using the Zapier integration in the introductory tutorial and instead making it a separate example would be a better user experience and a better assurance that new users have a good first experience with Agency Swarm.
Thank you!
I've tried to run it several time and keep getting this error.
🐦 BrowsingAgent 🛠️ Executing Function
Function(arguments='{"url":"https://google.com/search?q=social+media+post+creation+and+analytics+API+for+Twitter+and+LinkedIn"}', name='ReadURL')
🐦 ReadURL ⚙️Function Output
Error: Message: Service /Users/vlada/.wdm/drivers/chromedriver/mac64/120.0.6099.109/chromedriver-mac-arm64/chromedriver unexpectedly exited. Status code was: -9
Currently the send_message.py module is empty, so just checking whether for now it's the same class definition as in the previous Colab notebook (https://colab.research.google.com/drive/1dhFFpTrdW4F0j355LlBqGP6Dfzitk2JQ?usp=sharing).
Also, I just want to say I love how this repo is organized and the overall approach. So glad I stumbled upon your YouTube channel.
openai==1.3.0
instructor==0.3.4
Please bump at least these to the following versions:
openai==1.5.0
instructor==0.4.5
[E 20:56:09.963 NotebookApp] Notebook JSON is invalid: 'execution_count' is a required property
Failed validating 'required' in code_cell:
On instance['cells'][3]:
{'cell_type': 'code',
'id': 'b3bb4f042e4b3050',
'metadata': {'collapsed': False},
'outputs': ['...0 outputs...'],
'source': 'from agency_swarm import set_openai_key\n'
'set_openai_key("YOUR_OPE...'}
Getting this error when I open the notebook for the first time.
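The validator is complaining that cell 3, a code cell, is missing the required execution_count key. A minimal repair sketch for patching the .ipynb JSON directly (the notebook structure below is a stand-in; nbformat's schema accepts null for unexecuted cells):

```python
import json

# Stand-in for the offending notebook structure; in practice, load the
# real file with json.load(open("notebook.ipynb")) -- filename hypothetical.
nb = {"cells": [{"cell_type": "code", "metadata": {}, "outputs": [],
                 "source": "from agency_swarm import set_openai_key"}]}

for cell in nb["cells"]:
    # Code cells must carry execution_count; null satisfies the schema.
    if cell["cell_type"] == "code":
        cell.setdefault("execution_count", None)

print(json.dumps(nb["cells"][0]["execution_count"]))  # -> null
```

After patching, writing the dict back out with json.dump should make the notebook open without the validation warning.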
I have this error:
(venv) nicklopacki@Nick-Macbook-Pro CryptoSphereInsight % python3 agency.py
Error parsing tool: ModelDevelopmentTool.py
Traceback (most recent call last):
  File "/Users/nicklopacki/Music/agency-swarm/CryptoSphereInsight/agency.py", line 8, in <module>
    predictiveAnalyst = PredictiveAnalyst()
                        ^^^^^^^^^^^^^^^^^^^
  File "/Users/nicklopacki/Music/agency-swarm/CryptoSphereInsight/PredictiveAnalyst/PredictiveAnalyst.py", line 6, in __init__
    super().__init__(
  File "/Users/nicklopacki/Music/agency-swarm/venv/lib/python3.11/site-packages/agency_swarm/agents/agent.py", line 99, in __init__
    self._parse_tools_folder()
  File "/Users/nicklopacki/Music/agency-swarm/venv/lib/python3.11/site-packages/agency_swarm/agents/agent.py", line 364, in _parse_tools_folder
    raise e
  File "/Users/nicklopacki/Music/agency-swarm/venv/lib/python3.11/site-packages/agency_swarm/agents/agent.py", line 360, in _parse_tools_folder
    tool = ToolFactory.from_file(f_path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nicklopacki/Music/agency-swarm/venv/lib/python3.11/site-packages/agency_swarm/tools/ToolFactory.py", line 273, in from_file
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/Users/nicklopacki/Music/agency-swarm/CryptoSphereInsight/PredictiveAnalyst/tools/ModelDevelopmentTool.py", line 7, in <module>
    class ModelDevelopmentTool(BaseTool):
  File "/Users/nicklopacki/Music/agency-swarm/venv/lib/python3.11/site-packages/pydantic/_internal/_model_construction.py", line 90, in __new__
    config_wrapper = ConfigWrapper.for_model(bases, namespace, kwargs)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nicklopacki/Music/agency-swarm/venv/lib/python3.11/site-packages/pydantic/_internal/_config.py", line 123, in for_model
    config_new.update(config_from_namespace)
TypeError: 'FieldInfo' object is not iterable
It has to do with how the tool class interacts with pydantic, and with FieldInfo specifically, but I can't figure out anything past this. Can anyone help?
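One plausible cause, judging from the last frame (`config_new.update(config_from_namespace)`): pydantic v2 treats a class attribute named `model_config` as the model's configuration dict, so assigning a `Field(...)` (which produces a `FieldInfo`) to that reserved name reproduces this exact TypeError. A minimal sketch, assuming ModelDevelopmentTool does something similar:

```python
from pydantic import BaseModel, Field

# Correct usage: fields carry type annotations, and model_config
# (if present at all) must be a dict of config options.
class GoodTool(BaseModel):
    prompt: str = Field(..., description="Input prompt")

# Reproduces the error: pydantic calls dict.update() on whatever is
# assigned to the reserved name model_config, and a FieldInfo is not
# a mapping or an iterable of key/value pairs.
error_message = ""
try:
    class BadTool(BaseModel):
        model_config = Field(..., description="misuse of a reserved name")
except TypeError as e:
    error_message = str(e)

print(error_message)  # 'FieldInfo' object is not iterable
```

If that is the cause here, renaming the offending attribute (or giving it a proper type annotation so it becomes an ordinary field) should let the class build.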
I'm currently trying to implement it, but I noticed in your videos that you have already done so. If I get it working I'll share what I made in a pull request. In the future, though, it might be better to implement a dark mode toggle in the demo_gradio method; I'll see if I can do that in the meantime.
Thanks for the work!
Do you have a Discord server for this project?
Problem Statement:
The current implementation of the SolveCaptcha tool within the Agency Swarm's Browsing Agent takes screenshots of all the images within a CAPTCHA during each interaction cycle. This approach is not optimized for "fading CAPTCHAs", where images fade and new ones appear after an interaction. Currently, the tool re-scans all tiles, including those already determined to be irrelevant, leading to unnecessary processing and increased completion time.
Proposed Solution:
Implement an enhancement for the SolveCaptcha tool that intelligently processes only the changed or new images during fading CAPTCHA tasks. The proposed feature would initially scan all tiles, but after the first interaction, it would:
Benefits:
Implementation Considerations:
This feature would significantly enhance the Browsing Agent's ability to interact with complex CAPTCHA systems, streamlining the user experience and supporting more efficient web navigation.
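The re-scan-only-changed-tiles idea above can be sketched with a simple per-tile content hash; everything here (function names, the tile-index-to-bytes mapping) is hypothetical, and capturing the tile screenshots themselves is out of scope:

```python
import hashlib

def changed_tiles(prev_hashes: dict, tiles: dict) -> list:
    """tiles maps tile index -> screenshot bytes. Returns the indices
    whose content changed since the previous cycle, updating the cache."""
    changed = []
    for idx, data in tiles.items():
        h = hashlib.sha256(data).hexdigest()
        if prev_hashes.get(idx) != h:
            changed.append(idx)
            prev_hashes[idx] = h
    return changed

hashes = {}
first = changed_tiles(hashes, {0: b"tile-a", 1: b"tile-b", 2: b"tile-c"})
print(first)   # [0, 1, 2] -- the first pass scans every tile
second = changed_tiles(hashes, {0: b"tile-a", 1: b"tile-B", 2: b"tile-c"})
print(second)  # [1] -- only the faded/replaced tile is re-processed
```

Only the indices returned by the second call would need to go back through the vision model, which is where the proposed time savings would come from.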
There could be cases where the user has to give additional files to the assistant apart from the ones it already has for its functioning. In such scenarios, it would be great if PDF, DOCX, XLSX, and similar files could be attached to the user message.
Hello! Thanks for creating this!
Following your installation instructions verbatim from your video (https://www.youtube.com/watch?v=qXxO7SvbGs8) on Windows 11 resulted in the following error when running:
pip install .
The error was:
C:\Code\agency-swarm>pip install .
Processing c:\code\agency-swarm
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [21 lines of output]
Traceback (most recent call last):
File "C:\Users\Bret\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\Bret\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Bret\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Bret\AppData\Local\Temp\pip-build-env-b81tpqjt\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Bret\AppData\Local\Temp\pip-build-env-b81tpqjt\overlay\Lib\site-packages\setuptools\build_meta.py", line 295, in _get_build_requires
self.run_setup()
File "C:\Users\Bret\AppData\Local\Temp\pip-build-env-b81tpqjt\overlay\Lib\site-packages\setuptools\build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 13, in <module>
File "C:\Users\Bret\AppData\Local\Programs\Python\Python312\Lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 4: character maps to <undefined>
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
I found a workaround, which is to run the following before that command:
set PYTHONUTF8=1
I don't know whether this is something that can be fixed, but if so, it might save other Windows users some pain. :-)
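For context, the failure mode is consistent with setup.py reading a UTF-8 file (the traceback points at line 13 of the exec'd setup.py, plausibly a long_description read, though that is an assumption) using Windows' default cp1252 codec, where byte 0x90 is undefined. A minimal sketch of the mismatch; the bee emoji is just an example UTF-8 sequence containing 0x90:

```python
# Byte 0x90 is undefined in cp1252 (the default text encoding on many
# Windows setups) but appears inside ordinary multi-byte UTF-8 sequences.
data = "🐝".encode("utf-8")
print(data)                  # b'\xf0\x9f\x90\x9d' -- note the 0x90
print(data.decode("utf-8"))  # decodes fine as UTF-8

try:
    data.decode("cp1252")
except UnicodeDecodeError as e:
    # Mirrors the pip failure: "character maps to <undefined>"
    print("cp1252 failed on byte", hex(data[e.start]))
```

Assuming the read does happen in setup.py, the durable fix would be to pass the encoding explicitly, e.g. open("README.md", encoding="utf-8").read(); setting PYTHONUTF8=1 achieves the same thing globally for that interpreter run.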
I can see that the assistants have been created in my OpenAI account with the correct names and instructions, but it appears there is a timeout while still inside the OpenAI package's create-assistant call.
This issue is in reference to a previous one that was marked as Closed, found HERE. Unfortunately the issue persists with only a slight change. The core of the faulty behavior is in retrieving agents from OpenAI properly, so that duplicates of the same agents are not created.
I am assuming that when creating an agent within the agency-swarm framework, the proper way to ensure that duplicates are not created in OpenAI is by using the 'id' parameter. If a different approach is recommended, please advise.
The error is as follows:
NotFoundError Traceback (most recent call last)
in <cell line: 3>()
1 from agency_swarm import Agency
2
----> 3 agency = Agency([
4 tool_coder,
5 [tool_coder, api_researcher]
/usr/local/lib/python3.10/dist-packages/agency_swarm/agency/agency.py in init(self, agency_chart, shared_instructions, shared_files)
43 self._parse_agency_chart(agency_chart)
44 self._create_send_message_tools()
---> 45 self._init_agents()
46 self._init_threads()
47
/usr/local/lib/python3.10/dist-packages/agency_swarm/agency/agency.py in _init_agents(self)
395 agent.files_folder += self.shared_files
396
--> 397 agent.init_oai()
398
399 def _init_threads(self):
/usr/local/lib/python3.10/dist-packages/agency_swarm/agents/agent.py in init_oai(self)
94 # load assistant from id
95 if self.id:
---> 96 self.assistant = self.client.beta.assistants.retrieve(self.id)
97 self.instructions = self.assistant.instructions
98 self.name = self.assistant.name
/usr/local/lib/python3.10/dist-packages/openai/resources/beta/assistants/assistants.py in retrieve(self, assistant_id, extra_headers, extra_query, extra_body, timeout)
137 """
138 extra_headers = {"OpenAI-Beta": "assistants=v1", **(extra_headers or {})}
--> 139 return self._get(
140 f"/assistants/{assistant_id}",
141 options=make_request_options(
/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in get(self, path, cast_to, options, stream, stream_cls)
1029 # cast is required because mypy complains about returning Any even though
1030 # it understands the type variables
-> 1031 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
1032
1033 @overload
/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in request(self, cast_to, options, remaining_retries, stream, stream_cls)
851 stream_cls: type[_StreamT] | None = None,
852 ) -> ResponseT | _StreamT:
--> 853 return self._request(
854 cast_to=cast_to,
855 options=options,
/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in _request(self, cast_to, options, remaining_retries, stream, stream_cls)
928 err.response.read()
929
--> 930 raise self._make_status_error_from_response(err.response) from None
931
932 return self._process_response(
NotFoundError: Error code: 404 - {'error': {'message': "No assistant found with id 'tool_coder'.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
Thanks in advance for any help!
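One observation from the 404 above: the literal agent name 'tool_coder' is being sent as an assistant id, whereas OpenAI assistant ids have the form asst_.... A tiny hypothetical guard (not part of agency-swarm) makes the mismatch visible before any API call is made:

```python
def looks_like_assistant_id(value: str) -> bool:
    """OpenAI assistant ids are prefixed with 'asst_'; anything else
    (such as a plain agent name) will 404 on assistants.retrieve()."""
    return value.startswith("asst_")

print(looks_like_assistant_id("tool_coder"))   # False -> would 404
print(looks_like_assistant_id("asst_abc123"))  # True
```

So if the intent is to reuse an existing assistant, the id parameter should carry the asst_... value shown in the OpenAI dashboard, not the agent's name.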