camel-ai / camel
🐫 CAMEL: Finding the Scaling Law of Agents. A multi-agent framework. https://www.camel-ai.org
Home Page: https://www.camel-ai.org
License: Apache License 2.0
It is important for users to learn the basic concepts and the core modules of CAMEL. I made a markdown tutorial for TextPrompt
with the help of ChatGPT. Here is how I prompted it:
Make a markdown tutorial for the `TextPrompt` class based on its pytest functions:
{the `TextPrompt` class}
{its pytest functions}
ChatGPT generated a draft for the tutorial, then I did some cleanup and editing. Although it is not perfect, it greatly accelerated the process. Almost every core module already has pytest functions implemented, so we can use a similar approach to generate tutorials for other modules. Let me know if you want to help out.
Note that the pytest functions can be found here; after we create the markdown file (*.md), we can put it under docs/get_started and add it to docs/index.rst. Make sure to test that it works as expected before you make a pull request, following the instructions in Building Documentation.
Here are some important modules we can work on:
Thanks!
I can't run the script. I'm a newbie, please help.
iloading@DESKTOP-CPIUK2P:/mnt/c/Users/user/Desktop/camel-master/camel-master/examples/ai_society$ python3 role_playing.py
Traceback (most recent call last):
  File "/mnt/c/Users/user/Desktop/camel-master/camel-master/examples/ai_society/role_playing.py", line 3, in <module>
    from camel.agents import RolePlaying
ModuleNotFoundError: No module named 'camel'
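A common cause of this error is that the camel package was never installed into the active environment. A quick, hypothetical way to check (the helper name is mine, not from the repo):

```python
import importlib.util

def check_installed(pkg: str) -> bool:
    """Return True if `pkg` is importable from the current environment."""
    return importlib.util.find_spec(pkg) is not None

# If this prints False, run `pip install -e .` from the cloned repo root
# before running the examples, so that `import camel` resolves.
print(check_installed("camel"))
```
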
The current EmbodiedAgent and HuggingFaceToolAgent implement only very basic functionality. More features can be added to extend the action space.
(camel) C:\Users\joshu\Documents\_aipython\_useful\CAMEL agnet\camel\camel\agents>python role_playing.py
Traceback (most recent call last):
  File "C:\Users\joshu\Documents\_aipython\_useful\CAMEL agnet\camel\camel\agents\role_playing.py", line 8, in <module>
    from .chat_agent import ChatAgent
ImportError: attempted relative import with no known parent package
any help would be good.
master/HEAD
Any
The Human agent uses input() as its core functionality, which breaks the purity of the package. All interaction must go via the API.
Would love to be able to use local LLMs like Alpaca and Llama.
master/HEAD
examples\ai_society\role_playing_with_critic.py
The Python snippets:
Command lines:
python examples\ai_society\role_playing_with_critic.py
Exception has occurred: AttributeError (note: full exception trace is shown but execution is paused at: __getattribute__)
'ChatMessage' object has no attribute '__deepcopy__'
  File "C:\Users\dmitrii\_KAUST\git\camel\camel\messages\base.py", line 105, in __getattribute__ (Current frame)
    return super().__getattribute__(name)
  File "C:\Users\dmitrii\_KAUST\git\camel\camel\agents\critic_agent.py", line 168, in step
    input_msg = copy.deepcopy(meta_chat_message)
  File "C:\Users\dmitrii\_KAUST\git\camel\camel\agents\role_playing.py", line 238, in process_messages
    processed_msg = self.critic.step(messages)
  File "C:\Users\dmitrii\_KAUST\git\camel\camel\agents\role_playing.py", line 274, in step
    user_msg = self.process_messages(user_response.msgs)
  File "C:\Users\dmitrii\_KAUST\git\camel\examples\ai_society\role_playing_with_critic.py", line 57, in main
    assistant_response, user_response = role_play_session.step(
  File "C:\Users\dmitrii\_KAUST\git\camel\examples\ai_society\role_playing_with_critic.py", line 84, in <module>
    main()
AttributeError: 'ChatMessage' object has no attribute '__deepcopy__'
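One hedged workaround, assuming ChatMessage is a frozen dataclass: give it an explicit __deepcopy__ so copy.deepcopy never has to probe the instance through the overridden __getattribute__ in base.py. The class below is a simplified stand-in, not the project's actual fix:

```python
import copy
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class ChatMessage:  # simplified stand-in for camel's class
    role: str
    content: str

    def __deepcopy__(self, memo):
        # Rebuild the instance field by field instead of relying on the
        # default copy protocol, which probes for optional dunder attributes.
        return type(self)(**{f.name: copy.deepcopy(getattr(self, f.name), memo)
                             for f in fields(self)})

msg = ChatMessage(role="user", content="hello")
clone = copy.deepcopy(msg)
assert clone == msg and clone is not msg
```
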
word_limit
Currently RolePlaying makes a hard stop when the max token length is reached. The else branch of
if num_tokens < self.model_token_limit:
is problematic for users. message_window_size is not a good solution either, since it makes the user guess how many messages may fit into the context; they may not fit anyway and ruin the roleplay.
We can give the user a choice based on model_token_limit. If needed, it must truncate the user and assistant messages by dropping the middle or the tail of a message. If needed, it must even truncate the system message in the same manner. The message_window_size functionality must be removed. A hard stop must not be issued for option 1.
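A minimal sketch of the middle-dropping truncation proposed above (character-based for simplicity; a real implementation would count tokens against model_token_limit):

```python
def truncate_middle(text: str, limit: int, sep: str = " ... ") -> str:
    """Drop the middle of `text` so the result is at most `limit` characters."""
    if len(text) <= limit:
        return text
    keep = limit - len(sep)          # characters of original text we can keep
    head = keep - keep // 2          # slightly favor the beginning
    return text[:head] + sep + text[len(text) - keep // 2:]
```
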
The prompt in the project contains <YOUR_SOLUTION> information, but no alternative logic was found in the code.
The ability to call tools would enhance CAMEL's abilities and be handy for developers. Here are some tools we prioritize integrating; some of them were suggested by GPT-4:
Just had a try at http://agents.camel-ai.org/
But what the model spits out contains <br> HTML tags, which makes it a lot harder to use.
Thanks! Your work is amazing; it opens up a lot of possibilities, especially in educating oneself, learning something new without knowing much beyond a couple of ideas.
This work deserves more attention!
I got single_agent.py to work with no problems, using a conda Python 3.10.10 or another Python 3.10.7 environment, but I did not get further. I get the following error trying out e.g. role_playing_multiprocess.py:
role_playing_multiprocess.py, line 162, in main
    array_idx = int(os.environ.get('SLURM_ARRAY_TASK_ID'))
TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'
Am I missing something?
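The script appears to expect a SLURM job-array environment; outside SLURM, SLURM_ARRAY_TASK_ID is unset and os.environ.get returns None, hence the TypeError. A hedged local workaround (the default of 0 is my assumption, not the script's behavior):

```python
import os

# Fall back to array index 0 when not running under a SLURM job array.
array_idx = int(os.environ.get("SLURM_ARRAY_TASK_ID", "0"))
```
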
Traceback (most recent call last):
  File "/Users/jonchui/code/chatgpt/camel/camelMe.py", line 37, in <module>
    main(args.prompt, args.assistant, args.user)
  File "/Users/jonchui/code/chatgpt/camel/camelMe.py", line 24, in main
    (assistant_msg, _, _), (user_msg, _, _) = role_play_session.step(assistant_msg)
  File "/Users/jonchui/code/chatgpt/camel/camel/agents/role_playing.py", line 113, in step
    raise RuntimeError("User agent is terminated.")
RuntimeError: User agent is terminated.
I'm going to see if CAMEL can fix itself ;)
#!/usr/bin/env python
import os
import openai
openai.api_key = "[REDACTED]"
import time
from colorama import Fore

def print_text_animated(text):
    for char in text:
        print(char, end="", flush=True)
        # time.sleep(0.02)

from camel.agents import RolePlaying

task_prompt = '''
DEBUG and fix this error that keeps happening when I try to run the code:
RuntimeError("User agent is terminated.")
Traceback (most recent call last):
  File "/Users/jonchui/code/chatgpt/camel/camelMe.py", line 37, in <module>
    main(args.prompt, args.assistant, args.user)
  File "/Users/jonchui/code/chatgpt/camel/camelMe.py", line 24, in main
    (assistant_msg, _, _), (user_msg, _, _) = role_play_session.step(assistant_msg)
  File "/Users/jonchui/code/chatgpt/camel/camel/agents/role_playing.py", line 113, in step
    raise RuntimeError("User agent is terminated.")
RuntimeError: User agent is terminated.

last function is

def step(
    self,
    assistant_msg: ChatMessage,
) -> Tuple[Tuple[ChatMessage, bool, Dict], Tuple[ChatMessage, bool, Dict]]:
    user_msgs, user_terminated, user_info = self.user_agent.step(
        assistant_msg)
    if user_terminated:
        raise RuntimeError("User agent is terminated.")
    user_msg = user_msgs[0]
    user_msg.role = "user"

    (assistant_msgs, assistant_terminated,
     assistant_info) = self.assistant_agent.step(user_msg)
    if assistant_terminated:
        raise RuntimeError("Assistant agent is terminated.")
    assistant_msg = assistant_msgs[0]
    assistant_msg.role = "user"

    return (
        (assistant_msg, assistant_terminated, assistant_info),
        (user_msg, user_terminated, user_info),
    )

the github project is here:
https://github.com/lightaime/camel
'''

print(Fore.YELLOW + f"Original task prompt:\n{task_prompt}\n")
role_play_session = RolePlaying("Computer Programmer", "Software Engineer", task_prompt)
print(Fore.CYAN + f"Specified task prompt:\n{role_play_session.task_prompt}\n")

chat_turn_limit, n = 1000, 0
assistant_msg, _ = role_play_session.init_chat()
while n < chat_turn_limit:
    n += 1
    (assistant_msg, _, _), (user_msg, _, _) = role_play_session.step(assistant_msg)
    print_text_animated(Fore.BLUE + f"AI User:\n\n{user_msg.content}\n\n")
    print_text_animated(Fore.GREEN + f"AI Assistant:\n\n{assistant_msg.content}\n\n")
    if "<CAMEL_TASK_DONE>" in user_msg.content:
        break
Hey I'm trying to edit the default assistant/user prompts in camel\agents\prompts\ai_society but I'm getting the error in the attached image:
File "C:\tools\camel\camel\generators.py", line 46, in validate_meta_dict_keys
raise ValueError("The keys of the meta_dict should be in "
ValueError: The keys of the meta_dict should be in set(). Got {'<USER_ROLE>', '', '<ASSISTANT_ROLE>'} instead.
Here are the custom Assistant/User prompts I want to include:
Proteus 4.2 - OmniCompetent Assistant: https://flowgpt.com/prompt/Ku_FbNzQsd_itThT7xhY6
Tech writer/Super AI researcher - Dr. Ada Turing: https://flowgpt.com/prompt/3EB4JHHhJPiP51f9fA9Bf
The PersRubric/Omnicomps in the above are quite interesting and...effective, I want to use Camel to test how effective they actually are by doing a few hundred runs with variations of the above vs default settings.
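For what it's worth, the odd key set in that error (including an empty string) is exactly what Python's format-field parsing yields when a template contains a bare `{}` alongside `{<...>}` placeholders. A small diagnostic sketch (the template below is illustrative, not the actual prompt):

```python
import string

template = "Act as {<ASSISTANT_ROLE>} helping {<USER_ROLE>} with {}"
# Collect the format-field names, the same kind of key set the
# meta_dict validation appears to compare against.
keys = {field for _, field, _, _ in string.Formatter().parse(template)
        if field is not None}
print(sorted(keys))  # ['', '<ASSISTANT_ROLE>', '<USER_ROLE>']
```

So the fix is likely to make the placeholders in the custom prompts match the key names the generator expects, and to remove any bare `{}`.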
Subtasks:
isResponseAwaited() -> bool
submitResponse(str) -> None
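A rough sketch of what those two subtasks could look like on a human agent (the method names follow the subtask list; everything else is my assumption):

```python
from typing import Optional

class HumanAgentAPI:
    """Hypothetical replacement for input()-driven human interaction."""

    def __init__(self) -> None:
        self._pending_question: Optional[str] = None
        self._response: Optional[str] = None

    def ask(self, question: str) -> None:
        # The framework posts a question instead of blocking on input().
        self._pending_question = question
        self._response = None

    def isResponseAwaited(self) -> bool:
        return self._pending_question is not None and self._response is None

    def submitResponse(self, text: str) -> None:
        # An outer client (UI, HTTP handler, test) supplies the answer.
        self._response = text
        self._pending_question = None
```
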
The current EmbodiedAgent is not yet integrated into RolePlaying with other agents.
python examples/ai_society/role_playing.py
Traceback (most recent call last):
  File "examples/ai_society/role_playing.py", line 3, in <module>
    from camel.agents import RolePlaying
ModuleNotFoundError: No module named 'camel'
Would you guys like them?
Currently the Python code execution uses a bare exec, which can cause arbitrary code execution and system crashes. Moreover, values inside exec cannot be captured by the outer program, which adds difficulty for some agent implementations.
We can use ast in Python to analyze generated code and control the execution of the interpreter.
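A sketch of the ast approach, assuming nothing about camel's actual interpreter: walk the parsed tree to reject disallowed nodes, then evaluate the final expression separately so its value can be captured by the caller (the deny-list is illustrative only):

```python
import ast

FORBIDDEN = (ast.Import, ast.ImportFrom)  # illustrative deny-list

def checked_exec(source: str):
    """Execute `source` after an AST check; return the last expression's value."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, FORBIDDEN):
            raise ValueError(f"disallowed node: {type(node).__name__}")
    env: dict = {}
    *body, last = tree.body
    # Run all statements except the last one.
    exec(compile(ast.Module(body=body, type_ignores=[]), "<gen>", "exec"), env)
    if isinstance(last, ast.Expr):
        # Compile the trailing expression in eval mode to capture its value.
        return eval(compile(ast.Expression(body=last.value), "<gen>", "eval"), env)
    exec(compile(ast.Module(body=[last], type_ignores=[]), "<gen>", "exec"), env)
    return None
```
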
Hello,
The demo that is set up at https://f04f43d534b998d61a.gradio.live/ is currently down.
It only displays the following message: No interface is running right now
thread '' panicked at 'assertion failed: encoder.len() == decoder.len()', src\lib.rs:458:9
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
File ".\camel\agents\chat_agent.py", line 71, in step
num_tokens = num_tokens_from_messages(openai_messages, self.model)
encoding = tiktoken.encoding_for_model(model.value)
return get_encoding(encoding_name)
enc = Encoding(**constructor())
self._core_bpe = _tiktoken.CoreBPE(mergeable_ranks, special_tokens, pat_str)
Add a dummy class to mock the OpenAI API so it is not used during tests, making them faster and cheaper.
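One way such a dummy could look, assuming the code under test calls the classic openai.ChatCompletion.create interface (an assumption; adapt it to whatever client camel actually wraps):

```python
class DummyChatCompletion:
    """Stand-in monkeypatched over openai.ChatCompletion during tests."""

    @staticmethod
    def create(model=None, messages=None, **kwargs):
        # Return a canned response shaped like the real API payload,
        # so no network call or API key is needed.
        return {
            "id": "dummy",
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": "mocked reply"},
                "finish_reason": "stop",
            }],
            "usage": {"prompt_tokens": 0, "completion_tokens": 0,
                      "total_tokens": 0},
        }
```

In a pytest fixture this could be installed with something like monkeypatch.setattr(openai, "ChatCompletion", DummyChatCompletion).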
Examples for generating math and science datasets are missing.
Hello, I noticed that you have Gradio for the demo; however, could we also get this for local use?
latest
Google Colab notebook
The Colab notebook fails to run; adding "from colorama import Fore" fixes it. Thanks.
0.1.0
Python 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] on linux
Task files could be downloaded automatically if they do not exist, for examples/ai_society/role_playing_multiprocess.py and examples/code/role_playing_multiprocess.py.
The Python snippets:
python examples/ai_society/role_playing_multiprocess.py
python examples/code/role_playing_multiprocess.py
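A hedged sketch of what the auto-download could look like (the helper and URL handling are generic; the real task-file locations live in the repo's docs):

```python
import os
import urllib.request

def ensure_file(path: str, url: str) -> str:
    """Download `url` to `path` if the file does not already exist."""
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
        urllib.request.urlretrieve(url, path)
    return path
```
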
The current role-playing session usually ends due to 'max_tokens_exceeded'. To complete more complex tasks, external memory is needed.
It would be helpful for people who want to try the demo out for coding tasks
Followed the instructions to try and test the role-playing example, but it just errors out; I would appreciate some guidance. Do we need to change or move the examples out of the folder? The instructions appear to suggest you just run the example demo from its location. GPT-4's suggestions were a bust.
PS D:\camel\camel\agent> python .\role_playing.py
Traceback (most recent call last):
  File "D:\camel\camel\agent\role_playing.py", line 7, in <module>
    from .chat_agent import ChatAgent
ImportError: attempted relative import with no known parent package
PS D:\camel\camel\agent>
The current data schema is implemented with data classes that do not enforce type checking and data validation at runtime. We can introduce better data validation and settings management by replacing them with pydantic.
Use pydantic dataclasses: https://docs.pydantic.dev/latest/usage/dataclasses/
Make all dataclasses in the project pydantic dataclasses, frozen and ordered:
@dataclass(frozen=True, order=True)
Some may need an extra kw_only=True.
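A stdlib sketch of the target shape (swap the decorator for pydantic.dataclasses.dataclass to gain runtime validation; kw_only=True needs Python 3.10+, so it is omitted here):

```python
from dataclasses import dataclass

# With pydantic this would be: from pydantic.dataclasses import dataclass
@dataclass(frozen=True, order=True)
class BaseMessage:  # simplified stand-in, not camel's actual class
    role_name: str
    content: str

a = BaseMessage("assistant", "hi")
b = BaseMessage("user", "hi")
assert a < b  # order=True supplies comparison operators
```
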
0.1.0
OS: Windows
Users attempting to use the camel project on Windows have reported issues with setting the OpenAI API key as an environment variable. The current instructions in the README seem to be more suited for Unix-like environments (like Linux or macOS), causing confusion among Windows users.
Steps to reproduce:
1. Set up the camel project as per the README.
2. Try to set the API key with the export command in Windows PowerShell or Command Prompt.
3. The export command is not recognized, and the environment variable is not set.
4. Running a script that needs the key (e.g. examples/ai_society/role_playing.py) results in errors.
The `export` command is not recognized in Windows PowerShell or Command Prompt. Users have to use a different command (`$env:OPENAI_API_KEY = "your-api-key"`) to set the environment variable. This is not currently mentioned in the README.
Windows users should be able to set the OpenAI API key as an environment variable in their terminal, allowing scripts to access the key as needed.
Update the README and any other relevant documentation to include specific instructions for setting environment variables on Windows. This should cover both Command Prompt and PowerShell syntax.
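A hedged sketch of what the README addition could show for each shell:

```shell
# PowerShell
$env:OPENAI_API_KEY = "your-api-key"

# Command Prompt (cmd.exe)
set OPENAI_API_KEY=your-api-key

# Unix-like shells (bash/zsh), as currently documented
export OPENAI_API_KEY=your-api-key
```
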
According to your design, we need to choose the user role and assistant role first. Could CAMEL instead choose the roles autonomously according to a preliminary idea, without us needing to think about which roles we need?
We need to clean up the code so that the API is easier to study and use. List of required changes:
- Unify UserChatMessage and AssistantChatMessage
- Unify AssistantSystemMessage and UserSystemMessage
- Rename role in ChatMessage to role_at_backend, or remove role from ChatMessage altogether
- Make BaseMessage, ChatMessage, SystemMessage frozen=True dataclasses
- Remove def __getattribute__ from BaseMessage (#168)
- Replace content in BaseMessage with TextPrompt
- Rename model to model_type
- Fix mode_type typos (#166, #169)
- societies package
Currently CAMEL apps only support chatting between two autonomous agents. It is important to support more chat modes for different use cases.
Thanks @marchowardbegins for the discussion #57 (comment)
I also would like the addition of a free, open-source LLM running locally for Camel.
---
EDIT May 26, 2023
Perhaps this is an even better alternative than HuggingChat:
Gorilla - an LLM with a massive set of connected APIs!
https://gorilla.cs.berkeley.edu/
https://twitter.com/intuitmachine/status/1661812484062281744?s=46&t=X6lS_Zp1k_p2yWVr44f_7A
---
My suggestion:
Perhaps HuggingChat v0.2 is a good alternative:
HuggingChat v0.2 - "Making the community's best AI chat models available to everyone."
NEW: the Chat UI is now open-sourced on GitHub: https://github.com/huggingface/chat-ui
Current model: OpenAssistant/oasst-sft-6-llama-30b - https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor
Dataset: https://huggingface.co/datasets/OpenAssistant/oasst1
Website: https://open-assistant.io/
But because things are changing rapidly in this day and age, perhaps there are already other alternatives!
My application would be to use free open-source LLMs instead of the OpenAI LLM, as in the practical example being showcased here:
Camel + LangChain for Synthetic Data & Market Research
https://www.youtube.com/watch?v=GldMMK6-_-g
COLAB Notebook:
https://colab.research.google.com/drive/1BuudlvBrKBl1bNhMp-aB5uOW62-JlrSY?usp=sharing
Or in the example on your website:
camel_demo.ipynb
https://colab.research.google.com/drive/1AzP33O8rnMW__7ocWJhVBXjKziJXPtim?usp=sharing
master/HEAD
master/HEAD
Running mypy on the code highlights around 30 bugs found by the linter.
mypy --namespace-packages -p camel
mypy --namespace-packages -p test
mypy --namespace-packages -p examples
mypy --namespace-packages -p apps