
significant-gravitas / autogpt

161.4K stars, 1.6K watchers, 42.1K forks, 109.14 MB

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.

Home Page: https://agpt.co

License: MIT License

Python 18.62% Dockerfile 0.05% Batchfile 0.01% Shell 0.07% JavaScript 68.23% HTML 0.11% TypeScript 0.36% CSS 0.02% Jupyter Notebook 8.23% Jinja 0.04% Kotlin 0.01% Ruby 0.05% Swift 0.04% Objective-C 0.01% CMake 0.32% C++ 0.39% C 0.02% Dart 3.44%
ai gpt-4 openai python artificial-intelligence autonomous-agents

autogpt's People

Contributors

andrescdo, auto-gpt-bot, billschumacher, coditamar, collijk, cs0lar, csolar, drikusroor, dschonholtz, erik-megarad, hunteraraujo, k-boikov, kcze, kinance, lc0rp, maiko, ntindle, p-i-, pwuts, richbeales, rihp, sadmuphin, silennaihin, sweetlilmre, swiftyos, taytay, torantulino, tymec, waynehamadi, wladastic


autogpt's Issues

The original link is dead

The original git pull link is dead, and this repo doesn't contain the necessary files, e.g. requirements.txt.

Issues encountered while running the project on Windows 10 with Python 3.11

I encountered multiple issues while trying to run your project on Windows 10 with Python 3.11 using Windows CMD.

Missing modules after installing the requirements: I was unable to find a compatible version of pywin32, so I installed it manually using the following command:

python -m pip install pywin32
I also had to install the following modules manually:

pip install google
pip install readability-lxml
pip install playsound
pip install python-dotenv
pip install docker

Strange characters in the output: After running the program, I noticed some strange characters like ←[32m and ←[94m in the output. This might be an issue with Windows CMD.
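A likely explanation: those are ANSI colour escape codes that older CMD sessions do not interpret. A minimal workaround, assuming colorama is installed, is to initialise it before any coloured output:

import colorama

colorama.init()  # wraps stdout so ANSI escapes become Win32 console calls
print("\033[32mgreen text\033[0m")  # renders green instead of printing ←[32m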

AttributeError after answering the AI goals: After answering the question "Enter up to 5 goals for your AI," I encountered the following error:

File "\Auto-GPT\scripts\main.py", line 204, in <module>
assistant_reply = chat.chat_with_ai(
^^^^^^^^^^^^^^^^^^
File "\Auto-GPT\scripts\chat.py", line 68, in chat_with_ai
response = openai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'ChatCompletion'. Did you mean: 'Completion'?

Please help me understand if I missed anything during the setup process and how I can resolve these issues.

I appreciate your support.

Best regards,
Sunil
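A likely fix for the AttributeError, assuming the installed openai package predates 0.27.0 (the release that introduced ChatCompletion), is upgrading it:

python -m pip install --upgrade openai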

Error: Invalid JSON

I'm getting this error in a loop:

NEXT ACTION: COMMAND = Error: ARGUMENTS = Invalid JSON
SYSTEM: Command Error: returned: Unknown command Error:
Error: Invalid JSON

API Rate Limit Reached with new key

I just created a new key and it's failing to run:

Continue (y/n): y
Error:  API Rate Limit Reached. Waiting 10 seconds...
Error:  API Rate Limit Reached. Waiting 10 seconds...
Error:  API Rate Limit Reached. Waiting 10 seconds...
Error:  API Rate Limit Reached. Waiting 10 seconds...
Error:  API Rate Limit Reached. Waiting 10 seconds...
Error:  API Rate Limit Reached. Waiting 10 seconds...
Error:  API Rate Limit Reached. Waiting 10 seconds...
Error:  API Rate Limit Reached. Waiting 10 seconds...


Support using other/local LLMs

You can modify the code to accept a config file as input, and read the Chosen_Model flag to select the appropriate AI model. Here's an example of how to achieve this:

Create a sample config file named config.ini:

[AI]
Chosen_Model = gpt-4

Offload the call_ai_function function from ai_functions.py to a separate library. Modify it to read the model from the config file:

import configparser
import openai

def call_ai_function(function, args, description, config_path="config.ini"):
    # Load the configuration file
    config = configparser.ConfigParser()
    config.read(config_path)

    # Get the chosen model from the config file
    model = config.get("AI", "Chosen_Model", fallback="gpt-4")

    # Parse args to comma separated string
    args = ", ".join(args)
    messages = [
        {
            "role": "system",
            "content": f"You are now the following python function: ```# {description}\n{function}```\n\nOnly respond with your `return` value.",
        },
        {"role": "user", "content": args},
    ]

    # Use different AI APIs based on the chosen model
    if model == "gpt-4":
        response = openai.ChatCompletion.create(
            model=model, messages=messages, temperature=0
        )
    elif model == "some_other_api":
        # Add code to call another AI API with the appropriate parameters
        response = some_other_api_call(parameters)
    else:
        raise ValueError(f"Unsupported model: {model}")

    return response.choices[0].message["content"]

In this modified version, call_ai_function takes an additional parameter, config_path, which defaults to "config.ini". The function reads the config file, retrieves the Chosen_Model value, and uses it as the model for the OpenAI API call. If the Chosen_Model flag is not found in the config file, it defaults to "gpt-4".

The if/elif structure is used to call different AI APIs based on the chosen model from the configuration file. Replace some_other_api with the name of the API you'd like to use, and replace parameters with the appropriate parameters required by that API. You can extend the if/elif structure to include more AI APIs as needed.
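For illustration, a hypothetical call to the modified function (the function string, args, and description here are made up):

result = call_ai_function(
    function="def add(a: int, b: int) -> int:",
    args=["4", "4"],
    description="Adds two integers.",
    config_path="config.ini",
)
print(result)  # the model's `return` value, e.g. "8"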

Active Mode

Summary

I would like to suggest a new feature called "Active Mode," which enables the AI to run while actively prompting the user with chain-of-thought questions when executing each subsequent action. This allows users to actively participate while the AI agent runs, ensuring a human-in-the-loop experience.

Background

Currently, there is a feature called "Continuous Mode," which allows the AI to run without user authorization and is 100% automated. However, this mode is not recommended due to its potential dangers, such as running indefinitely or performing actions that users might not approve.

To mitigate these risks and enhance user engagement, I propose a new feature called "Active Mode."

Feature Description

In "Active Mode," the AI will:

  1. Execute an action.
  2. Prompt the user with a chain-of-thought question related to the action.
  3. Wait for the user's input or approval before proceeding to the next action.

This interaction pattern ensures that users have control over the AI's actions while still benefiting from its capabilities. It also fosters a more collaborative environment between the user and the AI system.
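A minimal sketch of the proposed loop; both helpers below are hypothetical stand-ins for Auto-GPT's real action machinery:

def next_action():
    return "summarise_article"  # hypothetical: ask the model for its next action

def execute_action(action):
    return f"executed {action}"  # hypothetical: run the chosen command

while True:
    result = execute_action(next_action())   # 1. Execute an action
    # 2. Prompt the user with a chain-of-thought question about the action
    reply = input(f"{result}. Anything you want me to focus on next? ")
    # 3. Wait for the user's input or approval before proceeding
    if reply.lower() in ("exit", "quit"):
        break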

Examples

  1. AI: I am about to generate a summary for this article. Do you have any specific points you would like me to focus on?
  2. AI: I found an interesting dataset that could be useful for your project. Would you like me to analyze it and provide insights?
  3. AI: I have drafted an email response based on your guidelines. Please review and let me know if you would like me to make any changes before sending.

Benefits

  • Enhanced user engagement
  • Reduced risk of AI performing unwanted actions
  • Increased collaboration between the user and the AI
  • Improved user control over AI actions

Risks and Mitigations

  • This feature may slow down the overall AI operation due to the need for user input. However, this trade-off is acceptable, considering the increased control and collaboration it provides.

Request for Comments

I would appreciate feedback from the community on this suggested feature. Please share your thoughts, suggestions, and any potential concerns you may have.

SYSTEM: Command returned: Unknown command

ARTBIZ_GPT THOUGHTS:
REASONING:
CRITICISM:
NEXT ACTION: COMMAND = ARGUMENTS = {}
SYSTEM: Command returned: Unknown command
ARTBIZ_GPT THOUGHTS:
REASONING:
CRITICISM:
NEXT ACTION: COMMAND = ARGUMENTS = {}
SYSTEM: Command returned: Unknown command
ARTBIZ_GPT THOUGHTS:
REASONING:
CRITICISM:
NEXT ACTION: COMMAND = ARGUMENTS = {}
SYSTEM: Command returned: Unknown command
ARTBIZ_GPT THOUGHTS:
REASONING:
CRITICISM:

win 10 error on pip install -r requirements.txt

ERROR: Could not find a version that satisfies the requirement pywin32==227; sys_platform == "win32" (from docker) (from versions: 302, 303, 304, 305, 306)
ERROR: No matching distribution found for pywin32==227; sys_platform == "win32"

ModuleNotFoundError: No module named 'readability' (when running the script for the first time)

Auto-GPT\scripts>python main.py continuous-mode
Traceback (most recent call last):
File "C:\Users\ixliv\Auto-GPT\scripts\main.py", line 3, in <module>
import commands as cmd
File "C:\Users\ixliv\Auto-GPT\scripts\commands.py", line 1, in <module>
import browse
File "C:\Users\ixliv\Auto-GPT\scripts\browse.py", line 4, in <module>
from readability import Document
ModuleNotFoundError: No module named 'readability'
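The missing module is provided by the readability-lxml package (as noted in the Windows issue above), so installing it should resolve the import:

pip install readability-lxml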

Ability to Navigate Outside Working Directory.

Currently, the best workaround is bringing the desired context into the working directory, but implementing the ability to crawl through files more gracefully would be great. Perhaps for fully autonomous users, this could be an opt-in or out type thing.

SYSTEM: Command read_file returned: Error: Attempted to access outside of working directory.

(This project rocks by the way, great work!)

Improve chunking and chunk handling

AutoGPT keeps exceeding the token limit by about 100 tokens every time I use it, before it finishes its task, and it can't handle the error:
Traceback (most recent call last):
File "/root/Auto-GPT/scripts/main.py", line 199, in <module>
assistant_reply = chat.chat_with_ai(
File "/root/Auto-GPT/scripts/chat.py", line 64, in chat_with_ai
response = openai.ChatCompletion.create(
File "/root/Auto-GPT/scripts/myenv/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/root/Auto-GPT/scripts/myenv/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/root/Auto-GPT/scripts/myenv/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "/root/Auto-GPT/scripts/myenv/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "/root/Auto-GPT/scripts/myenv/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 8215 tokens. Please reduce the length of the messages.

NEXT ACTION: COMMAND = Error: ARGUMENTS = Invalid JSON
SYSTEM: Command Error: returned: Unknown command Error:
(myenv) (base) root@DESKTOP-S70O6TN:~/Auto-GPT/scripts#
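One mitigation, sketched here under the assumption that the conversation is a plain list of {"role": ..., "content": ...} dicts: count tokens with tiktoken and drop the oldest non-system messages until the request fits, leaving headroom for the reply.

import tiktoken

def trim_messages(messages, model="gpt-4", max_tokens=8192, reply_budget=1000):
    encoding = tiktoken.encoding_for_model(model)

    def count(msgs):
        # Rough count: content tokens only. Real chat requests add a small
        # per-message overhead, hence the generous reply_budget.
        return sum(len(encoding.encode(m["content"])) for m in msgs)

    trimmed = list(messages)
    # Keep the system prompt at index 0; drop the oldest turns after it.
    while count(trimmed) > max_tokens - reply_budget and len(trimmed) > 1:
        del trimmed[1]
    return trimmed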

Support Auto-GPT delegating sub-tasks to other instances of itself

Summary

Allow Auto-GPT to break up work and delegate it to other instances of itself.

Feature Description

Similar to actions such as 'file' or 'browse', I propose we have a 'delegate' module. However, I don't really have much of an idea of how it would work at this point (a rough sketch follows the benefits list).

Benefits

  • Have an 'overseer' Auto-GPT model backed by GPT4, worker Auto-GPT models backed by self-hosted 'llama.cpp' models which do the grunt-work to reduce cost of operation.
  • No need to try to keep all of the state, history, and long- & short-term memory for everything in one agent; it can be split up across several.
  • Better fit the organizational model of the data LLMs are derived from; a "more natural" fit given that this is how most complex organizations are run.
  • May work as a method for delegating to humans in the future
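A very rough sketch of the idea; the Agent class here is a hypothetical, minimal stand-in for a full Auto-GPT instance:

class Agent:
    # hypothetical: in real use this would wrap the whole Auto-GPT loop
    # with its own model, message history and memory
    def __init__(self, model, goals):
        self.model, self.goals = model, goals

    def run(self):
        return f"[{self.model}] completed: {'; '.join(self.goals)}"

def delegate(task_description, model="gpt-3.5-turbo"):
    # the overseer (e.g. GPT-4-backed) hands grunt work to a cheaper worker
    return Agent(model=model, goals=[task_description]).run()

print(delegate("Summarise the top 5 search results for 'AutoGPT'"))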

Request for Comments

I would appreciate feedback from the community on this suggested feature. Please share your thoughts, suggestions, and any potential concerns you may have.

Could we please have a Discord?

If there's some intention to encourage community effort in this project, a Discord would be most welcome.

I've been looking at similar ideas (a GPT-4-assisted AI operating over a Dockerized Linux system).

Build in Rate-Limit Error Handling

Below is a readout from an example error.

"I hit a rate limit error working with the OpenAI API: File "D:\projects\auto-gpt\auto-gpt-env\lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line raise self.handle_error_response( openai.error.RateLimitError: That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists."

Traceback (most recent call last):
File "D:\projects\auto-gpt\auto-gpt\scripts\main.py", line 199, in <module>
assistant_reply = chat.chat_with_ai(
File "D:\projects\auto-gpt\auto-gpt\scripts\chat.py", line 80, in chat_with_ai
except openai.RateLimitError:
AttributeError: module 'openai' has no attribute 'RateLimitError'

Would it be possible to throttle the script depending on the model being executed?
https://platform.openai.com/docs/guides/rate-limits/overview
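A sketch of throttling via retry with exponential backoff, assuming the pre-1.0 openai package (note the traceback above fails because the exception lives at openai.error.RateLimitError, not openai.RateLimitError):

import time
import openai

def chat_with_backoff(max_retries=5, **kwargs):
    delay = 10
    for _ in range(max_retries):
        try:
            return openai.ChatCompletion.create(**kwargs)
        except openai.error.RateLimitError:
            print(f"Rate limited; retrying in {delay} seconds...")
            time.sleep(delay)
            delay *= 2  # exponential backoff instead of a fixed 10 s wait
    raise RuntimeError("Rate limit retries exhausted")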

[OSX 13.2.1] No module named 'requests'

Hi!
I'm on an Apple silicon Mac, running macOS 13.2.1.
Tried the proposed installation process:
git clone https://github.com/Torantulino/Auto-GPT.git
cd 'Auto-GPT'
pip3 install -r requirements.txt
mv .env.template .env
nano .env   -> filled in the API keys and saved

But when I try to run
➜ Auto-GPT git:(master) ✗ python3 scripts/main.py
I get :

Traceback (most recent call last):
File "/Users/eloi/bin/Auto-GPT/scripts/main.py", line 3, in <module>
import commands as cmd
File "/Users/eloi/bin/Auto-GPT/scripts/commands.py", line 1, in <module>
import browse
File "/Users/eloi/bin/Auto-GPT/scripts/browse.py", line 1, in <module>
import requests
ModuleNotFoundError: No module named 'requests'

Any help please?
Thanks!
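A common cause here (an assumption, not verified for this machine): pip3 and python3 point at different interpreters, so the requirements were installed into the wrong one. Installing through the same interpreter usually fixes it:

python3 -m pip install -r requirements.txt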

The model: `gpt-4` does not exist

Hi,
Amazing work you're doing here! ❤
I'm getting this message:

Traceback (most recent call last):
File "/Users/stas/Auto-GPT/AutonomousAI/main.py", line 154, in <module>
assistant_reply = chat.chat_with_ai(prompt, user_input, full_message_history, mem.permanent_memory, token_limit)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stas/Auto-GPT/AutonomousAI/chat.py", line 48, in chat_with_ai
response = openai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The model: gpt-4 does not exist

Does it mean I don't have an API access to gpt-4 yet? How can I use previous versions?
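This usually does mean the key has no gpt-4 API access yet (gpt-4 was initially behind a waitlist). A minimal fallback sketch, assuming the pre-1.0 openai package:

import openai

def pick_model(preferred="gpt-4", fallback="gpt-3.5-turbo"):
    try:
        openai.Model.retrieve(preferred)  # raises if the key cannot use the model
        return preferred
    except openai.error.InvalidRequestError:
        return fallback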

License Question

Would it be possible to MIT license this repo to fully open up potential for use with opensource language models?

Display short-term and long-term memory usage

Auto-GPT currently pins its long-term memory to the start of its context window. It is able to manage this through commands.

Auto-GPT should be aware of its short- and long-term memory usage so that it knows when something is going to be deleted from its memory due to context limits, e.g. memory usage: (2555/4000 tokens).

This may lead to some interesting behaviour where it is less inclined to read long strings of text, or is more meticulous about saving information to long-term memory when it sees it's running low on tokens.
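A sketch of the suggested readout, assuming tiktoken for counting and the memory represented as a list of chat messages:

import tiktoken

def memory_usage_line(messages, model="gpt-4", limit=4000):
    encoding = tiktoken.encoding_for_model(model)
    used = sum(len(encoding.encode(m["content"])) for m in messages)
    return f"memory usage: ({used}/{limit} tokens)"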

Auto-GPT Recursive Self Improvement

Idea 💡

The ULTIMATE achievement for this project would be if Auto-GPT were able to recursively improve itself. That, after all, is how AGI is predicted by many to come about.

Suggestion 👩‍💻

Auto-GPT should be able to:

  • Read its own code
  • Evaluate its limitations and areas for improvement
  • Write code to increase its abilities
  • Write tests for its code and carry out those tests

Further down the line: 📃

  • Browse its own code on GitHub
  • Evaluate, find bugs, etc
  • Submit pull requests

Where to start? 🤔

I have previously had success with this system prompt in playground:

Prompt

You are AGI_Builder_GPT. Your goal is to write code and make actionable suggestions in order to improve an AI called "Auto-GPT", so as to broaden the range of tasks it's capable of carrying out.

Implement reflection

Context: Implement reflection, a technique that allows generating more coherent and natural texts using pre-trained language models.

Problem or idea: Reflection is based on two articles that propose different methods to incorporate world knowledge and causal reasoning in text generation. The articles are:

  1. ArXiv Article
  2. GitHub Repository

Solution or next step: I would like the Auto-GPT project to include reflection as an option to improve the quality of the generated texts. @Torantulino, what do you think of this idea?

variable 'assistant_thoughts_reasoning' referenced before assignment

After getting it installed (I think) correctly, I'm now getting this response for any goal used:

....scripts/main.py", line 81, in print_assistant_thoughts
    assistant_thoughts_reasoning)
UnboundLocalError: local variable 'assistant_thoughts_reasoning' referenced before assignment
Warning: Failed to parse AI output, attempting to fix.
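One way to avoid the unbound variable, sketched under the assumption that the reply is JSON with an optional "thoughts" object: read the fields with dict.get() and defaults instead of assigning them inside conditionals.

import json

def print_assistant_thoughts(assistant_reply):
    try:
        reply = json.loads(assistant_reply)
    except json.JSONDecodeError:
        print("Warning: Failed to parse AI output.")
        return
    thoughts = reply.get("thoughts", {})
    print("REASONING:", thoughts.get("reasoning", ""))
    print("CRITICISM:", thoughts.get("criticism", ""))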

module 'tiktoken' has no attribute 'encoding_for_model'

Getting this error on OS X, even though it seems like tiktoken is installed correctly:

Traceback (most recent call last):
File "/Users/echo/Auto-GPT/scripts/main.py", line 286, in <module>
assistant_reply = chat.chat_with_ai(
File "/Users/echo/Auto-GPT/scripts/chat.py", line 67, in chat_with_ai
current_tokens_used = token_counter.count_message_tokens(current_context, model)
File "/Users/echo/Auto-GPT/scripts/token_counter.py", line 16, in count_message_tokens
encoding = tiktoken.encoding_for_model(gpt-4)
AttributeError: module 'tiktoken' has no attribute 'encoding_for_model'
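encoding_for_model is missing from older tiktoken releases, so an outdated install is the most likely cause (an assumption worth checking with pip show tiktoken). Upgrading should resolve it:

python -m pip install --upgrade tiktoken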

Enhance Conversation / Ability to Provide Feedback to the System.

The ability to jump in and steer a train of thought that is running off course is quite helpful.

To this end, adjust the interaction loop in main.py such that the user can jump in to talk.

# Interaction Loop
while True:
    # Send message to AI, get response
    with Spinner("Thinking... "):
        assistant_reply = chat.chat_with_ai(
            prompt,
            user_input,
            full_message_history,
            mem.permanent_memory,
            token_limit)

    # Print Assistant thoughts
    print_assistant_thoughts(assistant_reply)

    # Get command name and arguments
    try:
        command_name, arguments = cmd.get_command(assistant_reply)
    except Exception as e:
        print_to_console("Error: \n", Fore.RED, str(e))

    if not cfg.continuous_mode:
        ### GET USER AUTHORIZATION TO EXECUTE COMMAND ###
        # Prompt the user to authorise, deny, or talk to the AI
        user_input = ""
        print("Enter 'y' to authorise command, 'n' to deny command, or any text to communicate with the AI. Type 'EXIT' to exit the program...", flush=True)
        while True:
            console_input = input(Fore.MAGENTA + "Input:" + Style.RESET_ALL)
            if console_input.lower() == "y":
                user_input = "NEXT COMMAND"
                break
            elif console_input.lower() == "n":
                user_input = "DENY COMMAND"
                break
            elif console_input.lower() == "exit":
                user_input = "EXIT"
                break
            else:
                # Any other text is passed through to the AI as feedback
                user_input = console_input
                break

        if user_input != "NEXT COMMAND":
            if user_input == "DENY COMMAND":
                print("Command denied.", flush=True)
            elif user_input == "EXIT":
                print("Exiting...", flush=True)
                break

        print_to_console(
            "-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=",
            Fore.MAGENTA,
            "")
    else:
        # Print command
        print_to_console(
            "NEXT ACTION: ",
            Fore.CYAN,
            f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL}  ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}")

    # Execute command
    if command_name.lower() != "error":
        result = f"Command {command_name} returned: {cmd.execute_command(command_name, arguments)}"
    else:
        result = f"Command {command_name} threw the following error: " + arguments

    # If there's a result from the command, append it to the message history
    if result is not None:
        full_message_history.append(chat.create_chat_message("system", result))
        print_to_console("SYSTEM: ", Fore.YELLOW, result)
    else:
        full_message_history.append(
            chat.create_chat_message(
                "system", "Unable to execute command"))
        print_to_console("SYSTEM: ", Fore.YELLOW, "Unable to execute command")

That replacement works relatively well but, of course, creates a few more issues in practice:

Name your AI: For example, 'Entrepreneur-GPT'
AI Name: RAVEN
RAVEN here! I am at your service.
Describe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
RAVEN is: A Coding Assistant
Enter up to 5 goals for your AI: For example: Increase net worth Grow Twitter Account Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Goal 1: Add 4 + 4
Goal 2: Ask me how I'm doing.
Goal 3:
Error: Prompt file not found
Error: Invalid JSON
4 + 4 equals 8 How are you doing today?
Enter 'y' to authorize command, 'n' to deny command, or any text to communicate with the AI. Type 'EXIT' to exit the program...
Input:I'm good how are you?
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
SYSTEM: Command Error: returned: Unknown command Error:
Error: Invalid JSON
As an AI, I don't have feelings or emotions, but I'm here to help you with any questions or tasks you have. Let me know how I can assist you!
Enter 'y' to authorise command, 'n' to deny command, or any text to communicate with the AI. Type 'EXIT' to exit the program...
Input:

The issues emerging are: Invalid JSON, Prompt File Not Found.
Further, ideally the user could jump in here and steer GPT in the right direction: "I see that you are searching for term X, but you would have more success searching for term Y; try that."
Then it would ask permission to do that search.

Improve web-crawling system

Auto-GPT should be able to see links on webpages it visits, as well as the existing GPT-3 powered summary, so that it can browse further than 1 page away from google.
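One possible building block, sketched with requests and BeautifulSoup (assumed available; the helper itself is hypothetical):

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def extract_links(url):
    # collect (anchor text, absolute URL) pairs so the summary can cite them
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return [(a.get_text(strip=True), urljoin(url, a["href"]))
            for a in soup.find_all("a", href=True)]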

API Rate Limit Reached. Waiting 10 seconds

Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...

pip install -r requirements.txt

I'm running into two errors when running this command.

I also tried running it as pip3.

Errors:

ERROR: Cannot install -r requirements.txt (line 3), -r requirements.txt (line 4), -r requirements.txt (line 8) and requests==2.25.1 because these package versions have conflicting dependencies.

The conflict is caused by:
The user requested requests==2.25.1
googlesearch-python 1.1.0 depends on requests==2.25.1
openai 0.27.0 depends on requests>=2.20
docker 6.0.1 depends on requests>=2.26.0

To fix this you could try to:

  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Unable to install lxml on M1 mac

This install throws a couple of different errors.

I tried another suggestion, which gives the same result: arch -x86_64 pip3 install lxml

python 3.8.9

Any ideas?

Building wheel for lxml (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /Applications/Xcode.app/Contents/Developer/usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/z9/v8hgfs2527j8t9pv656k580w0000gn/T/pip-install-udz7nb_q/lxml/setup.py'"'"'; __file__='"'"'/private/var/folders/z9/v8hgfs2527j8t9pv656k580w0000gn/T/pip-install-udz7nb_q/lxml/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/z9/v8hgfs2527j8t9pv656k580w0000gn/T/pip-wheel-d6m9houy
       cwd: /private/var/folders/z9/v8hgfs2527j8t9pv656k580w0000gn/T/pip-install-udz7nb_q/lxml/
  Complete output (117 lines):
  Building lxml version 4.9.2.
  Building without Cython.
  Building against libxml2 2.9.4 and libxslt 1.1.29
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.macosx-10.14-arm64-3.8
  creating build/lib.macosx-10.14-arm64-3.8/lxml
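A workaround that often helps on Apple silicon, assuming the failure stems from missing native libxml2/libxslt builds (Homebrew assumed available):

brew install libxml2 libxslt
STATIC_DEPS=true pip3 install lxml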

Dependency Conflict with Requests in googlesearch-python and tiktoken

Issue:

There is a dependency conflict involving the requests package between googlesearch-python and other packages in the project. The googlesearch-python package has a strict dependency on a specific version of requests, which is causing the conflict; the tiktoken and googlesearch-python packages in particular seem to clash.

Using Python 3.10.

Details:

The conflict is caused by:

The user requested requests>=2.26.0
openai 0.27.2 depends on requests>=2.20
tiktoken 0.3.3 depends on requests>=2.26.0
docker 2.0.0 depends on requests!=2.11.0, !=2.12.2 and >=2.5.2
googlesearch-python 1.1.0 depends on requests==2.25.1
The user requested requests>=2.26.0
openai 0.27.2 depends on requests>=2.20
tiktoken 0.3.3 depends on requests>=2.26.0
docker 2.0.0 depends on requests!=2.11.0, !=2.12.2 and >=2.5.2
googlesearch-python 1.0.1 depends on requests==2.25.1
The user requested requests>=2.26.0
openai 0.27.2 depends on requests>=2.20
tiktoken 0.3.3 depends on requests>=2.26.0
docker 2.0.0 depends on requests!=2.11.0, !=2.12.2 and >=2.5.2
googlesearch-python 1.0.0 depends on requests==2.24.0

or when trying to pin specific versions:

The conflict is caused by:
    The user requested requests
    openai 0.27.2 depends on requests>=2.20
    tiktoken 0.3.3 depends on requests>=2.26.0
    googlesearch-python 1.1.0 depends on requests==2.25.1

Please let me know if there are any plans to address this issue or if you need any additional information.

Make Auto-GPT aware of its running cost

Auto-GPT is expensive to run due to GPT-4's API cost.

We could experiment with making it aware of this fact, by tracking tokens as they are used and converting to a dollar cost.

This could also be displayed to the user to help them be more aware of exactly how much they are spending.
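A minimal sketch of such tracking; the per-1K-token prices are illustrative assumptions, not authoritative rates:

PRICES_PER_1K = {"gpt-4": {"prompt": 0.03, "completion": 0.06}}  # assumed pricing

total_cost = 0.0

def record_usage(model, prompt_tokens, completion_tokens):
    # feed in response["usage"] values from each API reply
    global total_cost
    rates = PRICES_PER_1K[model]
    total_cost += (prompt_tokens / 1000) * rates["prompt"]
    total_cost += (completion_tokens / 1000) * rates["completion"]
    print(f"Running cost so far: ${total_cost:.4f}")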

Save Long-Term-Memory to file on shutdown

Implement save/load of long-term memory to file to enable continuation between instances.

Could optionally save current context too.

Note, this would have to be cleared if the user wanted to change Auto-GPT's task.
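A minimal sketch, assuming the long-term memory is a JSON-serialisable list and a hypothetical memory.json path:

import json
import os

MEMORY_FILE = "memory.json"  # hypothetical location

def save_memory(memory):
    # call on shutdown (and optionally for the current context too)
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f)

def load_memory():
    # call on startup; delete the file when the user changes the task
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            return json.load(f)
    return []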

Auto-discover new capabilities via APIs and dynamic run-times.

Create a new command to dynamically expand capabilities.

  1. Identify a capability gap. e.g. if current weather/stock prices cannot be obtained from current Auto-GPT capabilities
  2. Provide a static list of known API libraries (or discover via google search).
  3. Find an API for the required capability.
  4. Create a python command in-memory to call the discovered API schema (see the sketch after this list).
  5. Call a sub-agent to impersonate the in-memory API python command.
  6. Ask the sub-agent to return a response for the new capability.
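A rough sketch of steps 4 and 5, assuming the model can emit a working Python function for a discovered API; exec-ing model output is unsafe and shown purely for illustration:

generated_source = '''
def get_weather(city: str) -> str:
    # hypothetical function the model would generate against a discovered API
    return f"Sunny in {city}"
'''

namespace = {}
exec(generated_source, namespace)         # register the in-memory command
print(namespace["get_weather"]("Paris"))  # the new capability is now callable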

Some example API libraries:

The model: `gpt-4` does not exist

I get the following error:

File "C:\Users\PranavTej\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_requestor.py", line 679, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The model: gpt-4 does not exist
