Ouroboros: AI-Driven Self-Improving Experimentation System


Ouroboros is an advanced AI system designed to conduct experiments, improve itself, and push the boundaries of AI research. It operates in cycles, running experiments, analyzing results, and building upon its own knowledge.

Summary

Ouroboros is an autonomous AI system designed to help automate AI research and development. Leveraging the advanced capabilities of large language models like Claude 3.5 and GPT-4, Ouroboros conducts experiments, analyzes results, and iteratively improves upon its own knowledge and capabilities. This self-improving cycle mimics the ouroboros symbol, representing eternal growth and evolution in AI.

Features

  • Flexible AI Model Support: Utilizes either Claude (Anthropic) or GPT (OpenAI) models for sophisticated reasoning and decision-making.
  • Sandboxed Environment: Executes code safely within Docker containers, ensuring system integrity.
  • Host GPU Support: Can expose the host's GPU to experiment containers through Docker's GPU integration.
  • Version Control: Integrates with Git to maintain a comprehensive history of all experiments.
  • Persistent Storage: Uses SQLite to store experiment results, metadata, and AI insights.
  • Configurable Parameters: Allows fine-tuning of experiment duration, action limits, resource allocation, and AI model selection.
  • Knowledge Retention: Can search and reference previous experiments for continuous learning.
  • Internet Research: Capable of performing Google searches and loading webpage content for information gathering.
  • Automated Cycle Management: Schedules and executes experiment cycles with minimal human intervention.
  • Extensive Logging: Maintains detailed logs for monitoring, debugging, and auditing purposes.

How It Works

  1. Initialization: The system sets up the database, Git repository, and Docker environment.
  2. Experiment Cycle (sketched in code after this list):
    • Retrieves context from previous experiments
    • Generates AI prompt with current state and experiment history
    • AI decides on the next action (run code, modify environment, research, etc.)
    • Executes the chosen action securely
    • Records results and updates experiment history
    • Repeats until experiment completion or reaching predefined limits
  3. Data Persistence: Stores results in the database and commits to Git
  4. Scheduling: Waits for the configured interval before initiating the next cycle
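
In code, the cycle boils down to a simple loop. The sketch below is purely illustrative; the helper functions are hypothetical stand-ins for the real database, AI, and Docker plumbing, not the actual functions in ouroboros.py:

import time

# Conceptual sketch of one experiment cycle (hypothetical helper names).
# The stubs stand in for the real database, AI, and Docker logic.
def get_previous_context(experiment_id): return {}
def build_prompt(experiment_id, n, context, history): return f"action {n}"
def ai_decide(prompt): return {"action": "finalize", "data": "", "notes": ""}
def execute_action(decision): return "ok"
def save_results(experiment_id, history): pass

def run_experiment_cycle(experiment_id, max_actions, time_limit):
    context = get_previous_context(experiment_id)   # 1. retrieve prior context
    history, start = [], time.time()
    for action_number in range(1, max_actions + 1):
        if time.time() - start > time_limit:        # respect the time limit
            break
        prompt = build_prompt(experiment_id, action_number, context, history)
        decision = ai_decide(prompt)                # {"action", "data", "notes"}
        result = execute_action(decision)           # run code, edit Dockerfile, ...
        history.append((decision, result))          # feed back into the next prompt
        if decision["action"] == "finalize":        # AI concludes the experiment
            break
    save_results(experiment_id, history)            # persist to SQLite, commit to Git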

Workflow Diagram

[Diagram: Ouroboros workflow]

Ethical Considerations and Safety Measures

  • Isolated Execution: All code runs within restricted Docker containers
  • Resource Limitations: Strict controls on computational resources and execution time
  • Audit Trail: Comprehensive logging of AI decisions and actions
  • Configurable Internet Access: Research capabilities can be restricted if needed
  • Regular Review: Encourages human oversight and intervention when necessary

Potential Applications

  • Advancing AI research methodologies
  • Automated software testing and optimization
  • Exploring novel algorithms and data structures
  • Investigating AI safety and alignment strategies
  • Accelerating scientific discovery in various domains

Prerequisites

  • Python 3.9+: Ensure you have Python version 3.9 or later installed.
  • Docker: Install Docker to run experiments in a secure, sandboxed environment.
  • Git: Git is required for version control of experiment history.
  • Sufficient Disk Space: Ensure you have enough disk space to store AI-generated Docker images, experiment results, and logs.
  • Anthropic API Key: Required if leveraging Claude AI models.
  • OpenAI API Key: Required if leveraging OpenAI AI models.
  • Google API Key and Custom Search Engine ID (optional): These are required for enabling the Google search feature.

    Note: The Google API feature is optional and not required for the base functionality of Ouroboros.
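
For reference, a Programmable Search query from Python commonly looks like the sketch below, using the google-api-python-client package; this is a generic example of the API, not necessarily the exact code in ouroboros.py:

# Generic Google Custom Search example (pip install google-api-python-client).
from googleapiclient.discovery import build

def google_search(query, api_key, cse_id, max_results=5):
    service = build("customsearch", "v1", developerKey=api_key)
    response = service.cse().list(q=query, cx=cse_id, num=max_results).execute()
    # Each result item carries a title, link, and snippet.
    return [(item["title"], item["link"]) for item in response.get("items", [])]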

Installation

  1. Clone the repository:

    • git clone https://github.com/davidpcahill/ouroboros.git
    • cd ouroboros
  2. Create a virtual environment and activate it:

    • python -m venv venv
    • On Linux or macOS: source venv/bin/activate
    • On Windows: venv\Scripts\activate
  3. Install the required packages:

    • pip install -r requirements.txt
  4. Create a config.ini file in the project root with the following content:

[AI]
PROVIDER = claude  # Options: claude, openai

[Anthropic]
API_KEY = your_anthropic_api_key_here
MODEL = claude-3-5-sonnet-20240620
MAX_OUTPUT_TOKENS = 4096
TEMPERATURE = 0.7

[OpenAI]
API_KEY = your_openai_api_key_here
MODEL = gpt-3.5-turbo
MAX_TOKENS = 4096
TEMPERATURE = 0.7

[Google]
API_KEY = your_google_api_key_here
CSE_ID = your_custom_search_engine_id_here
MaxResults = 5

[Docker]
MemoryLimit = 512m
CPUQuota = 50000
Timeout = 300
NetworkAccess = false
EnableCleanup = true
KeepLastNImages = 5

[Search]
MaxResults = 5

[Scheduling]
IntervalMinutes = 60
RunFirstImmediately = true

[Experiment]
MaxActions = 10
MaxErrors = 3
TimeLimit = 3600

[AI Prompt]
PROMPT = """
Experiment #{experiment_id}

You are Ouroboros, an advanced AI system designed to self-improve and push the boundaries of AI research through innovative experiments.

Current Status:
- Action: {current_action} of {max_actions}
- Time remaining: {time_remaining:.2f} seconds
- Network access: {network_access}
- GPU access: {gpu_access}

Previous experiment data:
{prev_data}

Current experiment action history:
{action_history}

Current Dockerfile:
{current_dockerfile}

IMPORTANT INSTRUCTIONS:
1. Prioritize running code over updating the Dockerfile. Only update the Dockerfile when absolutely necessary for your experiments.
2. Aim to write and execute Python code that advances AI research or demonstrates novel concepts.
3. Use the "run" action to execute your Python code in the Docker environment.
4. Only use the "dockerfile" action if you need a new library or environment setup that's crucial for your next experiment.
5. Remember that each experiment should build upon the previous ones, aiming for continuous improvement and innovation.
6. **Experiment Lifecycle:**
   - Each experiment runs in isolation with a maximum of {max_actions} actions.
   - Use "finalize" on or before the last action.

**EXPERIMENTATION GUIDELINES:**
- Be creative and ambitious. Start simple if it's your first experiment.
- Build on previous results, focusing on self-improvement and advancing AI research.
- Explore areas such as novel ML algorithms, efficient data processing, NLP, reasoning, meta-learning, or even fun experiments.
- Document your process and findings in your code comments and final notes.

**RESPONSE FORMAT:**
Respond with a JSON object containing one action with "action," "data," and "notes" fields. Possible actions:
1. "dockerfile": Update the Dockerfile (creates a new container with the updated environment)
2. "run": Execute Python code (runs in the current container)
3. "search": Search previous experiments
4. "google": Perform a Google search
5. "loadurl": Load a webpage
6. "finalize": Conclude the experiment

Example response format:
{{
    "action": "dockerfile",
    "data": "FROM python:3.9-slim\\nWORKDIR /app\\nRUN pip install numpy",
    "notes": "Adding numpy for numerical computations."
}}

What would you like to do next? Be bold, creative, and aim for breakthroughs!
"""

Note: You can use config.ini.example as a template.

Note: The Google API section is optional and not required for the base functionality of Ouroboros.
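
For a sense of how the [Docker] settings take effect, here is an illustrative call using the Docker SDK for Python (pip install docker); the image tag and entry point are hypothetical, and the actual invocation in ouroboros.py may differ:

# Illustrative mapping of [Docker] config values onto container limits.
import docker

client = docker.from_env()
container = client.containers.run(
    "ouroboros-experiment:latest",  # hypothetical image tag
    "python /app/experiment.py",    # hypothetical entry point
    mem_limit="512m",               # [Docker] MemoryLimit
    cpu_quota=50000,                # [Docker] CPUQuota (~50% of one CPU)
    network_disabled=True,          # [Docker] NetworkAccess = false
    detach=True,
)
container.wait(timeout=300)         # [Docker] Timeout
print(container.logs().decode())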

  5. (Optional) Create an access.txt file in the project root to include any additional API keys or credentials you want to make available to the AI during experiments. Use the following format:

    [key]=[value]
    SOME_API_KEY=your_api_key_here
    

Note: You can use access.txt.example as a template.
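
Since access.txt is a flat KEY=value file, it can be parsed in a few lines; this sketch is illustrative and may not match the exact logic in ouroboros.py:

# Minimal parser for KEY=value lines in access.txt (illustrative).
def load_access_keys(path="access.txt"):
    keys = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                name, _, value = line.partition("=")
                keys[name.strip()] = value.strip()
    return keys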

Usage

Run the Ouroboros system:

python ouroboros.py

The system will run experiments in cycles; each cycle continues until the configured time limit is reached or the maximum number of actions is used.

Configuration

You can adjust the following parameter sections in the config.ini file:

  • AI: Choose the AI provider (claude or openai)
  • Anthropic: Set your Claude API key and model parameters
  • OpenAI: Set your OpenAI API key and model parameters
  • Google: Set your Google API key and Custom Search Engine ID for the Google search feature
  • Docker: Configure resource limits for Docker containers
  • Search: Set the maximum number of results returned by the experiment knowledge search function
  • Scheduling: Set the interval between experiment cycles
  • Experiment: Configure the maximum number of actions and time limit for each experiment
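
Because config.ini uses standard INI syntax, most of its scalar settings can be read with Python's built-in configparser. A minimal sketch (the project's actual loading code may structure this differently):

# Reading config.ini with the standard library (illustrative).
import configparser

config = configparser.ConfigParser()
config.read("config.ini")

provider = config.get("AI", "PROVIDER", fallback="claude")
max_actions = config.getint("Experiment", "MaxActions", fallback=10)
time_limit = config.getint("Experiment", "TimeLimit", fallback=3600)
network_access = config.getboolean("Docker", "NetworkAccess", fallback=False)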

Project Structure

  • ouroboros.py: Main script containing the Ouroboros system
  • config.ini: Configuration file for system parameters and API keys
  • access.txt: (Optional) File containing additional API keys or credentials for the AI
  • ouroboros.db: SQLite database storing experiment results and metadata
  • experiments/: Directory containing individual experiment files and Git repository
  • ouroboros.log: Log file for system events and errors

AI Prompt Variables

When customizing the AI prompt in the config.ini file, you can use the following variables to give context to the model. These will be automatically replaced with the current experiment data:

  • {experiment_id}: The unique identifier for the current experiment.
  • {current_action}: The number of the current action in the experiment.
  • {max_actions}: The maximum number of actions allowed for the experiment.
  • {time_remaining}: The remaining time for the experiment in seconds.
  • {network_access}: Indicates whether network access is enabled ("Enabled" or "Disabled").
  • {gpu_access}: Indicates whether GPU access is enabled ("Enabled" or "Disabled").
  • {prev_data}: Data from the previous experiment, if available.
  • {action_history}: A JSON-formatted string containing the history of actions taken in the current experiment.
  • {current_dockerfile}: The current content of the Dockerfile.
  • {access_info}: A JSON-formatted string containing additional API keys or credentials provided in the access.txt file.

Example usage in the prompt:

Experiment #{experiment_id}

Current Status:
- Action: {current_action} of {max_actions}
- Time remaining: {time_remaining:.2f} seconds
- Network access: {network_access}
- GPU access: {gpu_access}

Current experiment action history:
{action_history}

Previous experiment data:
{prev_data}

Current Dockerfile:
{current_dockerfile}

Access Credentials:
{access_info}

You can use these variables to provide context to the AI about the current state of the experiment and its environment.
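
These placeholders are ordinary Python str.format fields, which is why {time_remaining:.2f} carries a format spec and why literal braces in the JSON example are doubled as {{ }}. A minimal illustration with made-up values:

# The prompt template is filled in with str.format (illustrative values).
template = (
    "Experiment #{experiment_id}\n"
    "Action: {current_action} of {max_actions}\n"
    "Time remaining: {time_remaining:.2f} seconds"
)
print(template.format(
    experiment_id=2,
    current_action=7,
    max_actions=10,
    time_remaining=3515.31,
))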

Example Output

2024-07-18 00:23:00,844 - INFO - ==================================================
2024-07-18 00:23:01,844 - INFO - Experiment 2: Action 7/10 | Time remaining: 3515.31s
2024-07-18 00:23:01,845 - INFO - Preparing AI prompt for Experiment 2
2024-07-18 00:23:01,845 - INFO - Current action: 7/10
2024-07-18 00:23:01,846 - INFO - Time remaining: 3515.31 seconds
2024-07-18 00:23:01,846 - INFO - Network access: Disabled
2024-07-18 00:23:01,846 - INFO - GPU access: Disabled
2024-07-18 00:23:01,847 - INFO - Action history: dockerfile (Installing numpy since it is u...), run (Performing a simple linear reg...), run (Implementing a basic Genetic A...), run (Fixing the error related to th...), run (Fixing the error related to pr...), run (Fixing the error related to th...)
2024-07-18 00:23:01,847 - INFO - Current Dockerfile:
FROM python:3.9-slim
WORKDIR /app
RUN pip install numpy
2024-07-18 00:23:01,848 - INFO - Sending request to AI - Action 7/10
2024-07-18 00:23:01,953 - INFO - Sending request to OpenAI using model: gpt-4o with max_tokens: 4096
2024-07-18 00:23:16,889 - INFO - OpenAI response received
2024-07-18 00:23:16,889 - INFO - Full AI response:
{
    "action": "run",
    "data": "# Q-learning implementation in a simple grid environment\nimport numpy as np\nimport random\n\n# Define the environment (simple grid world)\nclass GridWorld:\n    def __init__(self, size):\n        self.size = size\n        self.state = (0, 0)\n        self.goal = (size - 1, size - 1)\n\n    def reset(self):\n        self.state = (0, 0)\n        return self.state\n\n    def step(self, action):\n        if action == 0:  # up\n            next_state = (max(self.state[0] - 1, 0), self.state[1])\n        elif action == 1:  # down\n            next_state = (min(self.state[0] + 1, self.size - 1), self.state[1])\n        elif action == 2:  # left\n            next_state = (self.state[0], max(self.state[1] - 1, 0))\n        elif action == 3:  # right\n            next_state = (self.state[0], min(self.state[1] + 1, self.size - 1))\n        else:\n            next_state = self.state\n\n        reward = -1\n        if next_state == self.goal:\n            reward = 0\n\n        self.state = next_state\n        return next_state, reward\n\n# Q-learning algorithm\nclass QLearningAgent:\n    def __init__(self, env, learning_rate=0.1, discount_factor=0.9, epsilon=0.1):\n        self.env = env\n        self.learning_rate = learning_rate\n        self.discount_factor = discount_factor\n        self.epsilon = epsilon\n        self.q_table = np.zeros((env.size, env.size, 4))\n\n    def choose_action(self, state):\n        if random.uniform(0, 1) < self.epsilon:\n            return random.randint(0, 3)\n        else:\n            return np.argmax(self.q_table[state[0], state[1], :])\n\n    def learn(self, state, action, reward, next_state):\n        predict = self.q_table[state[0], state[1], action]\n        target = reward + self.discount_factor * np.max(self.q_table[next_state[0], next_state[1], :])\n        self.q_table[state[0], state[1], action] += self.learning_rate * (target - predict)\n\n# Training the agent\nenv = GridWorld(size=5)\nagent = QLearningAgent(env)\nn_episodes = 1000\n\nfor episode in range(n_episodes):\n    state = env.reset()\n    while state != env.goal:\n        action = agent.choose_action(state)\n        next_state, reward = env.step(action)\n        agent.learn(state, action, reward, next_state)\n        state = next_state\n\nprint('Q-learning training completed.')\nprint('Q-table:')\nprint(agent.q_table)\n",
    "notes": "Implementing a basic Q-learning algorithm in a simple grid world environment to demonstrate reinforcement learning concepts without requiring additional dependencies. This should validate the current Docker environment and advance AI research."
}
2024-07-18 00:23:18,646 - INFO - Docker code execution completed
2024-07-18 00:23:18,664 - INFO - Committed changes to Git: Experiment 2: Run code
2024-07-18 00:23:18,664 - INFO - Code executed for experiment 2
2024-07-18 00:23:18,664 - INFO - ==================================================
2024-07-18 00:23:18,665 - INFO - Docker execution results for experiment 2:
2024-07-18 00:23:18,665 - INFO - Q-learning training completed.
Q-table:
[[[-5.59182288 -5.21650293 -5.623697   -5.21651194]
  [-4.85509383 -4.68513937 -5.2596451  -4.68514735]
  [-4.1591187  -4.09478768 -4.64605785 -4.09476171]
  [-3.54372243 -3.4388247  -3.90017778 -3.43882478]
  [-2.87722431 -2.70998768 -3.04022019 -2.97358602]]

 [[-5.10150979 -4.68513599 -4.95444144 -4.6851281 ]
  [-4.68675223 -4.09493396 -4.53668218 -4.09492765]
  [-3.91284534 -3.43894586 -4.01413915 -3.43894707]
  [-3.46378767 -2.70998808 -3.27480557 -2.70998901]
  [-2.717768   -1.9        -2.65704522 -2.27679382]]

 [[-4.25954778 -4.09475878 -4.30786508 -4.09475621]
  [-3.64374882 -3.43895133 -3.90806736 -3.43894712]
  [-3.22828059 -2.70999189 -3.29398924 -2.70999219]
  [-2.74839653 -1.89999909 -2.90721901 -1.8999991 ]
  [-2.10320074 -1.         -1.95557764 -1.72391326]]

 [[-4.05099342 -3.43880501 -3.68908453 -3.43880487]
  [-3.0962985  -2.70998925 -3.12413809 -2.70998938]
  [-2.4310459  -1.89999934 -2.08712983 -1.89999929]
  [-1.92669317 -0.99999997 -2.10004736 -0.99999997]
  [-1.72107129  0.         -1.41020716 -0.79410887]]

 [[-2.8188625  -3.02290413 -3.02263068 -2.70998701]
  [-2.11218602 -2.17490135 -2.51762266 -1.9       ]
  [-1.97779542 -1.49182903 -2.05948484 -1.        ]
  [-1.43590455 -0.86491483 -1.61618366  0.        ]
  [ 0.          0.          0.          0.        ]]]

2024-07-18 00:23:18,666 - INFO - ==================================================
2024-07-18 00:23:19,667 - INFO - Experiment 2: Action 8/10 | Time remaining: 3497.49s
2024-07-18 00:23:19,667 - INFO - Preparing AI prompt for Experiment 2
2024-07-18 00:23:19,668 - INFO - Current action: 8/10
2024-07-18 00:23:19,668 - INFO - Time remaining: 3497.49 seconds
2024-07-18 00:23:19,668 - INFO - Network access: Disabled
2024-07-18 00:23:19,669 - INFO - GPU access: Disabled
2024-07-18 00:23:19,669 - INFO - Action history: dockerfile (Installing numpy since it is u...), run (Performing a simple linear reg...), run (Implementing a basic Genetic A...), run (Fixing the error related to th...), run (Fixing the error related to pr...), run (Fixing the error related to th...), run (Implementing a basic Q-learnin...)
2024-07-18 00:23:19,669 - INFO - Current Dockerfile:
FROM python:3.9-slim
WORKDIR /app
RUN pip install numpy
2024-07-18 00:23:19,671 - INFO - Sending request to AI - Action 8/10
2024-07-18 00:23:19,776 - INFO - Sending request to OpenAI using model: gpt-4o with max_tokens: 4096
2024-07-18 00:23:27,316 - INFO - OpenAI response received
2024-07-18 00:23:27,317 - INFO - Full AI response:
{
  "action": "run",
  "data": "# Simple implementation of a Neural Network from scratch using NumPy\nimport numpy as np\n\n# Define the sigmoid activation function and its derivative\ndef sigmoid(x):\n    return 1 / (1 + np.exp(-x))\n\ndef sigmoid_derivative(x):\n    return x * (1 - x)\n\n# Training data (XOR problem)\ninputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])\noutputs = np.array([[0], [1], [1], [0]])\n\n# Initialize weights randomly with mean 0\nnp.random.seed(1)\nweights0 = 2 * np.random.random((2, 4)) - 1\nweights1 = 2 * np.random.random((4, 1)) - 1\n\n# Training the neural network\nfor j in range(60000):\n    # Forward propagation\n    layer0 = inputs\n    layer1 = sigmoid(np.dot(layer0, weights0))\n    layer2 = sigmoid(np.dot(layer1, weights1))\n\n    # Calculate the error\n    layer2_error = outputs - layer2\n    if (j % 10000) == 0:\n        print(f'Error after {j} iterations: {np.mean(np.abs(layer2_error))}')\n\n    # Backpropagation\n    layer2_delta = layer2_error * sigmoid_derivative(layer2)\n    layer1_error = layer2_delta.dot(weights1.T)\n    layer1_delta = layer1_error * sigmoid_derivative(layer1)\n\n    # Update weights\n    weights1 += layer1.T.dot(layer2_delta)\n    weights0 += layer0.T.dot(layer1_delta)\n\nprint('Output after training:')\nprint(layer2)",
  "notes": "Implementing a basic neural network from scratch using NumPy to explore AI research boundaries without additional dependencies."
}
2024-07-18 00:23:42,344 - INFO - Docker code execution completed
2024-07-18 00:23:42,360 - INFO - Committed changes to Git: Experiment 2: Run code
2024-07-18 00:23:42,361 - INFO - Code executed for experiment 2
2024-07-18 00:23:42,361 - INFO - ==================================================
2024-07-18 00:23:42,361 - INFO - Docker execution results for experiment 2:
2024-07-18 00:23:42,362 - INFO - Error after 0 iterations: 0.4995289647766772
Error after 10000 iterations: 0.018951601482048643
Error after 20000 iterations: 0.012674637262799144
Error after 30000 iterations: 0.010027680242236308
Error after 40000 iterations: 0.00850592237103725
Error after 50000 iterations: 0.00749819542448113
Output after training:
[[0.00802491]
 [0.99405998]
 [0.99281435]
 [0.00594029]]

2024-07-18 00:23:42,362 - INFO - ==================================================
2024-07-18 00:23:43,363 - INFO - Experiment 2: Action 9/10 | Time remaining: 3473.79s
2024-07-18 00:23:43,364 - INFO - Preparing AI prompt for Experiment 2
2024-07-18 00:23:43,364 - INFO - Current action: 9/10
2024-07-18 00:23:43,364 - INFO - Time remaining: 3473.79 seconds
2024-07-18 00:23:43,365 - INFO - Network access: Disabled
2024-07-18 00:23:43,365 - INFO - GPU access: Disabled
2024-07-18 00:23:43,365 - INFO - Action history: dockerfile (Installing numpy since it is u...), run (Performing a simple linear reg...), run (Implementing a basic Genetic A...), run (Fixing the error related to th...), run (Fixing the error related to pr...), run (Fixing the error related to th...), run (Implementing a basic Q-learnin...), run (Implementing a basic neural ne...)
2024-07-18 00:23:43,366 - INFO - Current Dockerfile:
FROM python:3.9-slim
WORKDIR /app
RUN pip install numpy
2024-07-18 00:23:43,368 - INFO - Sending request to AI - Action 9/10
2024-07-18 00:23:43,473 - INFO - Sending request to OpenAI using model: gpt-4o with max_tokens: 4096
2024-07-18 00:23:51,681 - INFO - OpenAI response received
2024-07-18 00:23:51,681 - INFO - Full AI response:
{
    "action": "run",
    "data": "# Implementing a basic support vector machine (SVM) from scratch using NumPy\nimport numpy as np\n\n# Define the SVM class\nclass SVM:\n    def __init__(self, learning_rate=0.001, lambda_param=0.01, n_iters=1000):\n        self.learning_rate = learning_rate\n        self.lambda_param = lambda_param\n        self.n_iters = n_iters\n        self.w = None\n        self.b = None\n\n    def fit(self, X, y):\n        n_samples, n_features = X.shape\n        y_ = np.where(y <= 0, -1, 1)\n        self.w = np.zeros(n_features)\n        self.b = 0\n\n        for _ in range(self.n_iters):\n            for idx, x_i in enumerate(X):\n                condition = y_[idx] * (np.dot(x_i, self.w) - self.b) >= 1\n                if condition:\n                    self.w -= self.learning_rate * (2 * self.lambda_param * self.w)\n                else:\n                    self.w -= self.learning_rate * (2 * self.lambda_param * self.w - np.dot(x_i, y_[idx]))\n                    self.b -= self.learning_rate * y_[idx]\n\n    def predict(self, X):\n        approx = np.dot(X, self.w) - self.b\n        return np.sign(approx)\n\n# Generating synthetic data for a binary classification problem\nnp.random.seed(1)\nX = np.random.randn(200, 2)\ny = np.array([1 if x[0] * x[1] > 0 else 0 for x in X])\n\n# Train the SVM model\nsvm = SVM()\nsvm.fit(X, y)\n\n# Predict using the SVM model\npredictions = svm.predict(X)\n\n# Output the results\nprint(f\"Weights: {svm.w}\")\nprint(f\"Bias: {svm.b}\")\nprint(f\"Predictions: {predictions}\")\n",
    "notes": "Implementing a basic support vector machine (SVM) from scratch using NumPy to explore the boundaries of AI research without requiring additional dependencies."
}
2024-07-18 00:24:11,142 - INFO - Docker code execution completed
2024-07-18 00:24:11,160 - INFO - Committed changes to Git: Experiment 2: Run code
2024-07-18 00:24:11,160 - INFO - Code executed for experiment 2
2024-07-18 00:24:11,161 - INFO - ==================================================
2024-07-18 00:24:11,162 - INFO - Docker execution results for experiment 2:
2024-07-18 00:24:11,162 - INFO - Weights: [0.52207935 0.3322308 ]
Bias: 0.33500000000000024
Predictions: [ 1. -1. -1.  1. -1. -1. -1. -1. -1. -1. -1.  1. -1. -1. -1. -1. -1. -1.
 -1.  1. -1. -1. -1.  1. -1. -1. -1. -1.  1.  1. -1. -1. -1.  1.  1. -1.
  1. -1. -1.  1. -1. -1. -1. -1.  1.  1. -1. -1. -1. -1. -1.  1. -1. -1.
 -1. -1. -1. -1. -1.  1. -1.  1. -1.  1. -1. -1. -1. -1. -1.  1. -1.  1.
 -1. -1.  1. -1.  1. -1. -1. -1. -1. -1. -1.  1.  1. -1.  1. -1.  1. -1.
 -1.  1. -1.  1. -1.  1.  1. -1.  1.  1. -1.  1. -1. -1. -1. -1.  1. -1.
 -1.  1.  1.  1. -1. -1. -1. -1.  1. -1. -1. -1. -1. -1. -1. -1.  1. -1.
 -1. -1.  1. -1. -1. -1.  1. -1. -1.  1.  1. -1. -1. -1. -1.  1. -1. -1.
 -1. -1. -1. -1.  1. -1.  1. -1.  1. -1. -1.  1.  1.  1. -1. -1. -1. -1.
  1. -1. -1.  1. -1.  1. -1. -1. -1. -1. -1. -1. -1.  1. -1. -1. -1. -1.
 -1. -1. -1. -1.  1. -1. -1.  1.  1. -1.  1. -1. -1. -1. -1. -1. -1. -1.
 -1.  1.]

2024-07-18 00:24:11,162 - INFO - ==================================================
2024-07-18 00:24:12,163 - INFO - Experiment 2: Action 10/10 | Time remaining: 3444.99s
2024-07-18 00:24:12,164 - INFO - Preparing AI prompt for Experiment 2
2024-07-18 00:24:12,164 - INFO - Current action: 10/10
2024-07-18 00:24:12,164 - INFO - Time remaining: 3444.99 seconds
2024-07-18 00:24:12,165 - INFO - Network access: Disabled
2024-07-18 00:24:12,165 - INFO - GPU access: Disabled
2024-07-18 00:24:12,165 - INFO - Action history: dockerfile (Installing numpy since it is u...), run (Performing a simple linear reg...), run (Implementing a basic Genetic A...), run (Fixing the error related to th...), run (Fixing the error related to pr...), run (Fixing the error related to th...), run (Implementing a basic Q-learnin...), run (Implementing a basic neural ne...), run (Implementing a basic support v...)
2024-07-18 00:24:12,166 - INFO - Current Dockerfile:
FROM python:3.9-slim
WORKDIR /app
RUN pip install numpy
2024-07-18 00:24:12,167 - INFO - Sending request to AI - Action 10/10
2024-07-18 00:24:12,276 - INFO - Sending request to OpenAI using model: gpt-4o with max_tokens: 4096
2024-07-18 00:24:17,701 - INFO - OpenAI response received
2024-07-18 00:24:17,702 - INFO - Full AI response:
{
    "action": "run",
    "data": "# Implementing a basic reinforcement learning algorithm using a multi-armed bandit problem\nimport numpy as np\n\nclass MultiArmedBandit:\n    def __init__(self, k):\n        self.k = k\n        self.q_true = np.random.randn(k)  # True rewards\n        self.q_est = np.zeros(k)  # Estimated rewards\n        self.action_count = np.zeros(k)  # Count of actions taken\n\n    def select_action(self, epsilon):\n        if np.random.rand() < epsilon:\n            return np.random.randint(self.k)  # Explore\n        else:\n            return np.argmax(self.q_est)  # Exploit\n\n    def take_action(self, action):\n        reward = np.random.randn() + self.q_true[action]\n        self.action_count[action] += 1\n        self.q_est[action] += (reward - self.q_est[action]) / self.action_count[action]\n        return reward\n\n    def run(self, episodes, epsilon):\n        rewards = np.zeros(episodes)\n        for episode in range(episodes):\n            action = self.select_action(epsilon)\n            reward = self.take_action(action)\n            rewards[episode] = reward\n        return rewards\n\n# Parameters\nk = 10  # Number of actions\nepisodes = 1000\nepsilon = 0.1  # Exploration rate\n\n# Initialize and run the multi-armed bandit\nbandit = MultiArmedBandit(k)\nrewards = bandit.run(episodes, epsilon)\n\n# Output the results\nprint(f\"Average reward: {np.mean(rewards)}\")\nprint(f\"Estimated action values: {bandit.q_est}\")\n",
    "notes": "Implementing a multi-armed bandit problem to explore reinforcement learning techniques and validate the current environment's capabilities."
}
2024-07-18 00:24:18,740 - INFO - Docker code execution completed
2024-07-18 00:24:18,747 - INFO - Committed changes to Git: Experiment 2: Run code
2024-07-18 00:24:18,747 - INFO - Code executed for experiment 2
2024-07-18 00:24:18,747 - INFO - ==================================================
2024-07-18 00:24:18,747 - INFO - Docker execution results for experiment 2:
2024-07-18 00:24:18,747 - INFO - Average reward: 0.2364657078719237
Estimated action values: [ 0.33149868  0.23115508 -0.23789625  0.09697234  0.25074776 -0.15770472
 -0.67203448 -0.13754385 -0.96107909 -1.420832  ]

2024-07-18 00:24:18,748 - INFO - ==================================================
2024-07-18 00:24:29,348 - INFO - Detailed experiment log saved to /home/ouroboros/ouroboros/experiments/experiment_2/experiment_2_detailed_log.txt
2024-07-18 00:24:29,360 - INFO - Experiment log saved to /home/ouroboros/ouroboros/experiments/experiment_2/experiment_log.txt
2024-07-18 00:24:29,376 - INFO - Committed changes to Git: Experiment 2: Save experiment log
2024-07-18 00:24:29,376 - INFO - Experiment 2 completed with 10 actions
2024-07-18 00:24:29,376 - INFO - Final notes: Experiment ended after 10 actions without explicit finalization.
2024-07-18 00:24:29,377 - INFO - Experiment cycle 2 finished
Running experiments: 1cycles [03:00, 180.87s/cycles]
2024-07-18 00:24:36,955 - INFO - Ouroboros system stopped by user.

Contribute

Ouroboros aims to take meaningful steps toward autonomous AI research and development. We welcome contributions, suggestions, and discussions on improving and expanding its capabilities.

License

Ouroboros is licensed under the Ouroboros Public License, Version 1.0 (the "License"); you may not use this software except in compliance with the License. You may use, distribute, and modify the software under the terms of the License, but you are not permitted to sell or repurpose the software for commercial distribution without explicit permission.

Terms and Conditions

  1. Permission is granted to use, copy, modify, and distribute this software for any purpose, with or without fee, subject to the following conditions:

    • The original author must be credited.
    • Redistributions in source or binary forms must include the original license and terms.
    • You may not sell or repurpose the software for commercial distribution.
  2. Disclaimer:

    • The software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and noninfringement.
    • In no event shall the authors be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.

For more details, see the LICENSE.md file.

Disclaimer

This system has significant capabilities and potential risks. Always run it in a controlled environment, monitor its activities closely, and be prepared to intervene if necessary. Regularly review the AI's actions and outputs to ensure it's operating within expected parameters and ethical guidelines.



Potential AI-Suggested Improvements

  • Modularization: The code could benefit from being split into multiple modules, for example:
    • config.py for configuration-related functions
    • database.py for database operations
    • docker_utils.py for Docker-related functions
    • ai_utils.py for AI interaction functions
    • experiment.py for experiment-related functions

  • Use of context managers: Implement context managers for resource management, especially for database connections and Docker clients, to ensure proper cleanup of resources (see the sketch after this list).
  • Type hinting: Add type hints to function parameters and return values to improve code readability and catch potential type-related errors early.
  • Asynchronous operations: Consider using asynchronous programming (asyncio) for I/O-bound operations like API calls and database operations to improve performance.
  • Configuration management: Instead of reading the config file multiple times, read it once at startup and pass the configuration object to functions that need it.
  • Error handling: Implement more granular error handling and potentially a custom exception hierarchy for better error management.
  • Logging improvements: Implement a rotating file handler for logs to prevent log files from growing too large.
  • Code duplication: There is some duplication in the AI response handling for Claude and OpenAI; this could be refactored into a common interface.
  • Constants: Move magic strings and numbers into named constants at the top of the file or in a separate constants.py file.
  • Docker resource management: Implement a more robust system for managing Docker resources, possibly using a context manager to ensure cleanup.
  • Unit tests: Add unit tests for individual functions to ensure reliability and ease future refactoring.
  • Documentation: Add more inline documentation and possibly generate API documentation using tools like Sphinx.
  • Environment variables: Consider using environment variables for sensitive information instead of reading it from a file.
  • Dependency injection: Implement dependency injection for better testability and flexibility.
  • Rate limiting: Implement rate limiting for API calls to avoid hitting provider rate limits.
  • Caching: Implement caching for frequently accessed data to reduce database and API calls.
  • Code style: Ensure consistent code style throughout, possibly using a linter like flake8 or a formatter like black.
  • Configuration validation: Validate the configuration file at startup to catch misconfigurations early.
  • Improved experiment cycle: Consider implementing a more robust state machine for the experiment cycle to handle states and transitions more clearly.
  • Parallel processing: If applicable, consider using multiprocessing for CPU-bound tasks to utilize multiple cores.
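
As one example of the context-manager suggestion above, database access could be wrapped like this (a sketch, not code from the repository):

# Sketch of a context manager for SQLite connections (not from the repository).
import sqlite3
from contextlib import contextmanager

@contextmanager
def get_db(path="ouroboros.db"):
    conn = sqlite3.connect(path)
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

# Usage:
# with get_db() as conn:
#     conn.execute("INSERT INTO experiments (notes) VALUES (?)", ("example",))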
