kyegomez / tree-of-thoughts

Plug-and-Play Implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that Elevates Model Reasoning by at least 70%

Home Page: https://discord.gg/qUtxnK2NMf

License: Apache License 2.0

Python 97.24% Shell 2.76%
artificial-intelligence chatgpt gpt4 multimodal prompt-engineering deep-learning prompt prompt-learning prompt-tuning

tree-of-thoughts's Introduction

Multi-Modality

Tree of Thoughts Banner


Paper link Author's implementation

Introduction

Tree of Thoughts (ToT) is a powerful and flexible algorithm that significantly advances model reasoning by up to 70%. This plug-and-play version allows you to connect your own models and experience superintelligence!
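At its core, the algorithm expands a tree of intermediate "thoughts": at each step it asks the model for several candidate next thoughts, scores each resulting state with an evaluation prompt, prunes low-value branches, and continues from the best survivors. A minimal, library-independent sketch of that loop (the `generate_thoughts` and `evaluate_state` callables are stand-ins for your model calls, not the library's API):

```python
import heapq

def tree_of_thoughts_search(initial_state, generate_thoughts, evaluate_state,
                            num_thoughts=3, max_steps=3, max_states=4,
                            pruning_threshold=0.5):
    """Greedy best-first Tree-of-Thoughts search sketch.

    generate_thoughts(state, k) -> list of k candidate next thoughts (strings)
    evaluate_state(state)       -> float in [0, 1]
    """
    frontier = [initial_state]
    best_state, best_value = initial_state, 0.0
    for _ in range(max_steps):
        candidates = []
        for state in frontier:
            for thought in generate_thoughts(state, num_thoughts):
                new_state = state + "\n" + thought
                value = evaluate_state(new_state)
                if value >= pruning_threshold:   # prune weak branches
                    candidates.append((value, new_state))
        if not candidates:
            break
        # keep only the highest-valued states for the next round
        candidates = heapq.nlargest(max_states, candidates, key=lambda c: c[0])
        frontier = [s for _, s in candidates]
        if candidates[0][0] > best_value:
            best_value, best_state = candidates[0]
    return best_state
```

The `num_thoughts`, `max_steps`, `max_states`, and `pruning_threshold` names deliberately mirror the `solve()` parameters used in the examples below: branching factor, search depth, beam width, and the evaluation cutoff for pruning.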

Install

$ pip3 install -U tree-of-thoughts

Usage

import os
from tree_of_thoughts import ToTAgent, MonteCarloSearch
from dotenv import load_dotenv
from swarms import Agent, OpenAIChat

load_dotenv()

# Get the API key from the environment
api_key = os.environ.get("OPENAI_API_KEY")

# Initialize an agent from swarms
agent = Agent(
    agent_name="tree_of_thoughts",
    agent_description="This agent uses the tree_of_thoughts library to generate thoughts.",
    system_prompt=None,
    llm=OpenAIChat(),
)

# Wrap the swarms agent in the ToTAgent class
model = ToTAgent(
    agent,
    strategy="cot",
    evaluation_strategy="value",
    enable_react=True,
    k=3,
)


# Initialize the MonteCarloSearch class with the model
tree_of_thoughts = MonteCarloSearch(model)

# Define the initial prompt
initial_prompt = """


Input: 2 8 8 14
Possible next steps:
2 + 8 = 10 (left: 8 10 14)
8 / 2 = 4 (left: 4 8 14)
14 + 2 = 16 (left: 8 8 16)
2 * 8 = 16 (left: 8 14 16)
8 - 2 = 6 (left: 6 8 14)
14 - 8 = 6 (left: 2 6 8)
14 / 2 = 7 (left: 7 8 8)
14 - 2 = 12 (left: 8 8 12)
Input: use 4 numbers and basic arithmetic operations (+-*/) to obtain 24 in 1 equation
Possible next steps:
"""

# Define the search hyperparameters
num_thoughts = 1
max_steps = 3
max_states = 4
pruning_threshold = 0.5


# Generate the thoughts
solution = tree_of_thoughts.solve(
    initial_prompt=initial_prompt,
    num_thoughts=num_thoughts,
    max_steps=max_steps,
    max_states=max_states,
    pruning_threshold=pruning_threshold,
    # sleep_time=sleep_time
)

print(f"Solution: {solution}")
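The Game of 24 steps in the prompt above are easy to verify mechanically, which is handy for sanity-checking generated thoughts. A small hypothetical helper (not part of the library) that checks a step like `2 + 8 = 10 (left: 8 10 14)` against the currently available numbers:

```python
import re
from collections import Counter

def check_step(numbers, step):
    """Verify one Game-of-24 step of the form 'a op b = c (left: ...)'.

    numbers: list of ints currently available, e.g. [2, 8, 8, 14]
    Returns True if a and b are available, the arithmetic is right,
    and the 'left' list equals the remaining numbers plus the result.
    """
    m = re.match(r"(\d+)\s*([-+*/])\s*(\d+)\s*=\s*(\d+)\s*\(left:\s*([\d\s]+)\)", step)
    if not m:
        return False
    a, op, b, c = int(m[1]), m[2], int(m[3]), int(m[4])
    left = [int(x) for x in m[5].split()]
    results = {"+": a + b, "-": a - b, "*": a * b, "/": a / b if b else None}
    if results[op] != c:
        return False
    remaining = Counter(numbers)
    if remaining[a] == 0:          # first operand must be available
        return False
    remaining[a] -= 1
    if remaining[b] == 0:          # second operand must be available
        return False
    remaining[b] -= 1
    remaining[c] += 1              # result becomes available
    return +remaining == Counter(left)   # unary + drops zero counts
```

A checker like this could serve as (part of) a deterministic `evaluate_state` for this task instead of asking the model to score its own arithmetic.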

ToT with HF LLM

To run Hugging Face Transformers with Tree of Thoughts:

import os
from tree_of_thoughts import ToTAgent, MonteCarloSearch
from dotenv import load_dotenv
from swarms import Agent, HuggingfaceLLM

load_dotenv()

# No OpenAI key is needed here; the Hugging Face model runs locally

# Initialize an agent from swarms
agent = Agent(
    agent_name="tree_of_thoughts",
    agent_description=(
        "This agent uses the tree_of_thoughts library to generate thoughts."
    ),
    system_prompt=None,
    llm=HuggingfaceLLM(
        "EleutherAI/gpt-neo-2.7B",
    ),
)

# Wrap the agent in the ToTAgent class
model = ToTAgent(
    agent,
    strategy="cot",
    evaluation_strategy="value",
    enable_react=True,
    k=3,
)


# Initialize the MonteCarloSearch class with the model
tree_of_thoughts = MonteCarloSearch(model)

# Define the initial prompt
initial_prompt = """


Input: 2 8 8 14
Possible next steps:
2 + 8 = 10 (left: 8 10 14)
8 / 2 = 4 (left: 4 8 14)
14 + 2 = 16 (left: 8 8 16)
2 * 8 = 16 (left: 8 14 16)
8 - 2 = 6 (left: 6 8 14)
14 - 8 = 6 (left: 2 6 8)
14 / 2 = 7 (left: 7 8 8)
14 - 2 = 12 (left: 8 8 12)
Input: use 4 numbers and basic arithmetic operations (+-*/) to obtain 24 in 1 equation
Possible next steps:
"""

# Define the search hyperparameters
num_thoughts = 1
max_steps = 3
max_states = 4
pruning_threshold = 0.5


# Generate the thoughts
solution = tree_of_thoughts.solve(
    initial_prompt=initial_prompt,
    num_thoughts=num_thoughts,
    max_steps=max_steps,
    max_states=max_states,
    pruning_threshold=pruning_threshold,
    # sleep_time=sleep_time
)

print(f"Solution: {solution}")

Basic Prompts

Imagine three different experts are answering this question. All experts will write down 1 step of their thinking, then share it with the group. Then all experts will go on to the next step, etc. If any expert realises they're wrong at any point then they leave. The question is...



################ 2nd ################

Simulate three brilliant, logical experts collaboratively answering a question. Each one verbosely explains their thought process in real-time, considering the prior explanations of others and openly acknowledging mistakes. At each step, whenever possible, each expert refines and builds upon the thoughts of others, acknowledging their contributions. They continue until there is a definitive answer to the question. For clarity, your entire response should be in a markdown table. The question is...


################ 3rd ################

Imagine three highly intelligent experts working together to answer a question. They will follow a tree of thoughts approach, where each expert shares their thought process step by step. They will consider the input from others, refine their thoughts, and build upon the group's collective knowledge. If an expert realizes their thought is incorrect, they will acknowledge it and withdraw from the discussion. Continue this process until a definitive answer is reached. Present the entire response in a markdown table. The question is...


################ 4th ################

Three experts with exceptional logical thinking skills are collaboratively answering a question using a tree of thoughts method. Each expert will share their thought process in detail, taking into account the previous thoughts of others and admitting any errors. They will iteratively refine and expand upon each other's ideas, giving credit where it's due. The process continues until a conclusive answer is found. Organize the entire response in a markdown table format. The question is...
################ 5th ################


Envision a group of three experts working in unison to tackle a question by employing a tree of thoughts strategy. Each expert will thoroughly explain their line of thinking at every step, while also considering the insights provided by their peers. They will openly recognize any mistakes and build upon the group's shared understanding. This iterative process will continue until a definitive solution is reached. Structure the entire response as a markdown table. The question is...



Acknowledgements

Thanks to Shunyu Yao (Princeton University), Dian Yu (Google DeepMind), Jeffrey Zhao (Google DeepMind), Izhak Shafran (Google DeepMind), Thomas L. Griffiths (Princeton University), Yuan Cao (Google DeepMind), and Karthik Narasimhan (Princeton University) for sharing this amazing work with the world!

And thanks to Phil Wang (lucidrains) for inspiring me to devote myself to open-source AI research.

License

Apache

tree-of-thoughts's People

Contributors

adiumene, bradegan, eltociear, goddest, kyegomez, wout145, yhyu13


tree-of-thoughts's Issues

A* search error

I'm not sure which parameters the different search algorithms take, so I had to go by trial and error. The following works, but inevitably runs into this error at some point:

import os
from tree_of_thoughts.openaiModels import OpenAILanguageModel
from tree_of_thoughts.treeofthoughts import TreeofThoughts,OptimizedTreeofThoughts
from tree_of_thoughts import MonteCarloTreeofThoughts, TreeofThoughtsBFS,TreeofThoughtsDFS,TreeofThoughtsBEST,TreeofThoughtsASearch
import openai
from dotenv import load_dotenv
load_dotenv()

api_key = os.getenv("OPENAI_API_KEY")
api_model= "gpt-3.5-turbo"

model = OpenAILanguageModel(api_key=api_key, api_model=api_model)

#tree_of_thoughts= TreeofThoughts(model) #search_algorithm)
# Initialize the MonteCarloTreeofThoughts class with the model
#tree_of_thoughts = MonteCarloTreeofThoughts(model)
#tree_of_thoughts = TreeofThoughtsDFS(model)
# tree_of_thoughts = TreeofThoughtsBEST(model)
# tree_of_thoughts = TreeofThoughtsBFS(model)
tree_of_thoughts = TreeofThoughtsASearch(model)
# Note: to reproduce the same results from the Tree of Thoughts paper (if not better),
# craft a one-shot chain-of-thought prompt for your task below


initial_prompt = """Envision a group of three experts working in unison to tackle a question by employing a tree of thoughts strategy. 
Each expert will thoroughly explain their line of thinking at every step, while also considering the insights provided by their peers. 
They will openly recognize any mistakes and build upon the group's shared understanding. They will focus on logic and avoid 
ungrounded statements or conclusions. Think systemically, strategically and creatively; use logic and explore extreme scenarios.
This iterative process will continue until a definitive solution is reached. Structure the entire response as a markdown table. 
The question is: 
    
    
    Which are the strategic implications of generative ai agents being able to consistently provide a correct (meaningful, relevant) 
    answer more than 50% of the time? Use logic, think at scale"""

# Solve a problem with the TreeofThoughts

num_thoughts = 3
max_steps = 3
max_states = 4
pruning_threshold = 0.5



solution = tree_of_thoughts.solve(
    initial_prompt=initial_prompt,
#    num_thoughts=num_thoughts, 
    max_steps=max_steps, 
  #  max_states=max_states, 
    #value_threshold=pruning_threshold,
    pruning_threshold=pruning_threshold,
    # sleep_time=sleep_time
)
print(f"solution: {solution}")

error:

  File ~\...\treeofthoughts.py:267 in reconstruct_path
    path = self.reconstruct_path(came_from, current_state)

TypeError: reconstruct_path() missing 1 required positional argument: 'initial_prompt'
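The traceback shows `reconstruct_path` calling itself recursively without the `initial_prompt` argument its signature requires, so any A* run that reaches path reconstruction will crash. For reference, a typical iterative A* path reconstruction needs only the `came_from` map and the goal state; a generic sketch (not the library's actual code) looks like this:

```python
def reconstruct_path(came_from, current):
    """Walk the came_from links back from the goal state to the start."""
    path = [current]
    while current in came_from:      # stop at the start state,
        current = came_from[current]  # which has no predecessor
        path.append(current)
    path.reverse()                   # start -> ... -> goal
    return path
```

If the library's version genuinely needs `initial_prompt`, the fix is simply to pass it through at the recursive call site at treeofthoughts.py:267.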

AuthenticationError when Using Tree of Thoughts with OpenAI's API

Hello,

I'm attempting to use your Tree of Thoughts (ToT) library in conjunction with OpenAI's API, but I'm encountering an issue with authentication. When I try to instantiate either the OpenAILanguageModel or OptimizedOpenAILanguageModel class with my API key, I get the following error:

openai.error.AuthenticationError: <empty message>

This is confusing because when I use the same API key in a standalone script to call OpenAI's API directly, it works perfectly fine.

Here's the relevant part of my code (with the API key redacted):

from tree_of_thoughts import OpenAILanguageModel, TreeofThoughts

api_key = "my-api-key"  # redacted
model = OpenAILanguageModel(api_key)
tree_of_thoughts = TreeofThoughts(model, "BFS")

Do you have any idea why this might be happening, or suggestions for how I could troubleshoot this issue? Your guidance would be greatly appreciated.

Thank you in advance for your help.

Best Regards,
Dennis

link to official repo at the top of readme

hi @kyegomez, it is absolutely not cool to close this issue twice without any action. #54 #55

you have to acknowledge that you are not the official implementation, and link to the official implementation at the top of README.md. Otherwise, when you implemented something wrong, people will be misled and think ToT is trash, and jeopardize the reputation of our research.

If you close again, I would have to report to GitHub to close down this content via https://docs.github.com/en/site-policy/content-removal-policies. thanks in advance.

Error: unexpected keyword argument in pipeline example

Hi,

I wanted to run the example "pipelinehuggingface.py" (placed in the "examples" folder), but I obtained the following error: it doesn't seem to recognize the 'search_algorithm' argument (passed to TreeOfThoughts).

(screenshot attached: "Capture d’écran du 2023-06-06 10-47-22")

Additional information: I used Python 3.10.6 and installed tree_of_thoughts with the Python setup script.

Best,

Christophe

TypeError: initialize_agent() missing 1 required positional argument: 'tools'

Hi, thank you for this! Question: I keep getting this error; any idea how to fix it?

(base) .../tree-of-thoughts/example.py
Traceback (most recent call last):
File ".../tree-of-thoughts/example.py", line 13, in
model = LangchainCustomLanguageModel(api_key=api_key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../tree-of-thoughts/example.py", line 50, in init
self.agent = initialize_agent(llm=model, agent=AgentType.REACT_DOCSTORE, verbose=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: initialize_agent() missing 1 required positional argument: 'tools'

Build failure on macOS

Trying to build this on a macOS with latest OS: MacBook Pro with M2 Max, 64GB RAM

(base) ➜  tree-of-thoughts git:(main) ✗ python3.10 -m pip install -r requirements.txt
Collecting transformers
  Downloading transformers-4.29.2-py3-none-any.whl (7.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.1/7.1 MB 9.4 MB/s eta 0:00:00
Collecting openai
  Using cached openai-0.27.7-py3-none-any.whl (71 kB)
Collecting guidance
  Downloading guidance-0.0.57-py3-none-any.whl (83 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 83.2/83.2 kB 7.0 MB/s eta 0:00:00
Collecting dotenv
  Using cached dotenv-0.0.5.tar.gz (2.4 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [68 lines of output]
      /opt/homebrew/lib/python3.10/site-packages/setuptools/__init__.py:85: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated. Requirements should be satisfied by a PEP 517 installer. If you are using pip, you can try `pip install --use-pep517`.
        dist.fetch_build_eggs(dist.setup_requires)
        error: subprocess-exited-with-error

        × python setup.py egg_info did not run successfully.
        │ exit code: 1
        ╰─> [16 lines of output]
            Traceback (most recent call last):
              File "<string>", line 2, in <module>
              File "<pip-setuptools-caller>", line 14, in <module>
              File "/private/var/folders/_j/gj9symrn0rb44g0xq2m8xvg40000gn/T/pip-wheel-4iezor5a/distribute_2bfca3ddfc384e5fbf51cea6de0ed998/setuptools/__init__.py", line 2, in <module>
                from setuptools.extension import Extension, Library
              File "/private/var/folders/_j/gj9symrn0rb44g0xq2m8xvg40000gn/T/pip-wheel-4iezor5a/distribute_2bfca3ddfc384e5fbf51cea6de0ed998/setuptools/extension.py", line 5, in <module>
                from setuptools.dist import _get_unpatched
              File "/private/var/folders/_j/gj9symrn0rb44g0xq2m8xvg40000gn/T/pip-wheel-4iezor5a/distribute_2bfca3ddfc384e5fbf51cea6de0ed998/setuptools/dist.py", line 7, in <module>
                from setuptools.command.install import install
              File "/private/var/folders/_j/gj9symrn0rb44g0xq2m8xvg40000gn/T/pip-wheel-4iezor5a/distribute_2bfca3ddfc384e5fbf51cea6de0ed998/setuptools/command/__init__.py", line 8, in <module>
                from setuptools.command import install_scripts
              File "/private/var/folders/_j/gj9symrn0rb44g0xq2m8xvg40000gn/T/pip-wheel-4iezor5a/distribute_2bfca3ddfc384e5fbf51cea6de0ed998/setuptools/command/install_scripts.py", line 3, in <module>
                from pkg_resources import Distribution, PathMetadata, ensure_directory
              File "/private/var/folders/_j/gj9symrn0rb44g0xq2m8xvg40000gn/T/pip-wheel-4iezor5a/distribute_2bfca3ddfc384e5fbf51cea6de0ed998/pkg_resources.py", line 1518, in <module>
                register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider)
            AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader'
            [end of output]

        note: This error originates from a subprocess, and is likely not a problem with pip.
      error: metadata-generation-failed

      × Encountered error while generating package metadata.
      ╰─> See above for output.

      note: This is an issue with the package mentioned above, not pip.
      hint: See above for details.
      Traceback (most recent call last):
        File "/opt/homebrew/lib/python3.10/site-packages/setuptools/installer.py", line 97, in _fetch_build_egg_no_warn
          subprocess.check_call(cmd)
        File "/opt/homebrew/Cellar/[email protected]/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 369, in check_call
          raise CalledProcessError(retcode, cmd)
      subprocess.CalledProcessError: Command '['/opt/homebrew/opt/[email protected]/bin/python3.10', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/var/folders/_j/gj9symrn0rb44g0xq2m8xvg40000gn/T/tmp1mpplxyx', '--quiet', 'distribute']' returned non-zero exit status 1.

      The above exception was the direct cause of the following exception:

      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/private/var/folders/_j/gj9symrn0rb44g0xq2m8xvg40000gn/T/pip-install-ujzjlglt/dotenv_9e7b51670df642a68d9715c678ee7aa0/setup.py", line 13, in <module>
          setup(name='dotenv',
        File "/opt/homebrew/lib/python3.10/site-packages/setuptools/__init__.py", line 107, in setup
          _install_setup_requires(attrs)
        File "/opt/homebrew/lib/python3.10/site-packages/setuptools/__init__.py", line 80, in _install_setup_requires
          _fetch_build_eggs(dist)
        File "/opt/homebrew/lib/python3.10/site-packages/setuptools/__init__.py", line 85, in _fetch_build_eggs
          dist.fetch_build_eggs(dist.setup_requires)
        File "/opt/homebrew/lib/python3.10/site-packages/setuptools/dist.py", line 894, in fetch_build_eggs
          return _fetch_build_eggs(self, requires)
        File "/opt/homebrew/lib/python3.10/site-packages/setuptools/installer.py", line 39, in _fetch_build_eggs
          resolved_dists = pkg_resources.working_set.resolve(
        File "/opt/homebrew/lib/python3.10/site-packages/pkg_resources/__init__.py", line 827, in resolve
          dist = self._resolve_dist(
        File "/opt/homebrew/lib/python3.10/site-packages/pkg_resources/__init__.py", line 863, in _resolve_dist
          dist = best[req.key] = env.best_match(
        File "/opt/homebrew/lib/python3.10/site-packages/pkg_resources/__init__.py", line 1133, in best_match
          return self.obtain(req, installer)
        File "/opt/homebrew/lib/python3.10/site-packages/pkg_resources/__init__.py", line 1145, in obtain
          return installer(requirement)
        File "/opt/homebrew/lib/python3.10/site-packages/setuptools/installer.py", line 99, in _fetch_build_egg_no_warn
          raise DistutilsError(str(e)) from e
      distutils.errors.DistutilsError: Command '['/opt/homebrew/opt/[email protected]/bin/python3.10', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/var/folders/_j/gj9symrn0rb44g0xq2m8xvg40000gn/T/tmp1mpplxyx', '--quiet', 'distribute']' returned non-zero exit status 1.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

[notice] A new release of pip is available: 23.0.1 -> 23.1.2
[notice] To update, run: python3.10 -m pip install --upgrade pip

Answer cuts off

Hi, I've tried several prompts but the answer always cuts off. Any idea why that would be?

I don't think it has to do with the token limit. This was the last response:

['The best approach to start a consulting agency to improve the processes a financial company has is to first conduct a thorough analysis of the current processes. This analysis should include identifying areas of opportunity, potential areas of improvement, and any potential risks associated with']
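A reply that stops mid-sentence like this is the classic symptom of hitting the completion-token budget (`max_tokens` in the OpenAI API): the model stops when the budget runs out, and the response's `finish_reason` is "length" instead of "stop". A small sketch for detecting this on an OpenAI-style response payload (field names follow the completions response schema; the example dict is illustrative):

```python
def is_truncated(response):
    """True if the first choice stopped because it ran out of tokens."""
    return response["choices"][0].get("finish_reason") == "length"

# Example payload shaped like an OpenAI completions response
resp = {
    "choices": [
        {"text": "The best approach is to first conduct a", "finish_reason": "length"}
    ]
}
if is_truncated(resp):
    # re-issue the request with a larger max_tokens, or ask the model to continue
    pass
```

If `finish_reason` really is "stop", the truncation is happening somewhere in the library's own post-processing rather than at the API.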

ModuleNotFoundError: No module named 'tree_of_thoughts'

I encounter that error when attempting to follow the instructions in the README.

cd tree-of-thoughts
python3 -m pip install -r requirements.txt
cd tree_of_thoughts
python3 treeofthoughts.py --problem "design an new transportation system for an all-new city" --search_algorithm="BFS"
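This error usually means Python never installed the `tree_of_thoughts` package and is resolving imports relative to the current directory. Rather than `cd`-ing into the package folder, one likely fix (assuming a standard setup.py layout) is to install the package and run from the repo root:

```shell
cd tree-of-thoughts
python3 -m pip install -e .   # editable install so `import tree_of_thoughts` resolves
python3 tree_of_thoughts/treeofthoughts.py --problem "design an new transportation system for an all-new city" --search_algorithm="BFS"
```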

Getting started with Method 1 & 2

Hi,

Method 1 doesn't output anything:

python3.9 treeofthoughts.py --problem "design an new transportation system for an all-new city" --search_algorithm="BFS"
2023-05-30 15:11:40,157 - INFO - Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
2023-05-30 15:11:40,157 - INFO - NumExpr defaulting to 8 threads.

Method 2 gives this error:
ModuleNotFoundError: No module named 'tree_of_thoughts.huggingModel'

Any ideas?

Does example.py actually work?

I was running some experiments, so I started off with the example provided.
I know it's suggested to use GPT-4, but the example uses gpt-3.5-turbo.
The example consistently gives out wrong and inconsistent solutions, for example:

"solution: ['One possible solution is:\n\n(6 + 3) * 4 / 2 = 24\n\nExplanation: \n- Start with two numbers that add up to 9 (6 and 3)\n- Multiply them by 4 to get 36\n- Divide by 2 to get 18\n- Multiply by 4 again to get 72\n- Divide by 3 to get 24\n\nThis solution uses all four basic arithmetic operations and only requires one set of parentheses. It also avoids the mistakes made in the rejected solutions, such as using non-integers or repeating numbers.']"

This is the solution from one of the trials. The written equation is said to equal 24, but it doesn't; it then gives an explanation that does reach 24 but doesn't match the equation written, and it claims to have avoided mistakes (like repeating numbers) that it is in fact making in the solution provided.
Is it supposed to behave that way?
Am i doing something wrong or is there some issue with the code?

gpt4all integration

so I use GPT4All, which gets imported as

from langchain.llms import GPT4All

I want to know if I can import from ToT as

from tree_of_thoughts import GPT4ALL

since I use models from the GPT4All pipeline

bad experience

It does not work for:

use 20,20,2,15 and basic arithmetic operations (+-*/) to obtain 24

Problems installing in Google Colab

Cloned the repo to my Google Drive and made some modifications to the code to run in Google Colab:

%cd /content/drive/MyDrive/tree-of-thoughts/
!python3 -m pip install -r requirements.txt
%cd tree_of_thoughts
!python3 treeofthoughts.py --problem "design an new transportation system for an all-new city" --search_algorithm="BFS"

Got this error:

/content/drive/MyDrive/tree-of-thoughts
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: transformers in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 1)) (4.29.2)
Requirement already satisfied: openai in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 2)) (0.27.7)
Requirement already satisfied: guidance in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 3)) (0.0.57)
Collecting dotenv (from -r requirements.txt (line 4))
Using cached dotenv-0.0.5.tar.gz (2.4 kB)
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Preparing metadata (setup.py) ... error
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
/content/drive/MyDrive/tree-of-thoughts/tree_of_thoughts
Traceback (most recent call last):
File "/content/drive/MyDrive/tree-of-thoughts/tree_of_thoughts/treeofthoughts.py", line 6, in
from tree_of_thoughts.openaiModels import OptimizedOpenAILanguageModel
ModuleNotFoundError: No module named 'tree_of_thoughts'

A lot of errors

When I execute this code:

# !cat /content/tree-of-thoughts/huggingfaceExample.py
from tree_of_thoughts.treeofthoughts import OpenAILanguageModel, GuidanceOpenAILanguageModel, TreeofThoughts, OptimizedOpenAILanguageModel, OptimizedTreeofThoughts, HuggingLanguageModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
model_tokenizer = "gpt2tokenizer"
model = HuggingLanguageModel(model_name, model_tokenizer)

class HuggingLanguageModel:
    def __init__(self, model_name):
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)

    def generate_text(self, prompt, max_length=50):
        inputs = self.tokenizer(prompt, return_tensors="pt")
        outputs = self.model.generate(inputs["input_ids"], max_length=max_length)
        generated_text = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
        return generated_text

# Initialize the HuggingLanguageModel with the GPT-2 model
model_name = "gpt2"
model = HuggingLanguageModel(model_name,
                             model_tokenizer="gpt2",
                             verbose=True)

# choose search algorithm ('BFS' or 'DFS')
search_algorithm = "DFS"  # "BFS"

# cot or propose
strategy = "cot"

# value or vote
evaluation_strategy = "value"

gpt2_model = HuggingLanguageModel(model_name)

tree_of_thoughts = OptimizedTreeofThoughts(model, search_algorithm)

input_problem = "use 4 numbers and basic arithmetic operations (+-*/) to obtain 24"
k = 5
T = 3
b = 5
vth = 0.5
timeout = 10
confidence = 0.8  # model is confident on performance
max_iterations = 40  # tree branch nodes
convergence_threshold = 0.01
convergence_count = 5

solution = tree_of_thoughts.solve(input_problem, k, T, b, vth, timeout, confidence_threshold=confidence, max_iterations=max_iterations, convergence_threshold=convergence_threshold, convergence_count=convergence_count)

# use the solution in your production environment
print(f"solution: {solution}")

I get a lot of errors:

"
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 189, but max_length is set to 100. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 189, but max_length is set to 100. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 189, but max_length is set to 100. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 189, but max_length is set to 100. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 189, but max_length is set to 100. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 189, but max_length is set to 100. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 189, but max_length is set to 100. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 189, but max_length is set to 100. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 189, but max_length is set to 100. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 189, but max_length is set to 100. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Generating thoughts for state: u s e 4 n u m b e r s a n d b a s i c a r i t h m e t i c o p e r a t i o n s ( + - * / ) t o o b t a i n 2 4
Error generating thoughts for state: u s e 4 n u m b e r s a n d b a s i c a r i t h m e t i c o p e r a t i o n s ( + - * / ) t o o b t a i n 2 4
Error: num_return_sequences has to be 1 when doing greedy search, but is 5.
result: None
Generating thoughts for state: u s e 4 n u m b e r s a n d b a s i c a r i t h m e t i c o p e r a t i o n s ( + - * / ) t o o b t a i n 2 4
Error generating thoughts for state: u s e 4 n u m b e r s a n d b a s i c a r i t h m e t i c o p e r a t i o n s ( + - * / ) t o o b t a i n 2 4
Error: num_return_sequences has to be 1 when doing greedy search, but is 5.
result: None

...

Generating thoughts for state: u s e 4 n u m b e r s a n d b a s i c a r i t h m e t i c o p e r a t i o n s ( + - * / ) t o o b t a i n 2 4
Error generating thoughts for state: u s e 4 n u m b e r s a n d b a s i c a r i t h m e t i c o p e r a t i o n s ( + - * / ) t o o b t a i n 2 4
Error: num_return_sequences has to be 1 when doing greedy search, but is 5.
result: None
Saving the current tree and metrics.
solution: None


What could be the problem and how can it be solved?
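A hedged reading of that log: the `Error: num_return_sequences has to be 1 when doing greedy search, but is 5.` message is HuggingFace `transformers` refusing a `generate()` call that asks for 5 return sequences without sampling or beam search, and the spaced-out state (`u s e 4 ...`) suggests the prompt string is being iterated character by character somewhere in the state handling. A minimal sketch of the library's constraint (a stand-in check, not the repo's code):

```python
def validate_generation_kwargs(num_return_sequences=1, do_sample=False, num_beams=1):
    # mirrors the constraint transformers enforces inside generate():
    # greedy decoding can only produce one sequence, so multiple return
    # sequences require sampling or at least that many beams
    if not do_sample and num_beams == 1 and num_return_sequences > 1:
        raise ValueError(
            "num_return_sequences has to be 1 when doing greedy search, "
            f"but is {num_return_sequences}."
        )
    if num_beams > 1 and num_return_sequences > num_beams:
        raise ValueError("num_return_sequences has to be smaller or equal to num_beams.")
    return True
```

Passing `do_sample=True` (or `num_beams >= num_return_sequences`) to the underlying `generate()` call should clear the error; the character splitting would need a separate fix in how states are built from the initial prompt.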

This is not the tree-of-thoughts framework from the paper.

I want you to honestly consider the damage this fake repo has done to those, like me, who mistook it for the real one and were left hopelessly confused as to how otherwise brilliant researchers could have produced such a negligently idiotic implementation of their own framework. Surely, for every star you got there are dozens more who thought the same, most of whom decided to give up on the entire framework completely, either because they were horrified by the code upon seeing it or because they made the mistake of trying to make it work. Not one person who read the paper and mistook your fake repo for the paper's will come away better informed.

Imagine all those researchers excited to test the real code and use it in their own research, only to dismiss the authors and the framework as the makers of this garbage you had ChatGPT autogenerate. All those citations and breakthroughs taken away just so an attention-seeking troll could get a few stars and forks on GitHub. Unfathomable.

Receiving invalid_api_key when using api key that I've confirmed to be valid

This code results in an error (invalid_api_key). I've confirmed that my API key is valid by successfully using it in some Swift code after this error occurred.

```python
model = OptimizedOpenAILanguageModel(api_key='omitted', api_model='gpt-4')

search_algorithm = "BFS"
strategy = "cot"
evaluation_strategy = "value"

tree_of_thoughts = TreeofThoughts(model, search_algorithm)

input_problem = "use 4 numbers and basic arithmetic operations (+-*/) to obtain 24"

num_thoughts = 2
max_steps = 3
max_states = 5
value_threshold = 0.5

solution = tree_of_thoughts.solve(
    input_problem,
    num_thoughts=num_thoughts,
    max_steps=max_steps,
    max_states=max_states,
    value_threshold=value_threshold,
)

print(f"Final solution: {solution}")
```

ModuleNotFoundError: No module named 'treeofthoughts'

Traceback (most recent call last):
  File "/home/pimania/Dev/AGI/treeOfThoughtsTest/exmaple.py", line 1, in <module>
    from tree_of_thoughts import (
  File "/home/pimania/.local/lib/python3.10/site-packages/tree_of_thoughts/__init__.py", line 1, in <module>
    from treeofthoughts import TreeofThoughts, CustomLanguageModel
ModuleNotFoundError: No module named 'treeofthoughts'

The above error is what I receive when attempting to run the code in README.md, after having already run pip install tree-of-thoughts.

It seems the package's __init__.py should use a package-qualified import (from tree_of_thoughts.treeofthoughts import ...) instead of the bare treeofthoughts?

ModuleNotFoundError: No module named 'experiments'

Hi! I get this error when trying to run the code:

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\tree_of_thoughts\__init__.py:2
from experiments.extremely_experimental.prompting.LangChain_model import LangchainCustomLanguageModel

ModuleNotFoundError: No module named 'experiments'

Guidance support

Guidance (https://github.com/microsoft/guidance) is a library that lets you control an LLM's output by using a DSL to guide the model toward the desired output or format. Here's how it could look, using an example from Guidance's README paired with hypothetical ToT support:

````python
import guidance
from tree_of_thoughts import OpenAILanguageModel
from tree_of_thoughts import MonteCarloTreeofThoughts

openai_api_model = "gpt-3.5-turbo"

guidance.llm = guidance.llms.OpenAI(openai_api_model)
tot_model = OpenAILanguageModel(api_key='api key', api_model=openai_api_model)
tree_of_thoughts = MonteCarloTreeofThoughts(tot_model)

# we can pre-define valid option sets
valid_weapons = ["sword", "axe", "mace", "spear", "bow", "crossbow"]

# define the prompt
program = guidance("""The following is a character profile for an RPG game in JSON format.
```json
{
    "description": "{{description}}",
    "name": "{{gen 'name'}}",
    "age": {{gen 'age' pattern='[0-9]+' stop=','}},
    "armor": "{{#select 'armor'}}leather{{or}}chainmail{{or}}plate{{/select}}",
    "weapon": "{{select 'weapon' options=valid_weapons}}",
    "class": "{{gen 'class'}}",
    "mantra": "{{gen 'mantra'}}",
    "strength": {{gen 'strength' pattern='[0-9]+' stop=','}},
    "items": [{{#geneach 'items' num_iterations=3}}
        "{{gen 'this'}}",{{/geneach}}
    ]
}```""")

# execute the prompt
response = tree_of_thoughts.solve(
    program(description="A quick and nimble fighter.", valid_weapons=valid_weapons)
)
````

https://github.com/microsoft/guidance/raw/main/docs/figures/json_animation.gif

That's not the most elegant API, but it gives you a general idea of how these two libraries could significantly increase the quality and reproducibility of outputs. Guidance can be used for all kinds of prompts; please look at their documentation, as it should explain its uses and benefits much better than I could hope to do.

tot_dfs() got an unexpected keyword argument 'confidence_threshold'

Getting a few errors like this; should I fix it?

tree-of-thoughts\tree_of_thoughts\treeofthoughts.py", line 45, in solve
result = self.tot_dfs(initial_prompt, num_thoughts, max_steps, value_threshold,
TypeError: tot_dfs() got an unexpected keyword argument 'confidence_threshold'
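A defensive workaround until the caller and callee signatures are reconciled: filter keyword arguments against the callee's signature before dispatching. This is a sketch with a stand-in `tot_dfs`, not the repo's actual method:

```python
import inspect

def tot_dfs(initial_prompt, num_thoughts, max_steps, value_threshold):
    # stand-in for the real method; note there is no confidence_threshold here
    return (initial_prompt, num_thoughts, max_steps, value_threshold)

def call_with_supported_kwargs(fn, *args, **kwargs):
    # drop any keyword argument the target function does not declare,
    # instead of crashing with "unexpected keyword argument"
    allowed = set(inspect.signature(fn).parameters)
    return fn(*args, **{k: v for k, v in kwargs.items() if k in allowed})

result = call_with_supported_kwargs(
    tot_dfs, "use 4 numbers ... to obtain 24", 2, 3,
    value_threshold=0.5,
    confidence_threshold=0.8,  # silently ignored rather than raising TypeError
)
```

The cleaner fix is of course to remove the stale `confidence_threshold` argument from the `solve()` call site, but the filter above keeps examples running across versions.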

Support for ChatGPT with plugins

Thank you for implementing the ToT paper.
I have a question, though: is there any way this can be used with GPT-4 running in the web ChatGPT with plugins?
If not, is there a roadmap or any thoughts on how to achieve this?
Thanks

print_tree throws errors at final solution

I attempted to use this tonight (May 27th, 1:30 EST). I just ran the standard example as documented; it ran for a while and appears to get close to the end, as I see a logged final solution, but it looks like only the first element of the list:

(python v3.10.11)

python3 treeofthoughts.py --problem "design an new transportation system for an all-new city" --search_algorithm="BFS"

I got the following errors:

Traceback (most recent call last):
  File "/Users/seandearnaley/Documents/GitHub/tree-of-thoughts/tree_of_thoughts/treeofthoughts.py", line 764, in <module>
    trees = optimized_tree_of_thoughts.print_tree(final_solution)
  File "/Users/seandearnaley/Documents/GitHub/tree-of-thoughts/tree_of_thoughts/treeofthoughts.py", line 679, in print_tree
    node = self.tree["nodes"][x]
TypeError: list indices must be integers or slices, not list

openai.logs.zip
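One hedged reading of that traceback: `self.tree["nodes"]` is sometimes a list while `print_tree` indexes it with a list-valued state. A sketch of a tolerant lookup (hypothetical helper, assuming the tree layout implied by the traceback):

```python
def lookup_node(tree, key):
    # tree["nodes"] may be a dict keyed by state text or a plain list;
    # a single-element list key (as in the traceback) is unwrapped first
    if isinstance(key, list) and len(key) == 1:
        key = key[0]
    nodes = tree["nodes"]
    if isinstance(nodes, dict):
        return nodes.get(key)
    return nodes[key] if isinstance(key, int) else None
```

The real fix would be to make the search and the printer agree on a single `nodes` representation, but a guard like this at least lets the final solution print.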

Evaluations from Paper

Action Item: Create a list of example evaluations and measure performance with the following from the paper https://arxiv.org/pdf/2305.10601.pdf

some potential eval metrics:

Accuracy - Measure how often the model correctly predicts the output.

F1 Score - Measure the trade-off between precision and recall.

Precision - Measure the number of correctly predicted positive results out of the total number of positive predictions.

Recall - Measure the number of correctly predicted positive results out of the total number of actual positive instances.

Mean Average Precision (mAP) - Measure how well the model performs across multiple classes.

Receiver Operating Characteristic (ROC) - Plot the true positive rate against the false positive rate.

Area Under Curve (AUC) - Summarize the ROC curve as a single performance score.

Mean Squared Error (MSE) - Measure the average squared difference between the predicted and actual values.

Mean Absolute Error (MAE) - Measure the average absolute difference between the predicted and actual values.
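For the classification-style metrics above, a small self-contained scorer over paired 0/1 labels can serve as a starting point (plain Python, no sklearn dependency; a sketch, not the repo's eval harness):

```python
def binary_metrics(y_true, y_pred):
    # accuracy, precision, recall, and F1 from paired 0/1 labels
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

m = binary_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

For the Game of 24 task in the paper, "positive" would mean the produced equation actually evaluates to 24.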

Add integration to guidance library

Thanks a lot for implementing this ToT library, nice work!

I think it could benefit a lot from using Microsoft Guidance (https://github.com/microsoft/guidance), especially with smaller local models that have a hard time following instructions exactly.

Is this something you would be interested in?

Sadly, I’m low on time this week (and the next) to help with the implementation, but I would eventually look into making this integration (if you have no interest in making it yourself).

git clone or folder download results in errors.

When I git clone I get this:

remote: Enumerating objects: 1052, done.
remote: Counting objects: 100% (344/344), done.
remote: Compressing objects: 100% (130/130), done.
remote: Total 1052 (delta 251), reused 238 (delta 212), pack-reused 708
Receiving objects: 100% (1052/1052), 5.69 MiB | 9.82 MiB/s, done.
Resolving deltas: 100% (633/633), done.
error: invalid path 'logs/tree_of_thoughts_output_


Input: 2 8 8 14
Possible next steps:
2 + 8 = 10 (left: 8 10 14)
8 / 2 = 4 (left: 4 8 14)
14 + 2 = 16 (left: 8 8 16)
2 * 8 = 16 (left: 8 14 16)
8 - 2 = 6 (left: 6 8 14)
14 - 8 = 6 (left: 2 6 8)
14 /  2 = 7 (left: 7 8 8)
14 - 2 = 12 (left: 8 8 12)
Input: use 4 numbers and basic arithmetic operations (+-*/) to obtain 24 in 1 equation
Possible next steps:

fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'

Besides, the zipped folder can't be opened.

git clone or folder download results in error: invalid path 'logs/tree_of_thoughts_output_. Please don't close the issue until a fix is implemented, as happened with #63

On multiple machines (Windows 11, Anaconda), when I git clone I get this (not happening with other repos):

remote: Enumerating objects: 1052, done.
remote: Counting objects: 100% (344/344), done.
remote: Compressing objects: 100% (130/130), done.
remote: Total 1052 (delta 251), reused 238 (delta 212), pack-reused 708
Receiving objects: 100% (1052/1052), 5.69 MiB | 9.82 MiB/s, done.
Resolving deltas: 100% (633/633), done.
error: invalid path 'logs/tree_of_thoughts_output_


Input: 2 8 8 14
Possible next steps:
2 + 8 = 10 (left: 8 10 14)
8 / 2 = 4 (left: 4 8 14)
14 + 2 = 16 (left: 8 8 16)
2 * 8 = 16 (left: 8 14 16)
8 - 2 = 6 (left: 6 8 14)
14 - 8 = 6 (left: 2 6 8)
14 /  2 = 7 (left: 7 8 8)
14 - 2 = 12 (left: 8 8 12)
Input: use 4 numbers and basic arithmetic operations (+-*/) to obtain 24 in 1 equation
Possible next steps:

fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'

Besides, the downloaded zip folder can't be opened (on multiple machines); this doesn't happen with other repos.

UnboundLocalError: local variable 'prompt'

Getting this error from the example.py

\tree-of-thoughts\tree_of_thoughts\treeofthoughts.py", line 144, in evaluate_states
prompt = f"Given the current state of reasoning: '{state_text}', evaluate its value as a float between 0 and 1, on the probability of this state of reasoning achieveing {prompt} and NOTHING ELSE:"
UnboundLocalError: local variable 'prompt' referenced before assignment
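The traceback indicates the f-string reads a local named `prompt` on the very line that assigns it, so the goal string never reaches `evaluate_states`. A sketch of the fix, assuming the goal is passed in as a parameter (the `initial_prompt` name is hypothetical; the real method signature may differ):

```python
def evaluate_states(states, initial_prompt):
    # the goal comes in as a parameter; the bug was reading a local also
    # named `prompt` inside the f-string that defines it
    evaluations = {}
    for state_text in states:
        evaluations[state_text] = (
            f"Given the current state of reasoning: '{state_text}', evaluate its "
            f"value as a float between 0 and 1, on the probability of this state "
            f"of reasoning achieving {initial_prompt} and NOTHING ELSE:"
        )
    return evaluations
```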

Any ideas on how to get around OpenAI's RPM limit?

Hi,

I'm just testing this algorithm and I just created an OpenAI free trial account. I got this runtime error message:

"Rate limit reached for default-gpt-3.5-turbo in organization org-R9Srq5XHNmyYxqITd0OZfUGu on requests per min. Limit: 3 / min."
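Until the trial-tier limit is raised, the usual remedy is retrying with exponential backoff. A self-contained sketch with a stand-in exception and a deliberately flaky function (real code would catch `openai.error.RateLimitError` instead):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for openai.error.RateLimitError."""

def with_backoff(fn, max_retries=5, base_delay=1.0):
    # retry fn with exponential backoff plus jitter whenever the rate limit hits
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)

# deliberately flaky stand-in: fails twice, then succeeds
calls = {"n": 0}
def flaky_completion():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("Rate limit reached")
    return "ok"

print(with_backoff(flaky_completion, base_delay=0.01))  # prints: ok
```

With a 3 requests/min cap, even backoff will be slow; a paid tier (or a smaller `num_thoughts`) is the practical fix for ToT's many calls per step.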

max() arg is an empty sequence

Not getting past Step 1; it fails with the following error:

`2023-05-29 09:11:32,869 - INFO - Step: 2, S0_t: set(), Vt: {}, St: [], S0: set()

Saving tree

2023-05-29 09:11:32,870 - INFO - Step: 3, S0_t: set(), Vt: {}, St: [], S0: set()

2023-05-29 09:11:32,870 - ERROR - Error: max() arg is an empty sequence

2023-05-29 09:11:32,870 - INFO - Saving the current tree and metrics.`
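`max()` raising on an empty sequence means every candidate state was pruned before selection. One defensive sketch is to give `max()` a `default` so an empty step degrades gracefully instead of crashing (hypothetical helper, not the repo's code):

```python
def best_state(scored_states):
    # returns the highest-valued state, or None when pruning left no
    # candidates, instead of letting max() raise "arg is an empty sequence"
    return max(scored_states, key=scored_states.get, default=None)
```

The underlying cause here is that thought generation produced nothing (S0_t is an empty set from step 1 onward), so the generation call itself is what needs debugging; the guard only makes the failure legible.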

MIT or Apache ?

Great stuff, thanks. I noticed the Apache 2.0 license is included as a file, while setup.py says MIT. Just thought I'd point out the inconsistency.

GPT3.5 and GPT4 Config

Allow user to specify what model they want

model = optimizedlanguage(model="gpt4", api_key="ssk")

PrivateGPT integration

Hi @kyegomez, I wanted to ask whether you can make a repository so that I can run an LLM locally, like privateGPT, and also use the Tree of Thoughts prompting technique, or whether there is any way I can implement ToT in privateGPT itself. Thank you.

def solve arguments don't match example.py solve call

In example.py:
solution = tree_of_thoughts.solve(input_problem, k, T, b, vth, timeout, confidence_threshold=confidence, max_iterations=max_iterations, convergence_threshold=convergence_threshold, convergence_count=convergence_count)

but in treeofthoughts.py:

class TreeofThoughts
...
def solve(self, x, k, T, b, vth, timeout=None)

TypeError: TreeofThoughts.solve() got an unexpected keyword argument 'T'

C:\Users\ivanr\tree-of-thoughts\tree_of_thoughts>python3 treeofthoughts.py --problem "design an new transportation system for an all-new city" --search_algorithm="BFS"
Namespace(problem='design an new transportation system for an all-new city', version=1, search_algorithm='BFS', k=3, T=10, b=5, vth=0.4, timeout=10, confidence=0.8, max_iterations=40, convergence_threshold=0.01, convergence_count=5)
Using api_model text-davinci-003
Traceback (most recent call last):
File "C:\Users\ivanr\tree-of-thoughts\tree_of_thoughts\treeofthoughts.py", line 630, in
best_state = optimized_tree_of_thoughts.solve(args.problem, k=args.k, T=args.T, b=args.b, vth=args.vth)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: TreeofThoughts.solve() got an unexpected keyword argument 'k'

No module named 'transformers'

ModuleNotFoundError                       Traceback (most recent call last)
Input In [1], in <cell line: 1>()
----> 1 from tree_of_thoughts.treeofthoughts import OpenAILanguageModel, CustomLanguageModel, TreeofThoughts, OptimizedOpenAILanguageModel, OptimizedTreeofThoughts, HuggingLanguageModel
      3 use_v2 = False
      5 api_key=""

File ~/opt/anaconda3/envs/tensorflow/lib/python3.9/site-packages/tree_of_thoughts/__init__.py:1, in <module>
----> 1 from tree_of_thoughts.treeofthoughts import TreeofThoughts, CustomLanguageModel, OptimizedOpenAILanguageModel, OptimizedTreeofThoughts, HuggingLanguageModel, HFPipelineModel
      2 from experiements.extremely_experimental.prompting.LangChain_model import LangchainCustomLanguageModel

File ~/opt/anaconda3/envs/tensorflow/lib/python3.9/site-packages/tree_of_thoughts/treeofthoughts.py:9, in <module>
      7 import guidance
      8 import time
----> 9 from transformers import AutoModelForCausalLM, AutoTokenizer
     10 from transformers import pipeline
     11 import heapq

ModuleNotFoundError: No module named 'transformers'

take down paper figure

Hi, since I made the paper figure, may I kindly ask you to take it down from the README? Thanks

Claude?

Good day, hard workers!

I wonder if there is a way to use it with Claude AI?
It seems it can work with OpenAI, but can it work with Anthropic models?

Thank you for your answer,
I


Mismatch in number of arguments for TreeofThoughts.tot_bfs()

Inside solve(), tot_bfs() is called with the arguments

initial_prompt, num_thoughts, max_steps, max_states, value_threshold, pruning_threshold

but value_threshold does not appear in the method definition. This mismatch causes a crash when running the example.

Errors come one by one when deploying

The installation guide in the README doesn't work for me and needs to be updated. (Mac, Python 3.10.9.)

git clone https://github.com/kyegomez/tree-of-thoughts
cd tree-of-thoughts
cd tree_of_thoughts
python3 treeofthoughts.py --problem "design an new transportation system for an all-new city" --search_algorithm="BFS"

error info:

(1) Error 1 -- [solved]

Updated:

  • Linux environment works; maybe something is wrong with my Mac.
  • No need to execute pip install .

ModuleNotFoundError: No module named 'tree_of_thoughts'

Then I installed from the source code:

pip install .

(2) Error 2 -- [solved]

The error above disappeared, but another one popped up:

ModuleNotFoundError: No module named 'dotenv'

I tried to install dotenv

TQV9MF4NXR:tree-of-thoughts bytedance$ pip install dotenv
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting dotenv
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/e2/46/3754073706e31670eed18bfa8a879305b56a471db15f20523c2427b10078/dotenv-0.0.5.tar.gz (2.4 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... error
  error: subprocess-exited-with-error
  
  × pip subprocess to install backend dependencies did not run successfully.
  │ exit code: 1
  ╰─> [30 lines of output]
      Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
      Collecting distribute
        Using cached https://pypi.tuna.tsinghua.edu.cn/packages/5f/ad/1fde06877a8d7d5c9b60eff7de2d452f639916ae1d48f0b8f97bf97e570a/distribute-0.7.3.zip (145 kB)
        Installing build dependencies: started
        Installing build dependencies: finished with status 'done'
        Getting requirements to build wheel: started
        Getting requirements to build wheel: finished with status 'done'
        Preparing metadata (pyproject.toml): started
        Preparing metadata (pyproject.toml): finished with status 'error'
        error: subprocess-exited-with-error
      
        × Preparing metadata (pyproject.toml) did not run successfully.
        │ exit code: 1
        ╰─> [6 lines of output]
            usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
               or: setup.py --help [cmd1 cmd2 ...]
               or: setup.py --help-commands
               or: setup.py cmd --help
      
            error: invalid command 'dist_info'
            [end of output]
      
        note: This error originates from a subprocess, and is likely not a problem with pip.
      error: metadata-generation-failed
      
      × Encountered error while generating package metadata.
      ╰─> See above for output.
      
      note: This is an issue with the package mentioned above, not pip.
      hint: See above for details.
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install backend dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

pip install dotenv failed, so I tried another package, py-dotenv, and then updated the file guidanceModels.py:

#from dotenv import load_dotenv
#load_dotenv()

from py_dotenv import read_dotenv

#dotenv_path = os.path.join(os.path.dirname(__file__), '.env')
dotenv_path = "/root/tree-of-thoughts/.env"
read_dotenv(dotenv_path)

Finally, it works.
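For reference, the PyPI package that provides `from dotenv import load_dotenv` is `python-dotenv` (`pip install python-dotenv`); the unrelated `dotenv` package is what fails to build above. If installing it is not an option, a minimal stand-in loader can be sketched (an approximation of `load_dotenv`, not a drop-in replacement):

```python
import os

def load_env_file(path=".env"):
    # minimal approximation of python-dotenv's load_dotenv(): parse KEY=VALUE
    # lines, skip comments and blanks, and export without overwriting
    # variables that are already set
    if not os.path.exists(path):
        return False
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    return True
```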

Error 3 -- [solved]

Run command:

python tree_of_thoughts/treeofthoughts.py --problem "design an new transportation system for an all-new city" --search_algorithm="BFS"

New error:

(base) mlxlabgjosxvkh644ba1db-20230428103717-lefyc9-master:tree-of-thoughts# python tree_of_thoughts/treeofthoughts.py --problem "design an new transportation system for an all-new city" --search_algorithm="BFS"
2023-05-29 18:36:03.549785: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-05-29 18:36:03.616323: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Namespace(problem='design an new transportation system for an all-new city', version=1, search_algorithm='BFS', k=3, T=10, b=5, vth=0.4, timeout=10, confidence=0.8, max_iterations=40, convergence_threshold=0.01, convergence_count=5)
Using api_model text-davinci-003
Traceback (most recent call last):
  File "/root/tree-of-thoughts/tree_of_thoughts/treeofthoughts.py", line 231, in <module>
    best_state = optimized_tree_of_thoughts.solve(args.problem, k=args.k, T=args.T, b=args.b, vth=args.vth)
TypeError: TreeofThoughts.solve() got an unexpected keyword argument 'k'

TypeError: TreeofThoughts.solve() got an unexpected keyword argument 'k'

Why?

  • The method solve in superclass TreeofThoughts is overloaded by the subclass OptimizedTreeofThoughts.
  • So, the argument 'k' was not identified.

How to solve?

  • Modify this code line 228 in file treeofthoughts.py
    #optimized_tree_of_thoughts = TreeofThoughts(model, search_algorithm=args.search_algorithm)
    optimized_tree_of_thoughts = OptimizedTreeofThoughts(model, search_algorithm=args.search_algorithm)

Summary

A lot of errors to overcome by myself... The README file is outdated.
