Tree of Thoughts (github.com)
176 points by kevinslin 4d | 74 comments

Comments

@rahimnathwani 4d
Here are the prompt templates from the main code:

  prompt = f"Given the current state of reasoning: '{state_text}', pessimitically evaluate its value as a float between 0 and 1 based on it's potential to achieve {inital_prompt}"

  prompt = f"Write down your observations in format 'Observation:xxxx', then write down your thoughts in format 'Thoughts:xxxx Given the current state of reasoning: '{state_text}', generate {k} coherent solutions to achieve {state_text}"

  prompt = f"Given the current state of reasoning: '{state_text}', pessimistically evaluate its value as a float between 0 and 1 based on its potential to achieve {initial_prompt}"

  self.ReAct_prompt = "Write down your observations in format 'Observation:xxxx', then write down your thoughts in format 'Thoughts:xxxx'."

  prompt = f"Given the current state of reasoning: '{state_text}', generate {1} coherent thoughts to achieve the reasoning process: {state_text}"

  prompt = f"Given the current state of reasoning: '{state_text}', evaluate its value as a float between 0 and 1, become very pessimistic think of potential adverse risks on the probability of this state of reasoning achieveing {inital_prompt} and DO NOT RESPOND WITH ANYTHING ELSE: OTHER THAN AN FLOAT"

  prompt = f"Given the following states of reasoning, vote for the best state utilizing an scalar value 1-10:\n{states_text}\n\nVote, on the probability of this state of reasoning achieveing {inital_prompt} and become very pessimistic very NOTHING ELSE"

  self.ReAct_prompt = '''{{#assistant~}}
    {{gen 'Observation' temperature=0.5 max_tokens=50}}
    {{~/assistant}}'''

There are also some system prompts: https://github.com/kyegomez/tree-of-thoughts/blob/732791710e...
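
For reference, here's a rough sketch (hypothetical, not the repo's actual API) of how one of these value prompts could be called and parsed into a float, with call_llm standing in for whatever completion function you bring:

  def call_llm(prompt: str) -> str:
      # Placeholder: swap in your own model or API call here.
      raise NotImplementedError

  def evaluate_state(state_text: str, initial_prompt: str) -> float:
      # Same shape as the quoted value prompt: ask for a pessimistic score in [0, 1].
      prompt = (
          f"Given the current state of reasoning: '{state_text}', "
          f"pessimistically evaluate its value as a float between 0 and 1 "
          f"based on its potential to achieve {initial_prompt}"
      )
      reply = call_llm(prompt)
      try:
          # Clamp in case the model drifts outside [0, 1].
          return min(1.0, max(0.0, float(reply.strip())))
      except ValueError:
          # Models often ignore "respond with only a float", so fail soft.
          return 0.0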
@m3kw9 4d
It’d be nice to include a few example uses and their outputs vs other prompting methods.
@tyropita 4d
Documentation looks really neat and in-depth, always appreciated. Looks like you’re missing a .gitignore file. Folders like __pycache__ don’t need to be checked in.
@doctoboggan 4d
This seems really interesting. I am glad many of these tools built up around LLMs allow you to bring your own model rather than rely on OpenAI.
@peter_l_downs 4d
The author appears motivated by some... interesting... beliefs. Hard to tell if this entire thing is a joke or not.

https://github.com/kyegomez/EXA#for-humanity

https://blog.apac.ai/liberation-awaits

EDIT: the author seems to be releasing poor implementations of recent papers in an attempt to drive attention towards an AI-related death cult.

@xg15 4d
> This is an plug in and play version, connect your own models and enjoy superintelligence!

Share this repository by clicking on the following buttons! <smiley face>

2023 in a nutshell.

@GreedClarifies 4d
This path feels correct to me. It feels like what we do as humans and seems like a reasonable way to start to construct "mode 2" thinking.

IDK if our current models have enough of "mode 1" to power this system. It's also plausible that our current "mode 1" systems are more than powerful enough and that inference speed (and thus the size/depth of the tree that can be explored) will be the most important factor.

I hope that the major players are looking at this and trying it out at scale (I know DeepMind wrote the original paper, but their benchmarks were quite unimpressive). It's plausible that we will have an AlphaGo moment with this scheme.

@startupsfail 4d
Checking whether GPT could be improved by running it multiple times is a good idea.

The answer to that is yes, but it is costly and slow, there is node collapse, it impacts context length, and it injects biases.
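
To make the cost point concrete, here's a minimal sketch of the generic sample-and-score loop being discussed (not this repo's code; the names are made up). The number of LLM calls grows roughly as proposals per state x beam width x depth, which is where the cost and latency come from:

  from typing import Callable, List

  def beam_search(root: str,
                  propose: Callable[[str], List[str]],  # one LLM call -> k candidate thoughts
                  score: Callable[[str], float],        # one LLM call -> value in [0, 1]
                  beam_width: int = 3,
                  depth: int = 3) -> str:
      frontier = [root]
      for _ in range(depth):
          candidates = [t for state in frontier for t in propose(state)]
          candidates.sort(key=score, reverse=True)
          # If the scorer rates everything similarly, the beam can collapse
          # onto near-duplicate states ("node collapse").
          frontier = candidates[:beam_width]
      return frontier[0]

Every level multiplies the number of calls, so even a shallow tree ends up many times the cost of a single completion.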

@ChrisAlexiuk 4d
https://youtu.be/bjnTy2TdmYw

I went through this in a video using the paper's official code - and it worked fairly well!

Definitely a great step forward in terms of reasoning tasks - even if it is an expensive step.

@raydiatian 4d
> This implementation of Tree of Thoughts is brought to you by Agora, Agora advances Humanity with open source SOTA Multi-Modality AI research! We plan on combating Humanity's grandest root problems like food insecurity, planetary insecurity, and disease, and hopefully death itself.

Wow. Lick, don’t sniff, the fresh paint.

@rahimnathwani 4d
Another use of ideas from the same paper, but this time to produce lesson plans for an AI tutor:

https://github.com/JushBJJ/Mr.-Ranedeer-AI-Tutor/tree/testin...

@Jeff_Brown 3d
A claim like "improves reasoning by 70%" is too specific to be made without either a citation or a definition.
@emmanueloga_ 4d
Similar in concept to the MAGI supercomputer? :-p [1]

1: https://aminoapps.com/c/neon-genesis-evangelion/page/item/ma...

@flakiness 3d
Note that the repo author != the paper author.

The research itself [1] seems legit. The paper author also wrote a paper called ReAct [2], which is one of the core components of the LangChain framework.

[1] https://arxiv.org/abs/2305.10601
[2] https://arxiv.org/abs/2210.03629