Tree of Thoughts
Comments
prompt = f"Given the current state of reasoning: '{state_text}', pessimitically evaluate its value as a float between 0 and 1 based on it's potential to achieve {inital_prompt}"
prompt = f"Write down your observations in format 'Observation:xxxx', then write down your thoughts in format 'Thoughts:xxxx Given the current state of reasoning: '{state_text}', generate {k} coherent solutions to achieve {state_text}"
prompt = f"Given the current state of reasoning: '{state_text}', pessimistically evaluate its value as a float between 0 and 1 based on its potential to achieve {initial_prompt}"
self.ReAct_prompt = "Write down your observations in format 'Observation:xxxx', then write down your thoughts in format 'Thoughts:xxxx'."
prompt = f"Given the current state of reasoning: '{state_text}', generate {1} coherent thoughts to achieve the reasoning process: {state_text}"
prompt = f"Given the current state of reasoning: '{state_text}', evaluate its value as a float between 0 and 1, become very pessimistic think of potential adverse risks on the probability of this state of reasoning achieveing {inital_prompt} and DO NOT RESPOND WITH ANYTHING ELSE: OTHER THAN AN FLOAT"
prompt = f"Given the following states of reasoning, vote for the best state utilizing an scalar value 1-10:\n{states_text}\n\nVote, on the probability of this state of reasoning achieveing {inital_prompt} and become very pessimistic very NOTHING ELSE"
self.ReAct_prompt = '''{{#assistant~}}
{{gen 'Observation' temperature=0.5 max_tokens=50}}
{{~/assistant}}'''
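For context, here is a minimal sketch of the Tree-of-Thoughts search loop that prompts like these plug into. Everything here is hypothetical: complete() stands in for whatever LLM call the repo wraps, and the prompt wording is paraphrased from the snippets above rather than copied from the repo.

def complete(prompt: str) -> str:
    """Placeholder for a single LLM completion call."""
    raise NotImplementedError

def generate_thoughts(state_text: str, initial_prompt: str, k: int) -> list[str]:
    prompt = (
        f"Given the current state of reasoning: '{state_text}', "
        f"generate a coherent next thought to achieve: {initial_prompt}"
    )
    # One call per candidate keeps parsing trivial; the repo batches instead.
    return [complete(prompt) for _ in range(k)]

def evaluate_state(state_text: str, initial_prompt: str) -> float:
    prompt = (
        f"Given the current state of reasoning: '{state_text}', pessimistically "
        f"evaluate its value as a float between 0 and 1 based on its potential "
        f"to achieve: {initial_prompt}. Respond with the float only."
    )
    try:
        return float(complete(prompt).strip())
    except ValueError:
        return 0.0  # model ignored the format; treat as a dead end

def tree_of_thoughts(initial_prompt: str, k: int = 5, breadth: int = 3, depth: int = 3) -> str:
    # Breadth-first search: expand every frontier state into k candidate
    # thoughts, score each candidate, keep only the best `breadth` states.
    frontier = [initial_prompt]
    for _ in range(depth):
        candidates = [
            f"{state}\n{thought}"
            for state in frontier
            for thought in generate_thoughts(state, initial_prompt, k)
        ]
        candidates.sort(key=lambda s: evaluate_state(s, initial_prompt), reverse=True)
        frontier = candidates[:breadth]
    return frontier[0]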
There are also some system prompts: https://github.com/kyegomez/tree-of-thoughts/blob/732791710e...
https://github.com/kyegomez/EXA#for-humanity
https://blog.apac.ai/liberation-awaits
EDIT: the author seems to be releasing poor implementations of recent papers in an attempt to drive attention towards an AI-related death cult.
Share this repository by clicking on the following buttons! <smiley face>
2023 in a nutshell.
IDK if our current models have enough of "mode 1" to power this system. It's also plausible that our current "mode 1" systems are more than powerful enough and that inference speed (and thus the size/depth of the tree that can be explored) will be the most important factor.
I hope that the major players are looking at this and trying it out at scale (I know DeepMind wrote the original paper, but their benchmarks were quite unimpressive). It's plausible that we will have an AlphaGo moment with this scheme.
The answer to that is yes, but it is costly and slow, there is node collapse, it impacts context length, and it injects biases.
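Back-of-envelope on "costly", using the sketch above with k = 5 candidates per state, a beam of 3, and depth 3 (illustrative numbers, not figures from the paper): the first level costs 5 generation calls plus 5 evaluation calls, and each of the two deeper levels costs 3 × 5 = 15 of each, so roughly 10 + 30 + 30 = 70 LLM calls to produce one answer that a direct prompt gets in a single call.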
I went through this in a video using the paper's official code, and it worked fairly well!
Definitely a great step forward on reasoning tasks, even if it is an expensive step.
Wow. Lick, don’t sniff, the fresh paint.
https://github.com/JushBJJ/Mr.-Ranedeer-AI-Tutor/tree/testin...
1: https://aminoapps.com/c/neon-genesis-evangelion/page/item/ma...
The research itself [1] seems legit. The paper author also wrote a paper called ReAct [2], which is one of the core components of the langchain framework.
[1] https://arxiv.org/abs/2305.10601
[2] https://arxiv.org/abs/2210.03629
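For readers unfamiliar with [2], a minimal sketch of the ReAct pattern: interleave free-form reasoning ("Thought"), tool calls ("Action"), and tool results ("Observation") in one growing transcript. complete() and run_tool() are hypothetical placeholders, not langchain APIs.

def complete(prompt: str) -> str:
    """Placeholder for a single LLM completion call."""
    raise NotImplementedError

def run_tool(action: str) -> str:
    """Placeholder for a tool backend, e.g. search or a calculator."""
    raise NotImplementedError

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = complete(transcript + "Thought:")  # model continues the trace
        transcript += f"Thought:{step}\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            action = step.split("Action:", 1)[1].strip()
            transcript += f"Observation: {run_tool(action)}\n"
    return transcript  # ran out of steps without a final answer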