# Recursive Reasoner Planner

A fine-tuned version of `meta-llama/Llama-3.1-8B-Instruct` for planning in recursive reasoning systems.

## Purpose
This model serves as a planner that analyzes math problems and decides whether to:
- Decompose the problem into subproblems, or
- Solve directly (atomic)
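A typical decision for a multi-step word problem looks like this (an illustrative shape mirroring the schema in the Usage prompt below, not an actual model generation):

```python
# Illustrative planner decision (assumed example, not a real output):
example_decision = {
    "should_decompose": True,
    "subproblems": [
        "Find how many eggs are left after breakfast and baking",
        "Multiply the leftover eggs by the $2 price to get daily revenue",
    ],
    "plan": "Two sequential calculations: leftover eggs, then revenue",
}
```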
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" requires `accelerate`; drop it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(
    "vkaarti/recursive-reasoner-planner-llama3.1-8b",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("vkaarti/recursive-reasoner-planner-llama3.1-8b")
prompt = '''Analyze this math problem and decide how to solve it:
Problem: Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?
Your task:
1. Decide if this problem should be decomposed into subproblems or solved directly
2. If decomposing: identify 2-3 distinct calculation steps as subproblems
3. Explain your approach briefly
Return ONLY valid JSON:
{
"should_decompose": true/false,
"subproblems": ["step 1", "step 2", ...] or [] if atomic,
"plan": "Brief explanation of approach"
}'''
# Move the inputs to the same device the model was placed on.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
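The decoded string above includes the prompt, so in practice you usually decode only the newly generated tokens and then parse the JSON. A minimal sketch, assuming the model emits a single well-formed JSON object (the substring guard is just a cheap safety net):

```python
import json

# Decode only the tokens generated after the prompt.
generated = outputs[0][inputs["input_ids"].shape[1]:]
completion = tokenizer.decode(generated, skip_special_tokens=True)

# Extract the first {...} span in case the model adds stray text around the JSON.
start, end = completion.find("{"), completion.rfind("}")
decision = json.loads(completion[start:end + 1]) if start != -1 and end > start else None
```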
## Output Format

The model returns a JSON object with three fields:
- `should_decompose`: Boolean indicating whether to break the problem down
- `subproblems`: List of subproblem descriptions (empty list if atomic)
- `plan`: Brief explanation of the approach
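In a recursive reasoning system, this output typically drives a solve loop along these lines. A minimal sketch, where `plan_problem` wraps the generate-and-parse code above and `solve_directly` stands in for whichever solver model you pair with this planner (both helpers are hypothetical):

```python
def solve(problem: str, depth: int = 0, max_depth: int = 3) -> str:
    decision = plan_problem(problem)  # hypothetical wrapper around generate + JSON parse
    if depth >= max_depth or not decision["should_decompose"]:
        # Atomic case: hand the problem straight to the solver model.
        return solve_directly(problem)  # hypothetical solver call
    # Recursive case: solve each subproblem first, then answer the original
    # problem with the partial results available in context.
    partials = [solve(sub, depth + 1, max_depth) for sub in decision["subproblems"]]
    return solve_directly(problem + "\nIntermediate results: " + "; ".join(partials))
```

The depth cap keeps a misbehaving planner from recursing indefinitely; tune it to your workload.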