# FINER-SQL-0.5B-BIRD

A small but capable 0.5 B-parameter Text-to-SQL model fine-tuned from
`griffith-bigdata/Qwen-2.5-Coder-0.5B-SQL-Writer`
with GRPO + the FINER-SQL dense rewards (Memory + Atomic).

✅ 50.85% Execution Accuracy on BIRD Dev (n=30, value-aware voting). Runs on a 4-8 GB GPU.

📄 See other models: https://huggingface.co/collections/griffith-bigdata/finer-sql · 📄 GitHub: https://github.com/thanhdath/finer-sql/tree/main

## FINER-SQL Model Family — Comparison Across All Sizes
| Model | Params | BIRD Dev (n=30, vav) | Spider Dev (n=30, vav, +agg_hint) |
|---|---|---|---|
| FINER-SQL-3B-BIRD | 3 B | 67.54% ✅ | 83.8% |
| FINER-SQL-3B-Spider | 3 B | 63.04% | 85.10% ✅ |
| FINER-SQL-0.5B-BIRD (this model) | 0.5 B | 50.85% ✅ | 68.6% |
| FINER-SQL-0.5B-Spider | 0.5 B | TBD | 75.0% ✅ |
The 0.5 B family demonstrates that GRPO + FINER rewards scale down to deployment-friendly sizes while retaining most of the gain.
## Inference

### Quick start (vLLM)
```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="griffith-bigdata/FINER-SQL-0.5B-BIRD",
    dtype="bfloat16",
    max_model_len=4096,
    gpu_memory_utilization=0.7,
)

system_prompt = """You are a meticulous SQL expert. Generate a single, correct SQL query for the user question and the provided database schema.
Follow this exact response format:
Rules:
- Output exactly one SQL statement.
- The SQL must be executable on SQLite.
- Do not include any explanatory text.
- Output one SQL statement only. Do not include any extra text, tags, or code fences."""

# Draw n=30 candidates at temperature 1.0 for value-aware voting
sampling = SamplingParams(n=30, temperature=1.0, max_tokens=2048)

# schema, question, and evidence are the schema dump, natural-language
# question, and BIRD evidence string for your example (define them first)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": f"Database Schema:\n{schema}\n\nQuestion: {question}\n\nEvidence: {evidence}"},
]

output = llm.chat(messages, sampling)
candidate_sqls = [c.text.split("</think>")[-1].strip() for c in output[0].outputs]
# Apply value-aware voting (vav); see the GitHub repo
```
### Recommended evaluation pipeline
- Generate n=30 candidates with temperature=1.0
- Execute each candidate; group results
- Pick from the largest non-empty success group (value-aware voting, "vav")
- Score with the official BIRD evaluator
This pipeline gives 50.85% execution accuracy with value-aware voting on BIRD Dev (V2 prompts), the best 0.5 B result.
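The voting step above can be sketched in plain Python with `sqlite3`. This is an illustrative sketch, not the repo's implementation: the grouping key (a frozen set of result rows) and the fallback to the first candidate are assumptions.

```python
import sqlite3


def value_aware_vote(candidate_sqls, db_path):
    """Pick a SQL whose execution result is shared by the largest group
    of successfully executing candidates (sketch of 'vav')."""
    groups = {}  # frozen result set -> list of SQLs producing it
    for sql in candidate_sqls:
        try:
            conn = sqlite3.connect(db_path)
            rows = conn.execute(sql).fetchall()
            conn.close()
        except sqlite3.Error:
            continue  # failed candidates never get a vote
        if not rows:
            continue  # only non-empty success groups count
        key = frozenset(rows)  # order-insensitive comparison of results
        groups.setdefault(key, []).append(sql)
    if not groups:
        return candidate_sqls[0]  # assumed fallback when nothing executes
    # largest agreement group wins; return one representative query
    return max(groups.values(), key=len)[0]
```

Grouping by execution result rather than by SQL string is what makes the vote "value-aware": syntactically different queries that return the same rows vote together.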
### Detailed BIRD Dev results (V2 prompts, n=30, vav)
| Difficulty | Count | Execution Accuracy |
|---|---|---|
| Simple | 925 | ~58% |
| Moderate | 464 | ~42% |
| Challenging | 145 | ~38% |
| All | 1534 | 50.85% |
Recall@30: 68.32% (fraction of questions where at least one of the 30 candidates is correct).
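An any-correct rate like Recall@30 reduces to a one-liner over per-question correctness flags; the function name and flag layout here are hypothetical, for illustration only.

```python
def recall_at_k(per_question_correct_flags):
    """Fraction of questions where at least one candidate matched the
    gold execution result (illustrative sketch)."""
    hits = sum(1 for flags in per_question_correct_flags if any(flags))
    return hits / len(per_question_correct_flags)


# three questions, four candidates each (True = candidate matched gold)
flags = [
    [False, True, False, False],   # solved by candidate 2
    [False, False, False, False],  # no candidate correct
    [True, True, False, False],    # solved
]
print(recall_at_k(flags))  # 2 of 3 questions solved, ~0.667
```

The gap between Recall@30 (68.32%) and vav accuracy (50.85%) is the headroom a better selection strategy could recover.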
### Cross-benchmark: this model on Spider Dev (zero-shot)
| Setup | Spider Official EX |
|---|---|
| Default | 68.6% |
| FINER-SQL-0.5B-Spider (specialist) | 75.0% |
For Spider use-cases, the FINER-SQL-0.5B-Spider checkpoint is preferred (+6.4 pp).
## Training
| Parameter | Value |
|---|---|
| Base model | griffith-bigdata/Qwen-2.5-Coder-0.5B-SQL-Writer |
| Algorithm | GRPO |
| Train data | BIRD train (V2 prompts, top-30 GRAST) |
| Total steps | 4000 (this checkpoint = 3000) |
| Learning rate | 8e-6 |
| Num generations per prompt | 32 |
| Gradient accumulation | 32 |
| Max completion length | 2048 |
| Max prompt length | 2048 |
| Temperature (rollout) | 1.0 |
| Selection during eval | vav (value-aware voting) |
| Rewards | Execution + Atomic + Memory + Format |
| Intrinsic Top-K | 20 (ChromaDB) |
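GRPO needs no learned critic: the 32 rollouts per prompt are scored with the reward mix above, then each rollout's advantage is computed relative to its own group. A minimal sketch of that group-relative normalization (illustrative, not the training code):

```python
from statistics import mean, pstdev


def grpo_advantages(group_rewards, eps=1e-6):
    """Group-relative advantages as in GRPO: normalize each rollout's
    reward by its group's mean and std (sketch; eps avoids div-by-zero
    when all rollouts in a group score identically)."""
    mu = mean(group_rewards)
    sigma = pstdev(group_rewards)
    return [(r - mu) / (sigma + eps) for r in group_rewards]


# e.g. summed Execution + Atomic + Memory + Format rewards for 4 rollouts
advs = grpo_advantages([1.0, 0.2, 0.2, 0.0])
# rollouts above the group mean get positive advantage, below get negative
```

Because the dense FINER rewards (Atomic, Memory) differentiate partially correct rollouts, groups rarely collapse to identical rewards, which keeps the advantage signal informative at this small scale.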
## License
Inherits the base model's license (Apache 2.0).
## Citation

```bibtex
@article{finer-sql-2026,
  title  = {FINER-SQL: Fine-grained reasoning rewards for small Text-to-SQL models},
  author = {Thanh Dat and others},
  year   = {2026},
}
```
## Model tree

thanhdath/FINER-SQL-0.5B-BIRD — base model: Qwen/Qwen2.5-0.5B