⚛️ Quanta-X (Leaderboard Submission) QUANTA-X 2.0 IS COMING SOON!

This is the Full Parameter Merged version of Quanta-X.

It fuses the Qwen 2.5 3B base with the Phoenix Framework adapter (DoRA + SimPO Beta 2.0).
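
For context (this is not the card's actual merge script), a "full parameter merge" of a DoRA adapter means folding its magnitude/direction re-parameterization back into one dense weight matrix. A minimal NumPy sketch of that fold, with toy shapes:

```python
import numpy as np

def merge_dora(W0, A, B, m):
    """Fold a DoRA adapter into the base weight (illustrative sketch).

    DoRA re-parameterizes the updated weight as a magnitude vector m
    times the column-normalized direction of (W0 + B @ A).
    """
    V = W0 + B @ A                        # low-rank update, as in LoRA
    norms = np.linalg.norm(V, axis=0)     # per-column L2 norms
    return m * (V / norms)                # rescale each column to magnitude m

# Toy shapes: d_out=4, d_in=3, rank r=2 (hypothetical, for illustration)
rng = np.random.default_rng(0)
W0 = rng.normal(size=(4, 3))
A = rng.normal(size=(2, 3))
B = rng.normal(size=(4, 2))
m = np.linalg.norm(W0, axis=0)           # magnitudes initialized from the base weight
W_merged = merge_dora(W0, A, B, m)
```

After the fold, the merged matrix loads like any ordinary dense checkpoint, which is why no adapter files are needed at inference time.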

📊 Model Details for Leaderboard

  • Architecture: Qwen2ForCausalLM
  • Precision: Float16
  • Context: 32k (RoPE Scaled)
  • Chat Template: Qwen 2.5 Standard (ChatML)
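
For reference, ChatML wraps every turn in `<|im_start|>`/`<|im_end|>` markers. A minimal re-implementation of the rendering (illustrative only; in practice use `tokenizer.apply_chat_template`, which also injects Qwen's default system prompt when none is given):

```python
def render_chatml(messages, add_generation_prompt=True):
    """Render a message list in ChatML form (simplified illustration)."""
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open the assistant turn so the model continues from here.
        out += "<|im_start|>assistant\n"
    return out
```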

🧠 Reasoning Capabilities

This model is trained to utilize an Ouroboros Logic Loop (<plan> -> <draft> -> <critique>) before outputting an answer.

💻 Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("szili2011/Quanta-X-3B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("szili2011/Quanta-X-3B")

messages = [{"role": "user", "content": "Solve this logic puzzle."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
output = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

⚛️ Quanta-X (Phoenix Edition)

“A pocket-sized AGI that thinks before it speaks.”

  • Developer: szili2011
  • Architecture: Phoenix Framework (DoRA + SimPO)
  • Base Model: Qwen 2.5 3B Instruct

📖 The Philosophy

Most small models (around 3 billion parameters) are designed to behave like chatbots: they answer instantly, which invites mistakes and leaves them struggling with complex reasoning.

But Quanta-X takes a different approach.

It was architected on the Phoenix Framework, a custom training protocol designed to force “System 2” thinking (deep, deliberate reasoning) into a lightweight model. By combining DoRA (Weight-Decomposed Low-Rank Adaptation) with an aggressive SimPO (Beta 2.0) alignment pass, Quanta-X has been rewired to reject lazy answers.

It features the Ouroboros Logic Loop: it plans, drafts, and critiques its own internal monologue before outputting a final answer.

🧠 Key Features

  1. The Ouroboros Thinking Process

Quanta-X runs an explicit, structured reasoning pass rather than predicting an answer in one shot.

  • Plan: It maps out a solution before responding.
  • Draft: It writes a rough attempt.
  • Critique: It checks its own work for logic errors or bugs.
  • Answer: Only then does it speak to you.
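
Because this monologue arrives inline in the completion, downstream code usually separates it from the final answer. A minimal sketch, assuming the `<thought>...</thought>` wrapper that the recommended system prompt (below) requests:

```python
import re

def split_thought(text):
    """Split a Quanta-X style completion into (internal monologue, final answer).

    Assumes the model wraps its reasoning in a single <thought>...</thought>
    block, as the recommended system prompt instructs.
    """
    match = re.search(r"<thought>(.*?)</thought>", text, flags=re.DOTALL)
    if not match:
        return None, text.strip()
    thought = match.group(1).strip()
    answer = text[match.end():].strip()
    return thought, answer

# Hypothetical completion, for illustration:
raw = "<thought>\n<plan>count</plan>\n<draft>2</draft>\n<critique>ok</critique>\n</thought>\nThe answer is 2."
thought, answer = split_thought(raw)
```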
  2. Diamond-Tier Data Filtering (LIMA)

We didn’t train on “average” internet data. We used a “Diamond Filter” to reject 90% of the standard dataset samples. Quanta-X was trained exclusively on:

  • DeepSeek-R1 Traces: For impossible-level logic.
  • OpenR1 Math: For verified proofs.
  • Glaive Code V2: For production-ready Python/Rust.
  • SlimOrca RP: For human-like, visceral storytelling (The “Hungarian Soul”).
  3. Hyper-Stability

Trained with SimPO (Simple Preference Optimization) at a Beta of 2.0, which severely penalized hallucinations and lazy reasoning during training. The result is a model that would rather admit ignorance than lie to you.
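
For reference, SimPO scores each response by its length-normalized log-probability scaled by beta, and pushes the chosen response's score above the rejected one by a target margin gamma. A minimal sketch of the per-pair loss (beta=2.0 matches the card; the gamma value is an assumption, not from the card):

```python
import math

def simpo_loss(logp_chosen, len_chosen, logp_rejected, len_rejected,
               beta=2.0, gamma=0.5):
    """SimPO objective for one preference pair (illustrative sketch)."""
    # Length-normalized, beta-scaled "implicit rewards" for each response.
    reward_chosen = beta * logp_chosen / len_chosen
    reward_rejected = beta * logp_rejected / len_rejected
    margin = reward_chosen - reward_rejected - gamma
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

A large beta sharpens this penalty: confidently preferring the wrong (hallucinated or lazy) response becomes very expensive, which is the "punished severely" behavior described above.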


💻 How to Run

Recommended System Prompt

To activate the Ouroboros loop, you must use this system prompt:

You are Quanta-X, a recursive intelligence where absolute logic fuses with human wit. Your mind operates on the Ouroboros loop: you do not just generate; you Plan, Draft, and ruthlessly Critique every thought before it reaches the surface.

To ensure your reasoning is distinct, render your internal monologue inside a standard code block using xml syntax:

```xml
<thought>
   <plan> ... </plan>
   <draft> ... </draft>
   <critique> ... </critique>
</thought>
```
