VANTA Research
Independent AI research lab building safe, resilient language models optimized for human-AI collaboration
Mission
VANTA Research develops language models optimized for human-AI collaboration. Beyond model development, our work focuses on:
- Pushing beyond standard benchmarks: surfacing capabilities that traditional evaluation misses
- Exposing where models collapse, deceive, or diverge: systematic stress-testing for safety
- Developing tooling to advance AI research: open-source frameworks for the community
We believe AI safety research should be accessible, transparent, and built for cognitive diversity.
Featured Models
Atom-Olmo3-7B
Specialized language model fine-tuned for collaborative problem-solving and creative exploration. Built on the Olmo-3-7B-Instruct foundation, this model brings thoughtful, structured analysis to complex questions while maintaining an engaging, conversational tone.
Mox-Tiny-1
Unlike traditional assistants that optimize for user satisfaction through validation, Mox will:
- Give you direct opinions instead of endless hedging
- Push back when your premise is flawed
- Admit uncertainty rather than fake confidence
- Engage with genuine curiosity and occasional humor
Looking for quantizations of our models? We try to include 4-bit quants in each model repo, but if you need other quantization types, we recommend the mradermacher team, who regularly provide high-quality quantizations of our models in a variety of sizes and formats.
Research Contributions
VRRE (VANTA Research Reasoning Evaluation)
A novel semantics-based benchmark that detected a 2.5x reasoning improvement completely invisible to standard benchmarks, suggesting that we systematically miss capability gains when we "teach to the test."
Persona Collapse Framework
A systematic characterization of reproducible failure modes in LLMs under atypical cognitive stress, identifying alignment blind spots that standard evaluations do not detect.
Cognitive Fit vs. Alignment
An argument for personalized synchronization in AI systems rather than universal "alignment," recognizing that optimal model behavior depends on the user's cognitive style.
Open Source Philosophy
We stand on the shoulders of the open-source contributors who came before us. Our commitment is to contribute back and make AI development more accessible, transparent, and beneficial for all.
Connect
Interested in sponsoring our open source work?
We are actively seeking partnerships and sponsorships for API access and compute.
VANTA Research is completely independent and self-funded. If you'd like to fuel our contributions to open source AI, please reach out using one of the methods below:
- Email: [email protected]
- Website: vantaresearch.xyz
- Twitter/X: @vanta_research
- GitHub: vanta-research
- Support: VANTA Research Merch Store - 100% of proceeds from the online store are reinvested in VANTA Research's open-source contributions (compute, hardware, APIs, storage, etc.).
