BLING Models
Small, CPU-based, RAG-optimized, instruction-following 1B-3B parameter models
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-tiny-llama-onnx")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-tiny-llama-onnx")

bling-tiny-llama-onnx is a very small, very fast, fact-based question-answering model designed for retrieval-augmented generation (RAG) with complex business documents, quantized and packaged in ONNX int4 for AI PCs using Intel GPU, CPU, and NPU.
This model is one of the smallest and fastest in the series. For higher accuracy, look at larger models in the BLING/DRAGON series.
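In a RAG workflow, the retrieved passage and the question are combined into a single prompt before being passed to the model. As a minimal sketch, the snippet below builds such a prompt using the "<human>: ... <bot>:" wrapper convention used by llmware's BLING models; the helper function name and the sample passage are illustrative, so verify the exact prompt template against the model card before use.

```python
# Sketch: assembling a RAG-style prompt for a BLING model.
# The "<human>:"/"<bot>:" wrapper is assumed from the BLING prompt
# convention; check the model card for the authoritative template.

def build_rag_prompt(context: str, question: str) -> str:
    """Concatenate a retrieved passage and a question into one prompt."""
    return f"<human>: {context}\n{question}\n<bot>:"

# Hypothetical retrieved passage and question for illustration only.
prompt = build_rag_prompt(
    "The invoice total is $1,250, due on March 15.",
    "What is the invoice total?",
)
print(prompt)
```

The resulting string can then be passed directly to the tokenizer or pipeline shown elsewhere on this card; keeping the prompt assembly in one small function makes it easy to swap templates when moving between models in the series.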
Base model: llmware/bling-tiny-llama-v0
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="llmware/bling-tiny-llama-onnx")